Design and Analysis of Optimal Decoding Models for Brain-Machine Interfaces

61ec514a0604e34a26df096e723512083ffe7865
119 F20101112_AAABSH kim_s_Page_003.txt
82837e41b06b07594d35cbe76473758e
16be5d5dcdc08a28c865fd73af4f5023c759f59c
52461 F20101112_AAABRT kim_s_Page_172.pro
8ee96bacea21cdbfa72fb68ef4ae2098
6990f0e8e3200b7bb8067f8c2235b864a761e867
1090 F20101112_AAABSI kim_s_Page_004.txt
6652a5882da143db1eac55a77fe18067
f9357ae5b7a1a184651313c67dd65986ffa18020
53089 F20101112_AAABRU kim_s_Page_173.pro
e8fb6cfb7249a1695a5ae52a14bdf554
1c5f419009d338058f8871877630f55e89a66c34
2471 F20101112_AAABSJ kim_s_Page_005.txt
738b4c454b762ec3b3715f9900ea98ed
eb5e8aa85f7e881e6c240b158bb0c0fdcef5e843
3548 F20101112_AAABSK kim_s_Page_006.txt
2328281725e6530479cb71834d9f5009
be7bd1ccd5d9c1f00c81fec40ef9793ead3ba3eb
52952 F20101112_AAABRV kim_s_Page_174.pro
2439f32d76bba175083568b1c721809f
e147721a583e680c02e4bd69ad470b6a27e586e5
1368 F20101112_AAABSL kim_s_Page_007.txt
1a2d35ec9ad2e3ef6c0daa9d942dae10
9c335d02b6c2bf109715441a98d47fc8cd17b104
28542 F20101112_AAABRW kim_s_Page_175.pro
5afcbfc49913b4e62ba23311c7ecfae7
366e9e331d80dc8f63ee6ca80a758c19498ef7ef
2104 F20101112_AAABTA kim_s_Page_022.txt
14c65d30ac9d2e25f7cc785b2d1f3943
cf32fda6b8ea0c71daaacafa2323cfc8b0cb862d
2172 F20101112_AAABSM kim_s_Page_008.txt
400db00fba3b6c682cd7fc45e1bc7a26
fa09a6cbe0466401ef642a0a1c582c44253ccec9
51340 F20101112_AAABRX kim_s_Page_176.pro
c067a3a1fcc79b3f641f089a4041dde9
2cd89aae5998f60977d2ffe29fd7fea4bfe3ff34
355 F20101112_AAABTB kim_s_Page_023.txt
013d47ea88226885754778b52b60e3ee
c1ed6c68a33dd5bd0d50b04785d28162e6f7645e
2374 F20101112_AAABSN kim_s_Page_009.txt
91a12721155d38cd3e9e3f7682447c5a
798def5536a0912fddcee603bb7d9a24392f9f8f
56936 F20101112_AAABRY kim_s_Page_177.pro
f14cc4aa8c70687e267f4b621ab1de89
60dd57cc679fb2f351db357f5a646f41aec4d3b2
1760 F20101112_AAABTC kim_s_Page_024.txt
2e3d9ebbebe47bfdfbb7f93e3e21f91a
b4b1bc430cd8adace9ce161a0a49e282fbbc790a
2911 F20101112_AAABSO kim_s_Page_010.txt
f7a2a3f81bbbe5242331a9531a8ba34a
79d3cccf74d3c0ef2a9faefdce63c3ea6a3e1e82
53388 F20101112_AAABRZ kim_s_Page_178.pro
aa9eb24567cb45500db1b02f83164624
a5623df214c9c625e8628082413a30cbea2306a3
23363 F20101112_AAAAQB kim_s_Page_001.jpg
4ff89d7f3bb602c98ae03fae24262c34
b747b35ea0ddd75df948df9364588238e395fe90
2058 F20101112_AAABTD kim_s_Page_025.txt
1210c6feda01c479726a80ef0338e910
7f236ad36fdcd8d7d0f5a841a0ed32f4f3c16fbb
2883 F20101112_AAABSP kim_s_Page_011.txt
9d8058ff4006d33fbc3f7f60362ef17e
d9d2b1290d1be444c6e6b9d1da6e662b417b9d48
10243 F20101112_AAAAQC kim_s_Page_002.jpg
b4211fd1da55a70fe78762891e349de9
06309569e5623136470f2a651aaf7a65987dee34
1444 F20101112_AAABTE kim_s_Page_026.txt
78ac74012b128c36042bbfca252625b0
68c1ab51128ad7b379b1435c5b6573e83b65ed50
935 F20101112_AAABSQ kim_s_Page_012.txt
616d6acb2ffb515728196814afbdad8e
f25a9c12c868030ad3b7c4e5d97d255869684be5
10950 F20101112_AAAAQD kim_s_Page_003.jpg
d8af3e16a6a1792a2793418f23ad1044
90acb60b1be7bc12bb542176b85eb387a7587bc0
1383 F20101112_AAABTF kim_s_Page_027.txt
e31a508f457b3c86347435dbd2714b04
23fb42c781c8a1f1f2c35a0f5b763712e01c7b38
1757 F20101112_AAABSR kim_s_Page_013.txt
be3fef98ad37523b298b0aac7972697c
68eb5e568c778803daceb8d30a8900260f9590c0
42433 F20101112_AAAAQE kim_s_Page_004.jpg
8c973c04a52e6c324c0375247b8f3e78
275555a2426d3dfe89911b15f02680f510515420
1013 F20101112_AAABTG kim_s_Page_028.txt
191ec25458ca84df695a64043057d857
e4b558f45bfb49cfd0130ac542e6322fd9d38362
1202 F20101112_AAABSS kim_s_Page_014.txt
3373501bf50072f24494fab5f87f97a5
e0b191fc55277b82f03dc2cc5dc3024c79461d6b
67073 F20101112_AAAAQF kim_s_Page_005.jpg
0a87886959b7337d8d41050b613a8912
f2981fc29ad31804084776e471223ac6a4cabdb7
1702 F20101112_AAABTH kim_s_Page_029.txt
426807336af2a3fd7bc8377a5d767c1b
8b1cbac758e44ff2b5cfdc2949952c2abf258991
1870 F20101112_AAABST kim_s_Page_015.txt
aeed672b0ee8968736c68e99518d8c55
196a7248babb94c3e292179c0f59213be4476c8f
1933 F20101112_AAABTI kim_s_Page_030.txt
15a8ed3e3ca513146ae655e722ededfa
eca07285df79626ae54c5b2f8a04c939b6d4817d
1737 F20101112_AAABSU kim_s_Page_016.txt
e1a5cdcf45c46dd300d548dd0ab58524
934025ca2aba7caaa47fba98743c56f2e26c85e3
100466 F20101112_AAAAQG kim_s_Page_006.jpg
2901cb2fe9dbe2535ddc6170e5d203e4
2212d3c0dc0e62c16079f33004b1a1de9d02e6f1
951 F20101112_AAABTJ kim_s_Page_031.txt
c26962b7424f34c875546c107e55722e
a351034b435325fe2842011be7eae7723c0342f6
1962 F20101112_AAABSV kim_s_Page_017.txt
34913c168954433f118fd238074357b2
bdd39a2540d532977a74d3a6318ce31e05e4f473
43964 F20101112_AAAAQH kim_s_Page_007.jpg
934b6c9cea595d119e7c9ee9242f47d8
69368a561730e17ec0c11b4599b37d3a1c77c643
917 F20101112_AAABTK kim_s_Page_032.txt
3ddc26d1baba4d3322b215251b4cee2e
447d2e6eefcda2d39f621766a6fd3bfd74d4be8b
70270 F20101112_AAAAQI kim_s_Page_008.jpg
e2e549a66c2495db3d149500fb11c4ad
e820a12120825e65020923973e9cc29ca234f41b
1470 F20101112_AAABTL kim_s_Page_033.txt
9a3c538111f83643b2fd507d8fbd4e0c
6540b05d38f27473adfd1c02a58e66b064b1ef50
2018 F20101112_AAABSW kim_s_Page_018.txt
78d042dd8ae59d27a116fadf6edad8a8
89780d0a47e8533d7d0c218555be64bfefedd77b
80757 F20101112_AAAAQJ kim_s_Page_009.jpg
a24e1bcc4cdf7b4a6c6125fa4159f606
2f8584d8147321210096fca2d4fed859ae34998b
1607 F20101112_AAABTM kim_s_Page_034.txt
a59faa5d80d218dfa5300d33754ee8b0
d290db7e604dbe94f3a7865813e6b17ac05878e2
1845 F20101112_AAABSX kim_s_Page_019.txt
36f6afeb982868e5fb302c5890961590
9f3758fb987623fa03e55d9601dd242ff5818a20
1588 F20101112_AAABUA kim_s_Page_048.txt
b0c716a3622cd2fd06656913238e7c32
a03205e13503ea52959a72470631cb353beab699
101837 F20101112_AAAAQK kim_s_Page_010.jpg
415da513ff45e6212b28f964678800ff
b30372d01d8e85a3a43602e303b2db70f6e58f11
2034 F20101112_AAABTN kim_s_Page_035.txt
cc4e745037b326f0eb23bed43137f290
05dbff0634490f6e1a70c4d85f9189d098fd062b
23732 F20101112_AAAAPW kim_s_Page_018.QC.jpg
0ed434bac345db7302bfca22b06863d3
9d3f11bd0a84f750210bdf8eee15bb61e17b6f21
1894 F20101112_AAABSY kim_s_Page_020.txt
5b81b69fec5287d72af851bb1ad37bc4
7e9ea6303cb022913e8e3d1412ac4d99a02832c1
1763 F20101112_AAABUB kim_s_Page_049.txt
dd7f1bbe07e9713ba837edba36e77d52
a9d32fa7a00c9598d7e37a11f195d6765e92f5a9
100454 F20101112_AAAAQL kim_s_Page_011.jpg
1b68fe7f930833abfc1aeb865575ca6d
7103a92fe34b4cd3b05a276f4c104146ee681aed
2076 F20101112_AAABUC kim_s_Page_050.txt
9d084ade9117e6bd57030f81a7a76193
0702b35c2c439a2bb4acf971e0f88474f0417015
1951 F20101112_AAABTO kim_s_Page_036.txt
6e3f2bf1a42bd9a708358e3f1b3f746d
cd844eacc52491ed63d22c96437f8953842d3543
19629 F20101112_AAAAPX kim_s_Page_080.QC.jpg
49eef92733a99fa6b66a2e175483a4f3
c558c1ad0e8fdd69bc931d3f6d39c2981fee9cf6
2049 F20101112_AAABSZ kim_s_Page_021.txt
e3bcd4667608f6936a50ffa1088ed12d
13ca8e0c82b80b36acc882c9e0a9badd23c353c8
59668 F20101112_AAAARA kim_s_Page_026.jpg
4f436f17bac5fe1d4e2d4f10222d0974
4790dfc6aa9b3fbce39224037e93c7f1726cfad7
39456 F20101112_AAAAQM kim_s_Page_012.jpg
d976a9b38f585ccbc61c15494e628fe6
6c4970f9b3db866ea2b536eafde3946a0aa2b86b
1516 F20101112_AAABUD kim_s_Page_051.txt
03d6c99d0a6a554f1225165c99f2188e
9b83bedde57ba05165b07361eb9c1a9c0571a1f0
1379 F20101112_AAABTP kim_s_Page_037.txt
e948b80134598b36961d62b628a258e8
117b0d549c94fc5d8c52a3ad477cb5e9c41087c0
207734 F20101112_AAAAPY UFE0010077_00001.mets
4f8eccec71f89d4dcdb740c37f4f2777
7414cad3afeda51c15a5f6b11ee32fa40eabd06c
58714 F20101112_AAAARB kim_s_Page_027.jpg
9f83f333b79a142ac90678c6cb7b3f86
ea541c5a85acc37a86198e43ea2fc974a9c9402b
60303 F20101112_AAAAQN kim_s_Page_013.jpg
2a0c3f1c377b2cec16575f85d0d38737
d09e4b40ca0b3f49dd865ea43b17907c2246bbd1
1541 F20101112_AAABUE kim_s_Page_052.txt
b53ad18b3d88358b64d338ff069409b8
00660847b3f118f707965e34157de14c651a955e
1865 F20101112_AAABTQ kim_s_Page_038.txt
7102908ed306bcb51f4776d2e92cc230
6e2afa0038e83e03ebc31bad36e23faee945bc82
62079 F20101112_AAAARC kim_s_Page_028.jpg
c41ae9e4ced27f055ff9885317896128
55fd7a73cdb10c4f62dbaa1e3a621a451b49d0a4
46310 F20101112_AAAAQO kim_s_Page_014.jpg
5cadd9db67f8be2395b1e90b27ed609e
c09c7e05d8ae19f404f6ae3b2cfd037278eecf9b
6574 F20101112_AAACAA kim_s_Page_150thm.jpg
be784ebfa583d08552d6a4b814b1f4ef
e49ad5b4725f07db96f618742e15487def478b26
1363 F20101112_AAABUF kim_s_Page_053.txt
ff8f0044d07fa16afe2ebe28811a82c5
cb72c13f26bcd1f3ebc8b09ebe8f1ed299e00c00
70078 F20101112_AAAARD kim_s_Page_029.jpg
7dfbfb78c8afd2ea40e99517bf2c7545
cb34ef8c6e7fd29c59bef71d839f3f13994be908
65649 F20101112_AAAAQP kim_s_Page_015.jpg
5c72fe5d4f8d00ed5d139d874d5fd926
1166cf42bc14b062845ece23b4047bf28e359e8e
1973 F20101112_AAABTR kim_s_Page_039.txt
a3feb82dd84dd65be17fbbd1d37fa910
3d2f689e6ebcbfb1b8a94028c6f58513d8c7023c
4813 F20101112_AAACAB kim_s_Page_167thm.jpg
7634136938d2ff8a7161b03066dd3e86
63d522e80ed0551c4a8680b6ffa95376cd523db8
1871 F20101112_AAABUG kim_s_Page_054.txt
6f16248fab6d7df7ddcfc7fbb03ccfff
f68b8725ae024d8e1f0e507ec916d2099938423c
69598 F20101112_AAAARE kim_s_Page_030.jpg
7ec58445c07cb1f6ca9f4e597336c57c
7550e46bca62ea0aa5eb717b3cffc472a21b78bb
69434 F20101112_AAAAQQ kim_s_Page_016.jpg
2ca5c555fc118246f96a3d49ee44b6f0
dc7b01daf86234f3bf495d631fe8b2e42367632f
1460 F20101112_AAABTS kim_s_Page_040.txt
b001df38c951f84c0d1431de029b9d2c
34ab5a21fa1dd3b73f3fe06c58aa24da5a10f5ac
20738 F20101112_AAACAC kim_s_Page_110.QC.jpg
7c6c048c96009750339b406cab56d482
c3487f140330c231c8d665f967acc1fe1ba7bba2
1593 F20101112_AAABUH kim_s_Page_055.txt
14af438895ac5324799af35da8c78867
f1ae3777e4597111a372c1eea8ff68061bbd4d1d
43511 F20101112_AAAARF kim_s_Page_031.jpg
92d5fcee0de126a508fdb0f5fe767e35
ef117043aebff3c75e80f3a0989a37a5e5ead44a
68376 F20101112_AAAAQR kim_s_Page_017.jpg
6caeab26074d6c7f71aa461923587c7d
deef657b9d3ccc1ac518b4ea5f2bd22282ad1204
1413 F20101112_AAABTT kim_s_Page_041.txt
04e3d2a50b73f4d482d5c422bc657107
a729c2c41e47ed2b09c7340ba6de894857eb0905
1810 F20101112_AAABUI kim_s_Page_056.txt
0ac3db52ed53e66f7a1657850263fc2a
d389b63c3512ae8f434a9a66e5d363df4245ba11
40660 F20101112_AAAARG kim_s_Page_032.jpg
c33853676627973d988494fa707b9088
c14f19c12789e0ddef876464a2a9e00d0df819d5
73710 F20101112_AAAAQS kim_s_Page_018.jpg
33f19d29afe65383fc28ccc77c0925c6
eaeaa05bd131c6ba65f936c2439f1ba0d879c984
2001 F20101112_AAABTU kim_s_Page_042.txt
e94a381c0e4f6be49e54506641154fd8
36b6e615e7881030d386989693e5c6f5e0fc4c02
22221 F20101112_AAACAD kim_s_Page_090.QC.jpg
209186b19239d219bbd0a1c564041993
75d72c5964a8229a5eb4aa99f9c69a0943ccfac1
1683 F20101112_AAABUJ kim_s_Page_057.txt
fa657e660aea512ec05a8e464903c89e
d3290d55693be832ec0435bd55a6a67bec9c3f51
51378 F20101112_AAAARH kim_s_Page_033.jpg
70286e6e5e41e0ca8d6c1839479862ca
787cf6d4b8570cacdcd559db3834ab5a1793b94c
68856 F20101112_AAAAQT kim_s_Page_019.jpg
235af2f46cef2dbd89cd1833f355d7d8
f806774ed7f837f8c2abc550307ffa037a2144c4
1626 F20101112_AAABTV kim_s_Page_043.txt
c532bf66cd7bd687a8224d2e60dcd52d
67696ce1864228093d1c1f7f1826feb62a7ed6ae
5741 F20101112_AAACAE kim_s_Page_051thm.jpg
6c93afa0ffa7424edbe5ebe80e7afe29
e4713d30b2a3a656f4b3d9bf0911ec4bfaaff563
1584 F20101112_AAABUK kim_s_Page_058.txt
0f7aef6a318d0711918407491630ac12
d08e4faf41937532e11c2739ea30719f63729ec4
60485 F20101112_AAAARI kim_s_Page_034.jpg
8c8d488823d2bd5d3e0c9981c5467dea
fd35d727d362d771f1e0f2c2b2cfe09831e03e00
68103 F20101112_AAAAQU kim_s_Page_020.jpg
8c604dc7dd2f6ba99dff4e045d8a786e
d25045827afd21ecb9f624f877cfa3d42d041c12
912 F20101112_AAABTW kim_s_Page_044.txt
7a458d4e65fc8650c2b42c43604b578c
df95f1589795cd155f57735870d83020562d1453
6629 F20101112_AAACAF kim_s_Page_128thm.jpg
3e070ec8a95af3f6e56ff6f618697e81
52f01a59c86733ff922717b2127ffaee175be79a
1858 F20101112_AAABUL kim_s_Page_059.txt
1e66b91a0f65b716aac1e5eedb7c0c18
1084ee9d0fbe8356d741f86e01b1b0173522e0ac
64148 F20101112_AAAARJ kim_s_Page_035.jpg
dfbdc1c398d52a378e390db6c8a0b087
7ca41e59330c312d808f0f5dd8459bd57bef1d67
22808 F20101112_AAACAG kim_s_Page_025.QC.jpg
4a81f4cab87411dc5e757af1e525c774
347b727c5e56e0f0d12dabebea53ee1fdc988666
1901 F20101112_AAABVA kim_s_Page_074.txt
cfe90e277719f18262bfde0b31dec2d7
fc4d5d0f3950efef2c21264638f53c7d85325b86
1594 F20101112_AAABUM kim_s_Page_060.txt
69c5a09f2537a09c4c7290d7e16ca24a
69b4aa90d5edd00e002e3719e0ba0cf746b9bc93
70443 F20101112_AAAARK kim_s_Page_036.jpg
72c70baafe3effc246a8144d9e13c0bb
83c859803f9f6ac96223340521f41b25646e532d
74043 F20101112_AAAAQV kim_s_Page_021.jpg
f3f3d49bca733489517e8a619a84a2c0
27f44b6b1e9beddb04e090f6ee012494c6b45956
1829 F20101112_AAABTX kim_s_Page_045.txt
c0c989188f205c885d80bf5394a4d7e0
63e29089e431fa887c52b5ebc8e401ac7db2d009
5004 F20101112_AAACAH kim_s_Page_145thm.jpg
57e4dc61a6b3aebfe8faf869f4cb5558
ad7c4178a136d240714e00bdec7d305dab3cb19c
2008 F20101112_AAABVB kim_s_Page_075.txt
161031d2d8dae199145fcf8dc16e5d62
e0c3a5197c7048f333401003978fbec355c133b1
1844 F20101112_AAABUN kim_s_Page_061.txt
978086f6d50bb09cb71685921643776a
49778f67ebc8b12b78aa461b250211f72ad27abc
50421 F20101112_AAAARL kim_s_Page_037.jpg
42e90768bff926e679686440773df9e4
75a17e68d89560adbc6b59b3cec39e5da9111770
73418 F20101112_AAAAQW kim_s_Page_022.jpg
009c67e9e494ec075a3cc9c2eaae0dbc
9db7ab743117af605134ca591a965ffb814b5c3d
1882 F20101112_AAABTY kim_s_Page_046.txt
220e97e8180b025b1ebaa8a5d28714aa
365081e99636582c10dfdee656a291ea30d1e3b4
6784 F20101112_AAACAI kim_s_Page_022thm.jpg
aa9b6c5a96498525b583b85d18b6e745
ff2464d6ecdb38b27777a2025ffbe3eee55be7fd
1783 F20101112_AAABVC kim_s_Page_076.txt
f80cc1e28cfbf79f1266754ca62cc87a
923caf57992e7675a06626e4f161a70f2c5349cc
1510 F20101112_AAABUO kim_s_Page_062.txt
4baad1efb1b21d9b4ab3444082e9c1bd
e4a2fdca5d828a1f11ab177d731167101a97c3dd
57635 F20101112_AAAASA kim_s_Page_052.jpg
afd959122c9ae5a8c1ed1f1dd48c18a1
ee12d724efb1dd2beac72a5ef6aada0041f49935
62459 F20101112_AAAARM kim_s_Page_038.jpg
c1e0c4136dadc57f16aa4626b5ad1275
fe8d82f404ad865d4364e967941a14e28e29f74b
19973 F20101112_AAAAQX kim_s_Page_023.jpg
59080194fba39f69b35021b5c34af8ac
ba8721826df5664e545963f21a60c9bef5c5d356
1797 F20101112_AAABTZ kim_s_Page_047.txt
4fba4b68ebac4bed0cbe48a3b70c5838
9a0670bb1bc16957ae382634d34f0d2ace183f7d
20891 F20101112_AAACAJ kim_s_Page_068.QC.jpg
a496a579958e23f3373da46b66858d8c
54186099e7e896bb8ae34dad1be694813f65a9b2
1531 F20101112_AAABVD kim_s_Page_077.txt
214f71926115b09ca8e4edaa339f9fc3
fdcbefebe7e9a22482ee2af4c48ecbcdf26a0569
F20101112_AAABUP kim_s_Page_063.txt
c2f0af1272df663bbe70a48d75eaefaa
5dc32eb01052a62a99c6c77e7bcd0441828b7e48
54292 F20101112_AAAASB kim_s_Page_053.jpg
fd1c65810368cf7bf121a27260084494
ed073afc3b2ea313259ac50afe3b645e9e41f47d
70061 F20101112_AAAARN kim_s_Page_039.jpg
7f54cd4aee768886c7965c72ed3ea25f
ca0a3adc275fd2cc2c4b0bc6e19caf3bf75c8968
64451 F20101112_AAAAQY kim_s_Page_024.jpg
cb70fe02e8fd4814fafb4f6a43892b1d
83356cdc526e1c22f6e1075e1036ca945f584d09
6193 F20101112_AAACAK kim_s_Page_069thm.jpg
d7d14560cd9d7de557ff79f8ccb8c86e
b4c30ea2c1612bfdacd7d072ea6f0e65ef73501c
F20101112_AAABVE kim_s_Page_078.txt
d55a1f7759064c184a889ed9faf6b988
f96f3bf8a42185f614771cf5a4612df97d414be8
1472 F20101112_AAABUQ kim_s_Page_064.txt
779e80dcd14ce66236065d8cb8dd0244
9707879c7c530f124b1d9ddcdb9291fdb46831ca
75199 F20101112_AAAASC kim_s_Page_054.jpg
0193682b3eebfb1c922d9a93eda79fa8
9d78b106037d3f3111c93a5a7b396f00d48dcdcc
79586 F20101112_AAAARO kim_s_Page_040.jpg
b8965fcee7e20888286a100d669a32ab
7b94a79822db554eb5819be278819daa8f5c4c9a
71473 F20101112_AAAAQZ kim_s_Page_025.jpg
eeaf4dd6c3389b993a9e6e9729475e60
ff48b8ee2ec18a3a038cbd2e49d6913152e8fee9
23806 F20101112_AAACBA kim_s_Page_127.QC.jpg
ed02205e0f355935c2ba70bce7d379b3
3030c3fa348056bb0d7a7f35974367fe5446b38b
24233 F20101112_AAACAL kim_s_Page_049.QC.jpg
1ab1564694eb2b34e2ae1003322984ef
9846f4deb22dbe7fa2f37724ad96655cb422ab38
1577 F20101112_AAABVF kim_s_Page_079.txt
33d80155c30d0631b02616eb3f96e365
7751a552b97b42136f3b552d5894833095f105ac
2191 F20101112_AAABUR kim_s_Page_065.txt
9c55adbcfd850f052b3c828c9e37779f
479ffc96beca1b894ff485637dfb39de9927b40f
80145 F20101112_AAAASD kim_s_Page_055.jpg
eba2b7557ca1ef87f0d6fdae29264ee8
0d90fe323256554e2ebf51cec0339c8896bff77b
70875 F20101112_AAAARP kim_s_Page_041.jpg
54e55cb7d8e797f17a9812b29edcf344
5dc2605613655f6dd28b2f142751c7b54abbc9e4
6136 F20101112_AAACBB kim_s_Page_059thm.jpg
0d165aa6828d8f997f2390235f02a330
b120b73967014f60a5688337aee9eb0a9bab45f4
5252 F20101112_AAACAM kim_s_Page_118thm.jpg
d9f9d115be76df4c8da2819d3dc7664e
4ae4c59c236f566af7013f1ac019e444a1dee4a9
508 F20101112_AAABVG kim_s_Page_080.txt
292b0e797f8fe3f4380b6db8b28edff3
f3fa07d6ab2a38092509ff785d5f555ae0d9de83
1599 F20101112_AAABUS kim_s_Page_066.txt
bb9907592f5e4f346c701b1c646950a5
dbe11e29e0393378b8aa63a7efee9719c90a0a34
66297 F20101112_AAAASE kim_s_Page_056.jpg
62f9662f4cae7a72a7d5f6a515c1742e
17652f3c66d65d6fd15fca7ae659722ed9b1b5d3
71297 F20101112_AAAARQ kim_s_Page_042.jpg
6ad2324afee9d26c99fa738049d07fdb
8e2348dcea862090d2ba8982cd4497eaf16f0f10
23439 F20101112_AAACBC kim_s_Page_150.QC.jpg
378d15dae5eb0e05901f66443e8b0b36
7b658282e130e8b30432c29981dfdee801a4ffc7
20528 F20101112_AAACAN kim_s_Page_155.QC.jpg
8c2abb82017ceecf18edc32f187dba04
0a5c5b59306df192f6190b2a3ff2357075859c54
2045 F20101112_AAABVH kim_s_Page_081.txt
944989d4c37decdd6c5907dfbe6735a9
2a8c4783ba00cfe8339cc39ea036142a29affebd
1743 F20101112_AAABUT kim_s_Page_067.txt
437f44db1b18e54be513a93d295f650a
f60d761304a4f0d2e9684e04702b2dcc6833828e
61521 F20101112_AAAASF kim_s_Page_057.jpg
1a1ed32fbc8cafcdf47d9e6ae8342c1d
4bffc4ff4b0b08674118a482fdc911ade25b4658
56231 F20101112_AAAARR kim_s_Page_043.jpg
4d470a723f0f6f40493291c836605f48
d4d70fd316f6b7ddb00bac66e86c5796ee193d89
5976 F20101112_AAACBD kim_s_Page_076thm.jpg
6512b880cfda9bbd80255c09ec0c0732
e74a7aa182e0f8aa62f0995fa9186abc97b30b04
6362 F20101112_AAACAO kim_s_Page_139thm.jpg
93540fe07efcce22908524009f73d8bc
8c1bf58da42664bfc77603ffff0fe17a3aaa013e
586 F20101112_AAABVI kim_s_Page_082.txt
6a60790d6e98cfa2cd26d0012543cf4e
4df3f7bec5d036d6864d4824ab45d8dc4d7aba6c
1799 F20101112_AAABUU kim_s_Page_068.txt
976d0669e0777058223741f3754a4fcd
c736a9238d179e90810d5d53c544abeba4cee93b
58222 F20101112_AAAASG kim_s_Page_058.jpg
53129e093e02f1ddd2c00151a4389262
494e08566ec71a4d692b1d1e5df93abbeb06e4f4
32836 F20101112_AAAARS kim_s_Page_044.jpg
fc06d453d3f5e2e3abfc20d3307f56c8
3ebc7ab7999249723ebf23fbe7698f14ffb52f09
23412 F20101112_AAACAP kim_s_Page_055.QC.jpg
640a48ac47ca97aab71b4d49c84e4756
0ca839d8cd2089f4fea02bc2de0aeb71bbd30b10
2209 F20101112_AAABVJ kim_s_Page_083.txt
35ac06d6b891da93854c28096e26779e
713d8cfc7d918f6bde7838e6d58a536d683b04ce
1895 F20101112_AAABUV kim_s_Page_069.txt
85c81aee55a48ea5ba28bea12c10cbf9
c8e88b00add4893b9aea8921ec7fb0301d2ec01b
66427 F20101112_AAAASH kim_s_Page_059.jpg
7632ce5ffb0edfa9028bacd99d75a6a9
5ba903a1a4dc95722974e028c366b833dd492ae4
65330 F20101112_AAAART kim_s_Page_045.jpg
c91130f230ca7ff0e8c16a2b79425b17
0a7d48bc48b21796801e7483e431cda3becb88d2
23262 F20101112_AAACBE kim_s_Page_138.QC.jpg
3258a419661d4c71728435e793f9d646
b08256200b3e1f7afc7acf9ddcd0e76070dc3b42
19483 F20101112_AAACAQ kim_s_Page_118.QC.jpg
6b279249248adb8c926c51c29e278835
b52bcec8a053584e201084301a5f9d9443d1e790
2382 F20101112_AAABVK kim_s_Page_084.txt
6b49d1af0fe2dae12121d38281e9e3a1
d1338d76b87831ef9a91f8f059ffc9873343bc84
1144 F20101112_AAABUW kim_s_Page_070.txt
027e5630ae1401f3d14b67538b3160d7
a050c523afaf4c2cfc1bccdd23a84aaf877c1836
61999 F20101112_AAAASI kim_s_Page_060.jpg
ff3f9b3fc20b72df0ec193b03d9b9ee0
f15c6a046798f7e341ad73f14958b0b91d6ee56f
68226 F20101112_AAAARU kim_s_Page_046.jpg
3c9da4d8c5273a3c4546d8a8410e1c67
dfbc758da9c367e31906911602f5e1d031a02d92
24361 F20101112_AAACBF kim_s_Page_177.QC.jpg
ca6498fcc8c5ef8d1b130ace71c5d9a1
da73203175c2da0e4a02779a6c4955284986a35c
22269 F20101112_AAACAR kim_s_Page_117.QC.jpg
ea54c286f908582a4a1c5e7418c3a229
a12bb24477eaae73b3876a96fb20264603ed8375
1660 F20101112_AAABVL kim_s_Page_085.txt
f7f7b1f6fded7c68471a7a535b59c7cd
86e1087486d9816a3e07d675a10a1995fb4469bf
1720 F20101112_AAABUX kim_s_Page_071.txt
9d668ad723a59f90434a370b5998e9e2
e042471bea8a60b9664ba539fe2675d9f595e847
61986 F20101112_AAAASJ kim_s_Page_061.jpg
ed13749d46d10cd30b832253cf6d2df3
9fd4bb57ae9d75cd814aa703c533e058a428f956
61758 F20101112_AAAARV kim_s_Page_047.jpg
31abd341d22e2da4c2421d3d423bab6a
a422cad77913fafe697b1df21a3fcb4dbbfbaf1e
5756 F20101112_AAACBG kim_s_Page_079thm.jpg
76dae71b98cc72a8b06e3aab32b94b30
e7c783e06a41ed6476fe0a1a90f294239153c407
19052 F20101112_AAACAS kim_s_Page_043.QC.jpg
00325622008536a4f1fc8793b5e45073
7fd1f761113c78c2c4424b061c9fbe2d37a329ea
F20101112_AAABWA kim_s_Page_100.txt
4dc96e33e5524b1df5343027054906eb
44d42f33f5781f2d363b9a9e35adcd18db43e373
796 F20101112_AAABVM kim_s_Page_086.txt
35496c55d04a6a3a5e0316f5c0f1b85b
75340e7f4e25c081edd2f95b618aff9b17e00d0b
54886 F20101112_AAAASK kim_s_Page_062.jpg
38bfa39ed4d4146bb1610cc5f5ba9175
4e55c5f3bae525625fa394b6aeb01bdc512a388a
4969 F20101112_AAACBH kim_s_Page_033thm.jpg
410eb7c8b6f41de103d313467bc0097e
61ef5740066ee3bc9588bbeb017f5a6257870c0b
18834 F20101112_AAACAT kim_s_Page_052.QC.jpg
4e4687aeacf58fc7fa19bcea8a81cbb9
b406d7403760dc54beba464fe3130666f85206bb
F20101112_AAABWB kim_s_Page_101.txt
63275a4679f26409bcbef84d503666ec
130dde338ccb90d2fcb7bec61fd4e23f3a934464
F20101112_AAABVN kim_s_Page_087.txt
1387f150be4fe6402198990ff730a05b
0a286ee77835524063d0757cffa0c22b85807ffc
1590 F20101112_AAABUY kim_s_Page_072.txt
c56fe1b879010ec9491b0c43fd22d8e5
bdad629c571d5ba07fcc98676d3ba848ed0b1c12
65816 F20101112_AAAASL kim_s_Page_063.jpg
48f1aeafa6155e2b07039ce2d430def6
feadddb5ee1945b3ddb52bbded890e2be70d1782
58790 F20101112_AAAARW kim_s_Page_048.jpg
cde8b4290fdbd9d0e3c62fcf9f256e92
ed792d1e58ed52d73fef5b9c7504f41ffcbeb02e
6685 F20101112_AAACBI kim_s_Page_173thm.jpg
026e421a800a03c9bb54419169866fd0
3373872d9f05ed644b426af86793a947a6c65c0a
5586 F20101112_AAACAU kim_s_Page_077thm.jpg
23ce9c0e352a30fff742a52f67da404b
0be14e7ba631158be28ab7178a6476a92a3e33d7
1868 F20101112_AAABWC kim_s_Page_102.txt
3c1192d04581abc9de179d195e0e34da
4511f8faa520d422c4f33fc8754ac15002d87a9a
2014 F20101112_AAABVO kim_s_Page_088.txt
4544968b1aae45e2d4e8875b701eff91
3c3e0876b5d4fd7ee91d157fa501168bc7559221
1992 F20101112_AAABUZ kim_s_Page_073.txt
322f430467cf419a80777c8f0d86598a
48236fd331bb0b0947b371a7a93597cbb28eeeec
66903 F20101112_AAAASM kim_s_Page_064.jpg
95e40f8d1c21ac26a4e514e32d4ffc7b
988bb0cbdf1d4337f5ea9c8bc6aa6d76840ca98f
81024 F20101112_AAAARX kim_s_Page_049.jpg
9287d9c650c78e1a1853ab2567daddfb
d3ef033a0cdf61dac50795233c0878f2fd10cdc4
65565 F20101112_AAAATA kim_s_Page_078.jpg
4902b70f731510705bc1a2bb66ef870d
95b8c7e525f0ea78256497d4d0bdeaeada59980c
19788 F20101112_AAACBJ kim_s_Page_079.QC.jpg
6aad7df640841f1896fe27165d04e53d
8729aace505bfba949caba661ef3bfe2ed9c9b84
5896 F20101112_AAACAV kim_s_Page_045thm.jpg
9c123ff1b19fb9f8401074ee3d57d8f0
e2646029d92e65fcd954fa9623fe869e5a37292e
1971 F20101112_AAABWD kim_s_Page_103.txt
42e5cf7e5d78ae21ac5cafb30d97824b
67ef8b15edcf94aa998aeca6cebaad92afa52c6f
2021 F20101112_AAABVP kim_s_Page_089.txt
c28b9919da764ad0b3a9f7edb3f90143
76b081217c2cba15e383d5136635e1d736a68e76
74195 F20101112_AAAASN kim_s_Page_065.jpg
3a69358dba7a9709d28ee69cf48ebf2d
520d7e66ce33f9dccb466a51b4ba06a0d97be46a
74466 F20101112_AAAARY kim_s_Page_050.jpg
6a0a869e452ab9a2c7de19e056b6cf77
57fdc8087b5df41821264e20bc3b7423652f4a22
65231 F20101112_AAAATB kim_s_Page_079.jpg
ff5d5e2987107024c9f637afb869c079
4a4a1e770098a8e67978abc564476fca8e28ad5f
20305 F20101112_AAACBK kim_s_Page_085.QC.jpg
3c21ee757e07169978eec294437c91e3
3eb7442cbf2db9ff4b062010875afb2efb476021
21883 F20101112_AAACAW kim_s_Page_140.QC.jpg
5acee63dd078b20e142c9fc33c88a8c6
5c109252268947d1081e0233776992670a311afc
2036 F20101112_AAABWE kim_s_Page_104.txt
ff7cf4a38f4be5c99914fa2fb46cc12c
d7949911aaf8c444e3475fdf601af01d59de18e2
1939 F20101112_AAABVQ kim_s_Page_090.txt
572749aa5a1a2ff16da177be0d17462e
32b797dd36d6e465ff698163c711c0b9f4b03cf3
60890 F20101112_AAAASO kim_s_Page_066.jpg
e5a7c6a6e5f99575019d1182cc6b1e25
d4a386137bb0300c8a71b0a7f208afb9c8babd52
61468 F20101112_AAAARZ kim_s_Page_051.jpg
317e19e7cbdea3b00a0337848d385532
3a03d7ac926ce92fc1789589e7c9247a2d157d6e
66829 F20101112_AAAATC kim_s_Page_080.jpg
41b94118d4fc311e55fa8e02205990d4
3c93f9299d346cbcf043fd77b9d3ad7786237913
6064 F20101112_AAACCA kim_s_Page_060thm.jpg
01541a287f96401f4f3dcc5431e7f749
a2397796423bcdd4d19554e62da1895ee54ccc53
5874 F20101112_AAACBL kim_s_Page_168thm.jpg
92351e70fb345993b33d434647d654dc
5621b56d672f288c118022dbe63fe4b0f3bbec8a
6616 F20101112_AAACAX kim_s_Page_127thm.jpg
6de0e369201bc168860a6297377bc04f
89125e38ee5efaa5fad423da558ba89277427117
2006 F20101112_AAABWF kim_s_Page_105.txt
86c36c387a95aace207048decf46f150
51fd02af38ad719cbd422d45cd100c44a4c0e506
1423 F20101112_AAABVR kim_s_Page_091.txt
4919fb9b43529bc97da5c974a066fbda
6a0c1125b2ccf308f83a3ff0783505e601983538
63007 F20101112_AAAASP kim_s_Page_067.jpg
e9b7f6ea440dcc71ac13731a52ce81ad
717b13c88c98407b0c74e2e2227e4866893d0195
72990 F20101112_AAAATD kim_s_Page_081.jpg
9ef550d274cfda62b177e05fd9939d2e
f654538431eec8db84332b4ebf9cc82aee9f55eb
4967 F20101112_AAACCB kim_s_Page_082thm.jpg
4e66470749950bd1fda3330706daf509
a40f5c2d1f926932c2db5c13d49caca9295e5351
5880 F20101112_AAACBM kim_s_Page_067thm.jpg
555d5e72410dc5865f5ac4534eb3d489
4baf4732a1a85e6cec030d74bfe8985c39ffed28
6388 F20101112_AAACAY kim_s_Page_046thm.jpg
197c8a4f493fd0cd983de5f7b191c182
6c13cd58fe0b254deb268fb5e179fc891f109cfe
2060 F20101112_AAABWG kim_s_Page_106.txt
f501fff1fa465e8bf163136d48bbcd3c
df3299db0161ddd5c72be226a0cdd2a943b123da
1651 F20101112_AAABVS kim_s_Page_092.txt
d0e2241cbd103635f9aece1a3e51f38b
043e95690194d881d60ca759917345d1debe71b7
64944 F20101112_AAAASQ kim_s_Page_068.jpg
32ab634e48ad2b804e9a924480823c2c
b104e656260e4d69edee9910b8ed93605f8da26f
54995 F20101112_AAAATE kim_s_Page_082.jpg
cc5f5fc75153e03b55e36c02fdc4548b
06fd5678d1a1230ef541bc893f46195b2ccef493
5860 F20101112_AAACCC kim_s_Page_123thm.jpg
1d7566457eb1cb1b399b89b305e4da50
377b0eba8aa8a9feb2cd85f02fec17378855337c
22658 F20101112_AAACBN kim_s_Page_114.QC.jpg
e0f10e0547ad329024459d4c529da563
70205d38af34d19cb4a34773bd28af805600df4f
6155 F20101112_AAACAZ kim_s_Page_020thm.jpg
df564fdb03cd5c006fb17066f16f738c
07038df951688f63981914804965c21b33e736fd
1441 F20101112_AAABWH kim_s_Page_107.txt
e2b84b5f9cfa59216e0b9703ff5f1ea2
4552000024be9bcb5679018914cb65d0c92347ad
1300 F20101112_AAABVT kim_s_Page_093.txt
73d70cdbd7e8acec917d649258fdc48a
38ffdfa965da34f7d40e62da13d3469c89c12093
68029 F20101112_AAAASR kim_s_Page_069.jpg
b24f860f160bbffe1f584c2eae809277
d1dbc1ee66259f7c671ed4c6ab36307eae287e05
86983 F20101112_AAAATF kim_s_Page_083.jpg
e06faef4fdc071a0bf18db7da61d2daa
15c2d6fa540354bb3ae60b12d5eeb0f9764a15e7
5453 F20101112_AAACCD kim_s_Page_070thm.jpg
7f59c9e109b7c5a72cd7f60b225294f0
c0e3e4ba0eb3fba4ea612fa4f45e223f8a9184af
23484 F20101112_AAACBO kim_s_Page_135.QC.jpg
36e6a36c0da9c361f1e4f2e674c46d15
ef9595236cc279626d41079afa30535b0493a2f7
1842 F20101112_AAABWI kim_s_Page_108.txt
3cf7ea01be8e4fb1eee64b15fedfd06c
94351edb84c8134602884689c4f4a5435233b716
F20101112_AAABVU kim_s_Page_094.txt
d7d7c33a576271814e6fa34c11fa221b
03e7f7bd43b4592bf99265e06cf56e443b4a2f83
58590 F20101112_AAAASS kim_s_Page_070.jpg
2e5e2dd113f6a93f2c625c11e3ea200e
bac33440d4242f252b1fe154218fe6b9f5920879
74687 F20101112_AAAATG kim_s_Page_084.jpg
48c45461638c1d18721debf4096b3f70
7fdc3d72a67628cbcdb5206ef3e41ddcd11d71b4
24869 F20101112_AAACCE kim_s_Page_179.QC.jpg
90b593559802e50d03bec7395904dbc3
e6832f7ac202c98782d0f3f1447c620fed1a5ed8
6428 F20101112_AAACBP kim_s_Page_042thm.jpg
f9b3f717e1726e5b90670d51bc0f0b6d
279bf78601c3fba8a36b0806a864903fbcd0f8fc
1059 F20101112_AAABWJ kim_s_Page_109.txt
22acca20b4aaa332d6917a915bc0946a
dc3dfc40e737ce7968c4c4c759dbea3ea5b7074b
2149 F20101112_AAABVV kim_s_Page_095.txt
a971a0bc132140ec9023e6a8ced22a7b
3d5fdf6e9d5933f8f2f76f870da37eb35910c5bc
59898 F20101112_AAAAST kim_s_Page_071.jpg
6fd8e91f85c817bf33b4958105d12b5d
dec879e19a8bf5c7cb6ac01e18ba160c3cf9f5c0
63915 F20101112_AAAATH kim_s_Page_085.jpg
6a9c6c5f425888b9049561812d3b2507
6511dfb26f2a41c429f22922bcafaba8162113f0
14017 F20101112_AAACBQ kim_s_Page_004.QC.jpg
a0469fe3c5557171f1e74909a2408c2f
9260cc67727aa2ba1c495bb3621907f6b9e5be24
1753 F20101112_AAABWK kim_s_Page_110.txt
22381334f75232581bdcde2c3f6d148e
2a59570096f8caf41caf74269e4f040c93572bd2
66685 F20101112_AAAAWM kim_s_Page_168.jpg
af53ef45333d855ce15e9ed5a85e99fe
d076d6731b39574c53f7dd73b907f50ceacd1d1e
64399 F20101112_AAAAVY kim_s_Page_154.jpg
dce482ad2489e5cca5f36e97917f0975
84c9a3b83e1e7270288714f1208a9e71983badba
108775 F20101112_AAABCG kim_s_Page_135.jp2
74c8caefc2d03b72957f7baf9282f51a
50ff37c5ccfe2703c803c847e5e1c6b03706edd1
6162 F20101112_AAACFJ kim_s_Page_117thm.jpg
f8e9e950c0b1a571e7c95db67fc810bb
20bc6fd3eae5c4a33987253b62f6ea250de6f885
22701 F20101112_AAACEV kim_s_Page_164.QC.jpg
d6d40cd1de29f9eeecad2b5d69de4084
c196acba8ee47b25b0cd66fbcbd840337929422c
25608 F20101112_AAABZP kim_s_Page_181.QC.jpg
9ef9bd7fe0cdd9abc516c7e58d7e188d
7533898c57e6d6d8a6ff57952bb17598eb091ee2
51899 F20101112_AAAAXB kim_s_Page_183.jpg
b249c2aa467b0ead05bb4aa2d0efcde5
5e01651f88b166de96028c66027b8878dc334db6
85505 F20101112_AAABBT kim_s_Page_122.jp2
d47f08df88ace235b332af6ae6b7c3d9
60d98c4974e7008c44eaaf2f1a3e15107f8707c1
69692 F20101112_AAAAWN kim_s_Page_169.jpg
7405dd3cfd59046a2a3eb69fb39ae79f
b0e8a8ccaa73a18d8b335b7d67085829688f6293
63565 F20101112_AAAAVZ kim_s_Page_155.jpg
1aa59a28605bbd0470802c39eb5cc7a8
69b0ad4e76cf5d6b3d5f9799e438f549c09b4676
92901 F20101112_AAABCH kim_s_Page_136.jp2
17bd7797a7582631eba0d27cd8a576bd
c2a3ba8a8ac76f2ca6046c604ce867ba17d03108
6354 F20101112_AAACFK kim_s_Page_169thm.jpg
18fb89d83552fa32bc3ba2658ce6edd8
92a5193d79a04ecb76734e7f50bfbe774bd60453
19120 F20101112_AAACEW kim_s_Page_070.QC.jpg
9c9ef273430bfb479770f36f7991d334
0b3a794f9696fa513c15f4a1d92b5e750335b73c
5204 F20101112_AAABZQ kim_s_Page_151thm.jpg
5e6b455da9df9659ea51abc8366c734e
ceafe1bb4d0498acd0350799fd2c6099d2abe319
23580 F20101112_AAAAXC kim_s_Page_001.jp2
8c796aa2472a6840419ae26bb68f6481
ffa38055344d6651a6383c0650944598ed1e1836
90264 F20101112_AAABBU kim_s_Page_123.jp2
ce5303d556e960f5a984cd9b842f283a
a728fc41bac869984841a5797211ed0db18b305b
70952 F20101112_AAAAWO kim_s_Page_170.jpg
1bbd6fcd3a7a71b765c2b213f01fa00a
e4639d2cde01ae19ff0ed4eb858e6a969c8e8977
101176 F20101112_AAABCI kim_s_Page_137.jp2
934285f7f114f242fcc2957fcf4a5947
d54e1f692dcb70f514a1c53c56eaf0b36c710e1f
20324 F20101112_AAACGA kim_s_Page_051.QC.jpg
4366c9810c93c229f7da612866d0ec10
3cff1288b7456ea62c9414f9ff2d05b6c30d2f62
5658 F20101112_AAACFL kim_s_Page_028thm.jpg
0015cad6230fb8fbeba643c155f31436
c49b013a53a78a45d45b9b86e9f07cc5f02b46ae
6199 F20101112_AAACEX kim_s_Page_056thm.jpg
572dec23f012f273bc3af163ae096aaa
7285ed9068b4e1f29269f9073b50b785adc1912f
22437 F20101112_AAABZR kim_s_Page_009.QC.jpg
ee3f0e3d61c6172732910e13e849b4e7
5581532d8a75249d55f7a12823a167371d59f516
5445 F20101112_AAAAXD kim_s_Page_002.jp2
21c0b5c25c70238833e7a28bd653c82e
45d97a93dbb772f76dfa14a911ec0adce5b85507
95608 F20101112_AAABBV kim_s_Page_124.jp2
ffa6c4f4bfa7b47df3e78bd8fea23869
65d926c207ec6c89fd06987f4ccdb9bd4394ad6e
71187 F20101112_AAAAWP kim_s_Page_171.jpg
e59b8e1880aaa98a2c775c71cd5deedb
92db7a765168f3143f3ba343152d0341aec689e6
107884 F20101112_AAABCJ kim_s_Page_138.jp2
b245e5b84c8c12df36f950860315fdf9
ba62aeed13b03d2dbf24ee8b1027d8508c0dffcf
5903 F20101112_AAACGB kim_s_Page_103thm.jpg
dd19cb7db59171a8197297723e8d9706
079bbddfa05593020a7c91de1e2579605d349436
19874 F20101112_AAACFM kim_s_Page_038.QC.jpg
a56b8037cbec9e4d8842bcad84fe7397
274151db2a12de05218daa011e426cd1c02b6f8a
24597 F20101112_AAACEY kim_s_Page_173.QC.jpg
414160fdf7913e460a193b3a0f6715f2
c7eff746973800d95337cd9e78fdeadb4123ee61
5967 F20101112_AAABZS kim_s_Page_068thm.jpg
bed74378a3d69f924f312f2ca4a0b499
0fd95edee23e9458a5ffcfc85587f9ef4403c3af
6627 F20101112_AAAAXE kim_s_Page_003.jp2
fdf72b7d150f23f9348635d27353f722
b4936a6cffb3eec6cc634af2eea8b40656646aa8
F20101112_AAABBW kim_s_Page_125.jp2
5facb5a5e8b76ce4bb5b1b10070bd81e
2d183efc3f91b98bbcedca0cd070e271ef501b64
73933 F20101112_AAAAWQ kim_s_Page_172.jpg
4dc6b540a5212faeaf519d3207341931
319daa5f13c1cae83593d10117d40f55866ff762
101900 F20101112_AAABCK kim_s_Page_139.jp2
94c3be96815c9cc80658dda6609d467e
ba6a4acc86a6d79cb9bd0b0f43cc97f5b834cc06
4354 F20101112_AAACGC kim_s_Page_014thm.jpg
918e71133d87d382723e89182d973102
0d0083b8f1300c1ffcce1739f54e2a6f8e66bae9
19125 F20101112_AAACFN kim_s_Page_148.QC.jpg
a79bf89af92480768f3a8bd9e507795c
4b21fdb5753d6f9cce7805a6419a30564ff7d458
22886 F20101112_AAACEZ kim_s_Page_108.QC.jpg
902d7d3f874561d628e5a4979e4d01be
055650e53d492f46307cb3ad4bbe185245371437
6568 F20101112_AAABZT kim_s_Page_084thm.jpg
54b95e039c48f3bfcd369d33d88dc205
0f7f53a32ef22790a7cddcdf01f21e04ff53717b
57546 F20101112_AAAAXF kim_s_Page_004.jp2
161770859ee6326840770dc050b4e64b
0bd90d106e4897ac1effed067d609d875acfed8d
106774 F20101112_AAABBX kim_s_Page_126.jp2
6b63b16b78581cc5602bbaa3b928b440
1cd634d4a66af5043c57f32d55ee77e88fde285a
75020 F20101112_AAAAWR kim_s_Page_173.jpg
dfb8e338a2360a6c7fa6f93de0bb1415
b0de5d819363be88f5b29e35f5ef27102f3ee43a
95077 F20101112_AAABDA kim_s_Page_155.jp2
99677ef8c2a391bbadb2ce586070c7f3
172c80c269d8b6e1dfdbec25b856e7acc854bc78
98360 F20101112_AAABCL kim_s_Page_140.jp2
0f0e3a10b953653daf53e0de35930fa1
caefcb3c4641bf6da7b7ba860ab7c760c9ffbdd1
5863 F20101112_AAACGD kim_s_Page_024thm.jpg
bff153296275e34c9ebe20b346059b05
6c792feb361921147abe949d56d731090b0bfe24
3282 F20101112_AAACFO kim_s_Page_002.QC.jpg
6ed0fd39af0d58d351adcc381f9b0afd
0bd24b23c24afbf045444415dfbc0001bca3a152
6591 F20101112_AAABZU kim_s_Page_125thm.jpg
e3f6e29f99bba5595ed9fa376ef1d22f
8c3cc5c8e38d9129df87e4f4f0e22d81d54a88ed
1051985 F20101112_AAAAXG kim_s_Page_005.jp2
cadcf408b997491ecdc09749f810fdc2
918fdbc4cb96ce90cf23419d6c8853707df4b0e9
107228 F20101112_AAABBY kim_s_Page_127.jp2
9aafdf10abdb82321b22d5122303e36f
d70925ec8265bf04483585e6b165b2bd7f5facdd
74941 F20101112_AAAAWS kim_s_Page_174.jpg
af15eb9557ab22aef321abf252084c33
4a1066e4dc9c0554ffc97ad5c5af2096908a3661
629170 F20101112_AAABDB kim_s_Page_156.jp2
a198093944d73353ea7c78d81df65d05
da46a9919201fcc51949aad327a8b1562bc7ff63
93258 F20101112_AAABCM kim_s_Page_141.jp2
9562b75b79fa074995fd5da2c7bc2c33
1f99be9f773845d5bf65d2aae74ca9667e443e7b
6029 F20101112_AAACGE kim_s_Page_154thm.jpg
53110d893032728cbb8f8c4a989a8e5f
470b00d28821adc662a5f85178ff3a9a2c12ed32
24330 F20101112_AAACFP kim_s_Page_174.QC.jpg
831eace3939c0339a2cacbee48581aef
1bef45372eb15b32e704ee18cb94e59b49029a42
20917 F20101112_AAABZV kim_s_Page_076.QC.jpg
b244447f3bb1bb1902568d101a016976
eb2969899dc970a1e0b80f2aff1be5da87a09964
1051977 F20101112_AAAAXH kim_s_Page_006.jp2
9c27cdd7b0c62fe5cdb51ef10e98d86f
f413f3af5e26b69df43e3adc51467ea3c8b2ed6a
1051959 F20101112_AAABBZ kim_s_Page_128.jp2
28c3bc26279be3b4b60635e9b26cd9f8
f3f6553503ead587ca1e481fef6349353d0d8e30
44194 F20101112_AAAAWT kim_s_Page_175.jpg
210b178df5ae2eb0798b112cd33638d0
433bca939e067ad047437ce22462800dfde5d982
993795 F20101112_AAABDC kim_s_Page_157.jp2
dc6c45b49baec6be10865e0bb5d6da90
4664b56392ba361412c07e72934c14225ade5371
94833 F20101112_AAABCN kim_s_Page_142.jp2
85d8bf15dd005c4fa702e214c85c309e
b4227990c825e4dc483321a2a4b43fa39f8363a9
26090 F20101112_AAACGF kim_s_Page_006.QC.jpg
d29cba30dc4a5b079a751fd2e0bf5bef
c2ebc75e35d61ab1bf8452f7bbb68bc6a2d6cd04
23478 F20101112_AAACFQ kim_s_Page_102.QC.jpg
385998734b7b12daaaf12c35102e9bbf
9998b0eba2841a85202b2f1769f0295664c50615
22427 F20101112_AAABZW kim_s_Page_020.QC.jpg
c829a397ec88e2117748e36863bd888a
35833fd61d9cd6acf15efa710bb2981cfa18833c
1051972 F20101112_AAAAXI kim_s_Page_007.jp2
c9f8d31d052e982dc65908e8bace28b3
814573f9f2693be1c7a2c2244af0424efdd5af5b
71143 F20101112_AAAAWU kim_s_Page_176.jpg
ec9c1b5811c1cc15fa17668775e27871
0e21ddd31dba518f8f40a44ca5b1b754ffccb2c7
958134 F20101112_AAABDD kim_s_Page_158.jp2
dec6bbff00b2b8d05b7b9afa8d26a540
3ee6980e0f0671828c0667e1384cfc9b1cc82ac3
113306 F20101112_AAABCO kim_s_Page_143.jp2
b663455e1f3507f5749eb6b89b190aa3
5752449769eef14588904555052cc14ccaab5618
2275 F20101112_AAACGG kim_s_Page_023thm.jpg
30ca7c80251c59986f5493d9c0a89b57
946e869e95b28b52c4affccbf584dc08fcaa285b
28026 F20101112_AAACFR kim_s_Page_011.QC.jpg
1a4af11d39760af422727ae1b529d5a5
68405f419aa3f9c2481024d04987b754c408b448
1389 F20101112_AAABZX kim_s_Page_003thm.jpg
3fdf4face3f549c4938e2acda1aec8fd
358ba3b1978dfaeb316d76b70437bc4bb5296eae
1051978 F20101112_AAAAXJ kim_s_Page_008.jp2
6943695f84547e075312802624119e3c
377dee14a79c8d2c8ea3520d3a0a2d8b24a7cf60
85412 F20101112_AAAAWV kim_s_Page_177.jpg
d4616daefefb523f887484723bd04b1e
2663b5c6b2c996e8e3416eb5efa74c8df30effdd
888593 F20101112_AAABDE kim_s_Page_159.jp2
78fe7e35948fb487e95d431c76098cec
8e85be1e740368f7be83a9ed723b81168eea7d92
107889 F20101112_AAABCP kim_s_Page_144.jp2
cf318a1bb64f23120ea39f51b9003a37
4dd66966297341b1b48f40a58239cae5b17ff4c8
F20101112_AAACGH kim_s_Page_025thm.jpg
627fed9ee7d48e5afd8020db03e8a03f
0bf9d57d7641a2cc617b505c85696f3ad912d79e
6082 F20101112_AAACFS kim_s_Page_140thm.jpg
220c2fcf61bcf4fb58254b1091ace198
04da6502c9159ef1c3df220d80e670600e11bf71
20747 F20101112_AAABZY kim_s_Page_149.QC.jpg
4cb38451bc801321a8e74042573d0dd5
017ac65f8a36fb783711bb006f7f6a87298a09c5
1051982 F20101112_AAAAXK kim_s_Page_009.jp2
48da0dc5384dbecd50f577daaf002fdb
62ef5cf1374bf86ee5ffd4008da135d59686f291
80493 F20101112_AAAAWW kim_s_Page_178.jpg
361bec1945b82a1bf6be4f21ac52455a
16893d0e401c3e7822ea42c674e635d702fd26f0
728691 F20101112_AAABDF kim_s_Page_160.jp2
aa2df0bf1b2721581b7505d5c895b519
fd354a9e250fca2a69f5caf3c514186f6fd029e1
68567 F20101112_AAABCQ kim_s_Page_145.jp2
81310e867594239fb6650c384bd77f19
4200913fa93458d13a3f0b59f77b96701a16ef9f
5545 F20101112_AAACGI kim_s_Page_120thm.jpg
fac8b534a2d849311f9a3d5afd98130c
594f589bc5c6417d7a957a7f54fa4237bff6e04f
19834 F20101112_AAACFT kim_s_Page_158.QC.jpg
a956f41de56fae06b93f4b879d03db78
5fed84b9ca74725941bdd20e155cddd7ab8944ec
6295 F20101112_AAABZZ kim_s_Page_016thm.jpg
1177ad8055e93bb8ecd5214ea97f9c6a
b8cc367fa88df42daac4e9b67d3db56746f7f644
1051983 F20101112_AAAAXL kim_s_Page_010.jp2
972cb587480e2d194de8f07db6dce00f
82f0b99d5bbca5883b131b71b0bf7a553ed544f6
87693 F20101112_AAAAWX kim_s_Page_179.jpg
1feca4de5bd011b31cd0e5bb6b9599b9
3bd4807c6e980c19a97a542a30a261540c750463
104723 F20101112_AAABDG kim_s_Page_161.jp2
6f723ab795ed294b34334e62411eaef3
942abfaa953bbda68f820c36b66759f17ef5791b
111496 F20101112_AAABCR kim_s_Page_146.jp2
18a2411cc7d7e523e7830679d58eb5dd
e40ab21bd54b63f9e746466bdfc3da0937234009
6506 F20101112_AAACFU kim_s_Page_104thm.jpg
47e04aad0dd2c44c371e834d738f43cf
13e927ff4355818eb3f2bcccbfb55c7b822aec61
1051976 F20101112_AAAAXM kim_s_Page_011.jp2
6c1602cf91f9841c51cbcc570aa17134
b2bc1f6c130245d17c00ccdf8908caa83db8a69f
86535 F20101112_AAAAWY kim_s_Page_180.jpg
f8e97a07c7d032a2b59721cae0f0066b
c768878c917967b97f6a65e5ba55cc53fb7dc72c
102587 F20101112_AAAAYA kim_s_Page_025.jp2
0c2102f2137a40494701df7805985072
8ec5b1ff90645f0d8c69759a6097fe2ad7eec6bd
108087 F20101112_AAABCS kim_s_Page_147.jp2
44db200623787d8ab06ae7c3d8cf61fd
bc8f03bd37733e2f6899a5186416ba793ac27aa7
20558 F20101112_AAACGJ kim_s_Page_061.QC.jpg
e3521a093c56122e88c874fecec5d8c3
fa3bbf33f3037e3610df5b8509ff80327696c146
6351 F20101112_AAACFV kim_s_Page_126thm.jpg
8aad4e8af5d60db945f4fa935c118fb0
baa843650a2c3f6a4408dba66a60ea13c37fc2fc
1009958 F20101112_AAAAXN kim_s_Page_012.jp2
6d8a220d2387c3c644afb98fa696b531
e618e1645bf1b8c74bd6464755cd7f8f0e47998e
90762 F20101112_AAAAWZ kim_s_Page_181.jpg
dbebd326e1f495f506f4c3a9623559b5
1ecb0f0c4ab74308e6e76bb10d900a7db53d022f
831580 F20101112_AAABDH kim_s_Page_162.jp2
c1ea581e945840d181b3e88056a841a2
5b746b36ab648348818ac144df6e37d5c0c5a520
822993 F20101112_AAAAYB kim_s_Page_026.jp2
dcd6c5e4b874d2d1ff864ce42b481bd6
b5f68150bb630a611a7da6695b2113bf3657438f
87613 F20101112_AAABCT kim_s_Page_148.jp2
5eb588ba02fa1f139402a8508a40ff05
ca2d6f819b259c312da61ec6c8a58f84264091ac
19271 F20101112_AAACGK kim_s_Page_013.QC.jpg
71b41d4b3a215ccc961635fd1dd7bd32
3bbd5cd082eb4749f737d8b0e72d660fbdfea92f
5250 F20101112_AAACFW kim_s_Page_130thm.jpg
77ae0fea216cdcb1c415a29efe246b1a
67ae59000ad03711be5decbb3eaf4d4cd9ad482b
86823 F20101112_AAAAXO kim_s_Page_013.jp2
d813c27d165d1c5de3dae4e667a4b173
217ac8502f25ad35c7eea9bccbead8d429796d4a
109794 F20101112_AAABDI kim_s_Page_163.jp2
17194b132dce37700c17e2fe724c6fef
d002bb5d49a18e1053305ce5f174d0e5af8a772a
825997 F20101112_AAAAYC kim_s_Page_027.jp2
d7f52a2f7e5e093cf82aee0919fafa84
3058023ca68a479884c4d2d00d2e65d9d72a6870
893428 F20101112_AAABCU kim_s_Page_149.jp2
26136868bdf19ce7cbdff44e15657be8
3123a2576bb1429201504b9ec4cb897f1a34fa7b
15737 F20101112_AAACHA kim_s_Page_145.QC.jpg
55605e3755fda2cfcb067eca28baf4d5
ffa648c8215e1886dc61619e37f27d749eb4ac85
F20101112_AAACGL kim_s_Page_161thm.jpg
03effd285b26e42dbaa86cf1ce431823
bd6b606570ed6fd90d00ae6af8386dc6aeced37e
5517 F20101112_AAACFX kim_s_Page_034thm.jpg
a59ad9e49082693374c5c6c705f596a8
b8e3eb0389aaffd56801a675103dded6b9a0d350
67575 F20101112_AAAAXP kim_s_Page_014.jp2
9644936adadadfd94e514d42e60ee0b9
94ba3f9fb8aaa976aa4d45af6feee874d2754366
107314 F20101112_AAABDJ kim_s_Page_164.jp2
eeb193e63ee8ed47d94f3e70d27fcde8
6606b06ef36de2e52e0f184325a09cfe9631a4e7
1051903 F20101112_AAAAYD kim_s_Page_028.jp2
891107e12811a502c84ef89ab0e89ab6
019baa25462fdcfb8457801bde197568b56d0a07
107844 F20101112_AAABCV kim_s_Page_150.jp2
bd675fa808d0203293bfcc7b10c11aac
5555709028e4296d332350895cbe7138a18d8872
5437 F20101112_AAACHB kim_s_Page_122thm.jpg
09da4c8992fecb990d610f18d7c020e2
3183baaafc1e1bd863c2be8e10264b8644899158
6505 F20101112_AAACGM kim_s_Page_105thm.jpg
c556f57987e5d8a34c297d300c56bd14
a2e2b575ecc396159d6c57e970d91cffb00e9698
23827 F20101112_AAACFY kim_s_Page_105.QC.jpg
9266fae7edfb8a58f60a44dbf987b992
6abe06202180a9921125d3412dce2f5db131924a
98616 F20101112_AAAAXQ kim_s_Page_015.jp2
821f15332f9e8e729f2dcbf3c629516d
ed11ea1f1a434f3918f6567dfa508722704f2e41
845657 F20101112_AAABDK kim_s_Page_165.jp2
71357170f9b83830c0e2de6ebd5d4b19
b69bc6968fc71474267cdb0d572300ca15e7ac3a
977854 F20101112_AAAAYE kim_s_Page_029.jp2
05f8bd4e9c4ff2e94c0ef5e5823c252b
44e3af8fcf7dbd5ec1fe6c15b4cdda1651852cb3
776453 F20101112_AAABCW kim_s_Page_151.jp2
6d94327884ff8f198a226a603dced723
80f9239beadeb921c46334ba11b988b44634a84e
6634 F20101112_AAACHC kim_s_Page_075thm.jpg
e64d24f72161bcf5e530aca131d17862
66abba9d25dfa6c357f2f1a5f1755354acc14673
22955 F20101112_AAACGN kim_s_Page_121.QC.jpg
e50a65e70faf0c31e8f75a23307617ac
d5f2aaf2dde4e688b005efe5c4e809982e930594
20984 F20101112_AAACFZ kim_s_Page_142.QC.jpg
5caf69969e9ee7bf0f77a4fcb0ee4a56
48c95ed4ef150abece63d37b3c4e7b0d3d748ea0
954141 F20101112_AAAAXR kim_s_Page_016.jp2
dec3ea464af719634f0aabc6b1241582
b0c8d77c2a0f7cc628b2189a69f8769687e6945a
136128 F20101112_AAABEA kim_s_Page_181.jp2
80861bb6cdb7e7072a9bfbba519ae840
f2d939fda93551756b1ce7c99c634dce57194184
106377 F20101112_AAABDL kim_s_Page_166.jp2
28a53c6f15c2d4bc1918be08dd2c936b
28fce20bfc0593afda2591994d8e11717b6b9e93
976266 F20101112_AAAAYF kim_s_Page_030.jp2
0561a8182e63afcf23d48456e0155330
9e9ee6cfc6db56d8e9f319f1a07455daa9431e1e
111313 F20101112_AAABCX kim_s_Page_152.jp2
111af1929e4fa60d9e5964406164b57c
5daac3fd9727e70af66a8481b8343e536289a1e1
19417 F20101112_AAACHD kim_s_Page_066.QC.jpg
52eeb40ba6eda9923918599f2ef12256
d0275f1b677542cda14bb7f603008b74a0835d9c
20503 F20101112_AAACGO kim_s_Page_119.QC.jpg
7485795ee62b8811887d54d5641deabf
d4dfe6fac2d9c6d07b5968f7b1738bab620dd2ee
103133 F20101112_AAAAXS kim_s_Page_017.jp2
a4a0b4000ff829bf9e30873e0e5272c6
4292bd77cb9ebff818c649972560fc8e08adddf6
127259 F20101112_AAABEB kim_s_Page_182.jp2
cbdef2ea826a7348bf8953e421f8887e
14704bdcaee32cb2b7eca0dd42cae147ecc6c91f
805817 F20101112_AAABDM kim_s_Page_167.jp2
4713aa407d0d3c49c8affaa2115b10b8
44040d6f4731059f10b670babe6a2aa882291e8c
544843 F20101112_AAAAYG kim_s_Page_031.jp2
fef368acedd7ce18b380bff33d300b69
be3789a60ed9a5cb0f97988a010003f9ea7f7fa8
F20101112_AAABCY kim_s_Page_153.jp2
b94e0185bccacd9d89ab0ad12dde46ad
3e489debc7a64b6b949cc8d0f3e57fd4cecd3fba
6626 F20101112_AAACHE kim_s_Page_163thm.jpg
691a035ff1cd87dadbbef4bdd2045e0b
6602fcbb9cb7a8fbdb8532477312173f0237e5fa
F20101112_AAACGP kim_s_Page_048thm.jpg
4aa9dd9b261fb7a3ee10e34ae8a778ac
1369f3485c84d8fa27c54c7d02c047a94a74f4eb
111068 F20101112_AAAAXT kim_s_Page_018.jp2
c4e9013c65eaf042d87101ce490ee97e
2c827cd7c27acd190156df5273b048251d617705
73754 F20101112_AAABEC kim_s_Page_183.jp2
195f66e982a0d4dc2f7662baa2e3b379
9c5c8f8ada8efca29c286936b0f2eb8eaa7f4951
98355 F20101112_AAABDN kim_s_Page_168.jp2
ffe9d5109ed60ccbdada74557280a3dc
00efe927b634104d9da4673fcaa75b6a843eee15
501647 F20101112_AAAAYH kim_s_Page_032.jp2
b5e424382e19ec0ca14825ce9d1c28b3
d4c50d63666d48c167b5038833859ce5c8102770
98487 F20101112_AAABCZ kim_s_Page_154.jp2
b76ce0d5177b19a44db6a9004afb6c1c
072b1efd1d45474e62de6f6ee0be76166c912ea7
21537 F20101112_AAACHF kim_s_Page_087.QC.jpg
74e4da42d86b6acd82099544f54e71f2
e2caddd8e62d3843e8229709ae6b1a6b1f3bdc63
23490 F20101112_AAACGQ kim_s_Page_144.QC.jpg
9b48fb7a243aaddb6540c699be5dd367
5bf0d4d87d0fa5365de932c0e1537984de0c2d89
102245 F20101112_AAAAXU kim_s_Page_019.jp2
a333e52d911c1d930bb06ea4416ce6b8
f128c434398b60092c696779158fb888d8a1d114
F20101112_AAABED kim_s_Page_001.tif
facc582a5c19bd5e39e4f150670e7ae6
0cf2f6abbfc225ff9fa0d3489fe24b76626f3372
106924 F20101112_AAABDO kim_s_Page_169.jp2
1ea77df7c3b9980c872971d6b009ad47
1ae5f2a49f0e8369e3c92558c094920889695bc3
74099 F20101112_AAAAYI kim_s_Page_033.jp2
b3b618fc14c3f86c4d584feff235da14
ba264a84e13d91043b7e4bd9be92d7e16823b732
23287 F20101112_AAACHG kim_s_Page_042.QC.jpg
8af1723e8c9d34f95d726a3fa163c6ac
7c061da18787e61ee49b0f5a2a31a5dc7b7a3ebc
6073 F20101112_AAACGR kim_s_Page_133thm.jpg
3c55a9fb2f951e4e796084682bceae43
ea74a921eef74d0751d525252c0275b8d254944a
101645 F20101112_AAAAXV kim_s_Page_020.jp2
43b11fb55e43741363e3e15675de3b31
cf6e78f0efedc5c71906c014574234201aa97d7c
F20101112_AAABEE kim_s_Page_002.tif
c4043492217a0f00dc7a09e631837f7d
ccf3aefb9e6bc36db906c1a0ef515909896aee10
108222 F20101112_AAABDP kim_s_Page_170.jp2
c3e8fbc2a4a00c83fecc89701d32f0ea
90d06c9eca9af54f5baea9e6867eb1b61d5bb99f
88967 F20101112_AAAAYJ kim_s_Page_034.jp2
0f417b36eecc3aed82800849e2489857
7e5a48e730dddfccff75752392824d77c8fe8eb6
6189 F20101112_AAACHH kim_s_Page_100thm.jpg
a51a4d62991a7bb95b17c83b8b839196
daa6985ace5238f2342c8719ce438ac0063abb6b
22825 F20101112_AAACGS kim_s_Page_157.QC.jpg
458a76cf2d523015056689e807be9d72
2b7afc51c013ac95c89f8e24f49271be1aa931dd
111101 F20101112_AAAAXW kim_s_Page_021.jp2
ca28e474409cdcadf611ab27e70f9095
fc3b65bc248c2c4ab3e3002daede01b49f93e8bb
F20101112_AAABEF kim_s_Page_003.tif
b0e48cb3d557adf5f593dff9aa3702c0
bacd9c2157af44e510c9f9f3e792ef368a5bad64
107137 F20101112_AAABDQ kim_s_Page_171.jp2
90ef1bfa0fa3b7780da5ef76bb8b3919
a5751ca7ab5062a14f00b6e74916369f923336f7
101132 F20101112_AAAAYK kim_s_Page_035.jp2
a4a40d54623ed8feb3cdfed91d23f99a
bccda8606a399e1488a24bbda023238e20d35cb6
22686 F20101112_AAACHI kim_s_Page_084.QC.jpg
d168c3b5abd45d2e115a2688d96bb981
e731396c45f9aa81dfef54c6727ad4cf1064e0b2
4444 F20101112_AAACGT kim_s_Page_156thm.jpg
00bc513da500f3acf8240789c17670bd
e3288d3d835e1c019ac002aa6e37e6dd22f1e186
112040 F20101112_AAAAXX kim_s_Page_022.jp2
af908896d3eb40f1ca16ee9360375ed8
a69e260cfa437bdd4eb2339eefac15d559f01234
F20101112_AAABEG kim_s_Page_004.tif
96152e2c473fa5784ef0724ca3405e14
d03fc5d3bc1f02679bdced020dd5c3f342acce78
112318 F20101112_AAABDR kim_s_Page_172.jp2
11eb4a2773513b34c2a1abfaef0566f7
68fe2455c3ff850618023994a3de2f128fd5e620
108273 F20101112_AAAAYL kim_s_Page_036.jp2
96216131aed324710506a0e713fd218a
3cad49e9af9d7d45d44b82d85ad0d3f6c02a0aec
23184 F20101112_AAACHJ kim_s_Page_036.QC.jpg
d2ef0f2f8eac8a734eed845d1e612f46
22c94aece1184340e6b676d95f1ca845d09e01f6
F20101112_AAACGU kim_s_Page_114thm.jpg
1d1c28f5ef04762a84761b74917490a1
2cf15602a3b1159386dc2bc638b5ea2698e271f7
22711 F20101112_AAAAXY kim_s_Page_023.jp2
ed6ed70be960d26ddfccb35e002c26e4
d2ca3aeb75922f9220c36d00eaf0e450fc9bb130
F20101112_AAABEH kim_s_Page_005.tif
b4183e6958919792c8a4eb90572cb221
1b5e7015c44f713314095b75573a206c678a78a1
795622 F20101112_AAAAZA kim_s_Page_051.jp2
6276f16ae9d1710cb6a1f98ab6a6fcea
fea94288721f355e8f465d6be610d1480fd379c8
114526 F20101112_AAABDS kim_s_Page_173.jp2
f06f39fd4e21c70c2ed2198feee7196b
2bacdfdf13fc55babb71f59db047be4413351e5e
75420 F20101112_AAAAYM kim_s_Page_037.jp2
b51436611de13d44517e017c86daa1ee
8e95eb1476b34d6e6f22e1f63ef737de198f4c17
23275 F20101112_AAACGV kim_s_Page_171.QC.jpg
b187692a3d8d578b4379aab629540856
392099512c5091d71e010c2dcc970074e808163f
95068 F20101112_AAAAXZ kim_s_Page_024.jp2
537e8eec14164a2112293f3089c5d79b
6bad747fee08ae085cf4c7cbe76c8465ce913b47
83901 F20101112_AAAAZB kim_s_Page_052.jp2
118a30b71a48e62bf1e382e266beb284
8bd04e7a5cbaa00f9e414d7aa31c224fa8bbfa99
113281 F20101112_AAABDT kim_s_Page_174.jp2
aaba24af02bf5efd4c5cc8eb6df6862c
426d1144b68621ec9e7dbd53e92cc4709ca3e7ea
91404 F20101112_AAAAYN kim_s_Page_038.jp2
b90b3eea512d07d30583d651b86a3ec7
b19fa22d06f7ad196369de7250c4af0518fe8463
5953 F20101112_AAACHK kim_s_Page_015thm.jpg
51537ad51dbf08ddeb4fc9b3ad0cd761
0da99d28c69e3f8c82ae3bdc5327a6666bc41e68
F20101112_AAACGW kim_s_Page_006thm.jpg
1798527ff6357579479cffa3ef31d017
23bd6041e374773d7eea1376a6017b3c299e744a
F20101112_AAABEI kim_s_Page_006.tif
461ecab794378d6e65498f57b5e87ec3
5209875753b49127b9afe2ccb0d570b1e37ffb10
75244 F20101112_AAAAZC kim_s_Page_053.jp2
de895612de181ee315b64ea260437698
e48ec2ff4f8d59f78ccfae107e502253eefa512f
63264 F20101112_AAABDU kim_s_Page_175.jp2
0e490d9fc9b489d2b1fbaf07222f8343
993dde52cced4a13ef0c04608e5fbba8437db184
105491 F20101112_AAAAYO kim_s_Page_039.jp2
6ffb423f4820f55c6c899ffe8973b004
507cacefc007b0adb5482a7360dc14f086774daf
6504 F20101112_AAACIA kim_s_Page_083thm.jpg
a62a2a5e32e7c770ad97b29585c09172
74a7721d8a25418446bbe1ee84b3c0a5c31be76b
5904 F20101112_AAACHL kim_s_Page_142thm.jpg
4d58f06dc2f0e7d7adacbbcc4b8d4172
fc4f3d90c15e45c608f2febdca2988cc9e124aea
21035 F20101112_AAACGX kim_s_Page_057.QC.jpg
fe74f7a68f9204ad5fc86d2c0fd5bf7d
4a798bcf312d6e6ccbc4850b2f2a68981b384b21
F20101112_AAABEJ kim_s_Page_007.tif
1d3854f67c07082a98a03410f00ae598
7064e4b50a210f55a4f003982b9c6e457043c3be
1051355 F20101112_AAAAZD kim_s_Page_054.jp2
481e7071c2f675e98cedad34ffc3e286
71b10ca3f7b0092d699ae0c63c0c91ea35078149
108780 F20101112_AAABDV kim_s_Page_176.jp2
a4c5903e1f92285e7db8dc77aeade30b
b6e88f5524771d62233000c989bbf3a730108407
1051956 F20101112_AAAAYP kim_s_Page_040.jp2
9b5e37e7d28970c0d9be67c882289350
92eaec28cb58c040cf494682745bc984c1eb7c3a
F20101112_AAACIB kim_s_Page_009thm.jpg
d6e179ee642fd11b161e8644db6fb590
3441b572855756622d20e967fe3ea2b12a12e9fe
18236 F20101112_AAACHM kim_s_Page_053.QC.jpg
b03efba4689df40b1c9b215a11589b4e
6b9627e300d2460b71f0752c7a8e5a4ecf7c7e61
5601 F20101112_AAACGY kim_s_Page_008thm.jpg
3319c87eaef2f2d0bc1a93fa205b5355
7ccc1f69da89180195239e458754f3ccde46cdec
F20101112_AAABEK kim_s_Page_008.tif
c0e0d262bba1a5972fe4a5ab1b11b730
c0e4390ff0f5586c917273ccca913ca070a1530c
1051980 F20101112_AAAAZE kim_s_Page_055.jp2
6753df5a9aabd9a7ba56cb64651545bb
70bc470c5dd12437d7999830a31200140fc94b90
122236 F20101112_AAABDW kim_s_Page_177.jp2
25f837b4ef9a5aad9e18b80b9553b9d1
c2a002e25425e37cedb48aa0a8d0044fdfbd22a5
946984 F20101112_AAAAYQ kim_s_Page_041.jp2
8c2aa8891b09c2a0541f5398a123af2c
e5aaebc1e24a27f7bcc36a65f380c5a0f9394bc6
20235 F20101112_AAACIC kim_s_Page_123.QC.jpg
719a00734bce8e28042269f80311fe2b
833c40d21b287930d850cdb860785d64498e9928
6050 F20101112_AAACHN kim_s_Page_047thm.jpg
2a96f764f0a997704ccf45ac217f4466
e94bd321d795f0d7a1b100b4ea571355a5551a7c
23642 F20101112_AAACGZ kim_s_Page_088.QC.jpg
da1bcb007d6a7fbdb72884de67ea9c1d
3f19d7568741e2ea53ae610a1ff0e4b60205ad82
F20101112_AAABEL kim_s_Page_009.tif
cccb9c2c0286f8cc8057749138c13e33
5d380f31731bd4490a511b4365d9d95a5b53a128
99933 F20101112_AAAAZF kim_s_Page_056.jp2
c1e8d8869c72e67528670e2e2d41e5c0
f6c55ba3fa4813405a44224e8f433106366481e0
114665 F20101112_AAABDX kim_s_Page_178.jp2
f4dd3b4db9b5ae607b5a243b82f4739d
6cbd0bf92c98bb7d9ae157980be0642ec7d05ee6
106783 F20101112_AAAAYR kim_s_Page_042.jp2
9ffcdd4a176aed50dd8a99ef405a6e01
b50f5ced6938d54d14cb3b62b9b190308f24cb0b
F20101112_AAABFA kim_s_Page_024.tif
2ef2c83432b98b386948997f5cd043b7
8753f8bbcf0be30fff97b94bd7758f4221020b5a
6610 F20101112_AAACID kim_s_Page_172thm.jpg
0e178a493d65c017c3133fdcbd0b3d2e
7d3474f50eb686c621e62f81bdd643c66f81be8b
3741 F20101112_AAACHO kim_s_Page_044thm.jpg
c972f3fd9dec582aeeff4f75a01da079
a36015cc31506109e25bc0797c091002f8a34bb7
F20101112_AAABEM kim_s_Page_010.tif
bcaf286d406b2959ee87d0883194a125
a42947230c85c45d21f610ac9a07a07a112fc1d1
91912 F20101112_AAAAZG kim_s_Page_057.jp2
51741ef2a6015c67e73eff2029a7075b
5aded739f30235fb5df3efcd23db318ed6481cb5
129998 F20101112_AAABDY kim_s_Page_179.jp2
a466096507fae715a6df86993a8424dc
8566c5fa6057d97486fa873561d85c27c53c7ef1
85586 F20101112_AAAAYS kim_s_Page_043.jp2
c7bd86209b2f5ce48ddedcd2f38e6941
6f38ce3add351bd601893f0c1857f7d7406ee350
F20101112_AAABFB kim_s_Page_025.tif
9c5cba653afeb0e0eab93da4a727abe9
71c4557847f51710b58b3e6ccad7a4c4358403f7
F20101112_AAACIE kim_s_Page_089.QC.jpg
b0ad2fd2b311453df4f64b2b9795f1ea
0f36e47713914fc258e9ea753f4d13790123a8c5
11182 F20101112_AAACHP kim_s_Page_007.QC.jpg
59e6f23ff67482acdc15504d25197415
82fb8aedec84edad51535c7ae27732c43fe1b024
F20101112_AAABEN kim_s_Page_011.tif
130cf9f4afb57bcc0753d7a59074dbfe
edc1fdff159614fdb16c832fca97308f0e3c2991
86317 F20101112_AAAAZH kim_s_Page_058.jp2
0637001c25fa08c61ba1607ad34c8090
3982a5a4e4ea683ff4220646423e00e3af08d896
126255 F20101112_AAABDZ kim_s_Page_180.jp2
f20f450d96ca82b58facbf945efe3352
cbab08023a0171be94b9f64cd6f7a6faa4ee5d41
45863 F20101112_AAAAYT kim_s_Page_044.jp2
ebdaebf9169f9e93c4681374437a4c12
4d7413a648685c513a04589467f9a784e808c9a4
F20101112_AAABFC kim_s_Page_026.tif
8fc0ba8e53bef748d93c65d3d23684fc
d6af97939a4f441d69fdd100ef821c709d5b6a1d
18324 F20101112_AAACIF kim_s_Page_130.QC.jpg
cbde920bdc0238f27d9efc9f45f6d7bf
7d02727e0ee635d06860b02fc84cc3a08385cf7a
21959 F20101112_AAACHQ kim_s_Page_137.QC.jpg
ca069c16b60a4b4ea93ed69193b49da5
63a833851e99fffcba81f9ae66a3b33bf0a5c5d6
F20101112_AAABEO kim_s_Page_012.tif
21e3350f38568d782902a77973b857c8
7e81ba7e6bfa123dc1c1321272d523efabf327ae
98510 F20101112_AAAAZI kim_s_Page_059.jp2
119e5fd4edbf97d7dac6ea6bcc0f6fc4
ef03e1ebc2269644db26fc9427d3d3a9812d53c4
95828 F20101112_AAAAYU kim_s_Page_045.jp2
b028fdea978da038dbe067118d7ea114
75ec775785500061637a78849bb1e9540818b959
F20101112_AAABFD kim_s_Page_027.tif
eca331f49b6bcf1e8c9d48fc72089541
46c89c02505f52ac1c6f34274faf4d45388809f2
22468 F20101112_AAACIG kim_s_Page_125.QC.jpg
58b6875933f3d7f374c37d43904b55e6
c4549359ee79206c2b5c7f75d0c8b6e9e4a7e183
6232 F20101112_AAACHR kim_s_Page_164thm.jpg
53bd18435d6f54a7337a7bb73a86fdbc
6e3074d739480ab9fd78833bb6eeff3a36ec28c0
F20101112_AAABEP kim_s_Page_013.tif
31aaaf106bbce7c8f69a676277d1f1ef
77630340cb32e2f435285ebe8478f37ee6ef6ac1
89097 F20101112_AAAAZJ kim_s_Page_060.jp2
b5b25693920c5a9762839f2d87faebf1
9d135e95e332e21ba3e3e00e9c8dda8540da143c
103969 F20101112_AAAAYV kim_s_Page_046.jp2
01fe09b45c629a0e379c1270c959e8e6
73656bb2526aff122bb503f9269a86f50f7b7957
F20101112_AAABFE kim_s_Page_028.tif
17347d5f6b9390943eaa5687b71d094f
81582ef3ad87a78e79b900df9090c41a5f2cf42b
23451 F20101112_AAACIH kim_s_Page_147.QC.jpg
c1fdf5977936a401b7c4f8aa47975c2d
f9a7be3f708e00c67e11803a2dd461399456cce9
18358 F20101112_AAACHS kim_s_Page_095.QC.jpg
066c7d3da1bcbd3f15ab9ee7715e086f
57752c8962142038173e71d1cff0c3a565c1a4ba
F20101112_AAABEQ kim_s_Page_014.tif
2168cbfe67e11a1186c64ffd92dc9b2b
dfa797209dbceac80435cfb545055de5564d0f95
93411 F20101112_AAAAZK kim_s_Page_061.jp2
67e55c1ef6a929c38bdf22907bd4670f
9fed483f3b4941bc8818d2c915b597349e353c7a
93372 F20101112_AAAAYW kim_s_Page_047.jp2
ea112aeb7552d84237a8e28248cd225d
ed141092844b16b2c3e6e3dcb1aca6c73e49df6d
F20101112_AAABFF kim_s_Page_029.tif
7ccb923fae25470c73537c127a915fcd
b4761f353029f395512d72d39438a1bcdb6fb7da
27037 F20101112_AAACII kim_s_Page_010.QC.jpg
279dbb0db492100020fe855e21a90608
9f5dbbc1f9af2b8f24a3b332988c305ed0c7237e
17170 F20101112_AAACHT kim_s_Page_183.QC.jpg
a6fb0b4ced5bd4f02738b59df4112935
35662594c2cdfe9c256de924490c70ba608070d0
F20101112_AAABER kim_s_Page_015.tif
81abce592a1fa75e8296efad1d24d1ac
63e29105cdfe364375801788c344e63a89005048



PAGE 1

DESIGN AND ANALYSIS OF OPTIMAL DECODING MODELS FOR BRAIN-MACHINE INTERFACES By SUNG-PHIL KIM A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2005

PAGE 2

Copyright 2005 by Sung-Phil Kim

PAGE 3

This document is dedicated to my mother and my wife.

PAGE 4

iv ACKNOWLEDGMENTS I would like to thank God for His endless love in giving me the best at all times in my life. I would also like to thank my mother, brothers, and my wife for their love, support, and solid belief in me. I would like to thank Dr. Jose C. Principe for his untiring support, advice, and guidance. I will never forget how his inspiration made me think as a researcher. I am very grateful to Dr. John G. Harris, Dr. Michael C. Nechyba, Dr. Karl Gugel, and Dr. Mark C. K. Yang for their support and advice on the brain-machine interface research. I am also exceptionally grateful to Dr. Justin C. Sanchez, Dr. Yadunandana N. Rao, Dr. Deniz Erdogmus, and Shalom Darmanjian for their sincere support and collaboration. I must acknowledge Dr. Miguel A. L. Nicolelis and Dr. Jose M. Carmena for the opportunity to conduct this research with their support. Final thanks go to Yuseok Ko, Jeongho Cho, and Dongho Han, who have always helped and encouraged me.

PAGE 5

v TABLE OF CONTENTS page ACKNOWLEDGMENTS.........................................iv LIST OF TABLES.........................................viii LIST OF FIGURES.........................................ix ABSTRACT.........................................xiii CHAPTER 1 INTRODUCTION.........................................1 Review of BMI Signal Processing.........................................3 Approaches.........................................6 Outline.........................................8 2 EXPERIMENTAL SETUPS FOR BRAIN-MACHINE INTERFACES.........................................10 Recording of Electrical Activity of Neuronal Ensembles.........................................10 Behavioral Tasks.........................................11 Properties of Data.........................................12 Neuronal Firing Patterns.........................................13 Hand Movements.........................................16 3 LINEAR MODELING.........................................19 Linear Modeling for BMIs.........................................19 The Wiener Filter.........................................23 Stochastic Gradient Learning.........................................27 Other Linear Modeling.........................................28 4 REGULARIZED LINEAR MODELING.........................................31 Dimension Reduction Using Subspace Projection.........................................31 A Hybrid Subspace Projection.........................................32 Design of a Decoding Model Using the Subspace Wiener Filter.........................................35 Parsimonious Modeling in Time Using the Gamma Filter.........................................37 Regularization by Parameter Constraints.........................................41

PAGE 6

vi Review of Shrinkage Methods.........................................43 Shrinkage methods.........................................43 The relationship between subspace projection and ridge regression.........................................45 Comparison of shrinkage methods.........................................45 Regularization Based on the L2-Norm Penalty.........................................47 Regularization Based on the L1-Norm Penalty.........................................50 5 NONLINEAR MIXTURE OF MULTIPLE LINEAR MODELS.........................................54 Nonlinear Mixture of Linear Models Approach.........................................55 Nonlinear Mixture of Competitive Linear Models.........................................55 Time Delay Neural Networks.........................................59 BMIs Design Using NMCLM.........................................59 Analysis.........................................60 Evaluation of Training Performance for NMCLM.........................................60 Analysis of Linear Filters.........................................62 6 COMPARISON OF MODELS.........................................64 Comparison of Model Parameters.........................................67 Performance Evaluation.........................................69 Statistical Performance Comparison.........................................70 7 MULTIRESOLUTION ANALYSIS FOR BMI.........................................73 Multiresolution Analysis of Neuronal Spike Trains.........................................76 Multiresolution Analysis.........................................77 Multiresolution Analysis for the BMI Data.........................................80 The Analysis of the Linear Model Based on the Multiresolution Representation.........................................83 Comparison of Models with the Multiresolution Representation.........................................85 Combination of Linear and Nonlinear Models.........................................89 Nonlinear Modeling.........................................91 Simulations.........................................93 Discussions.........................................95 8 DETERMINATION OF NEURONAL FIRING PATTERNS USING NONNEGATIVE MATRIX FACTORIZATION.........................................99 Nonnegative Matrix Factorization.........................................101 Factorization of Neuronal Bin Count Matrix.........................................103 Data Preparation.........................................103 3D food reaching data.........................................103 2D target reaching data.........................................105 Analysis of Factorization Process.........................................105 Choice of the number of bases.........................................106 How does NMF find repeated patterns?.........................................106 Local minima problem.........................................110

PAGE 7

vii Case Study A: 3D Food Reaching.........................................110 Case Study B: 2D Target Reaching.........................................113 Model Improvement Using NMF.........................................119 Discussions.........................................121 9 REAL TIME NEURONAL SUBSET SELECTION.........................................123 On-Line Variable Selection.........................................126 On-Line Channel Selection Method.........................................129 Determination of Selection Criterion.........................................131 Determination of Threshold in LAR Using Surrogate Data.........................................132 Conditional Selection Criterion.........................................138 Experiments of Neuronal Subset Selection.........................................141 Discussions.........................................149 10 CONCLUSIONS AND FUTURE WORK.........................................154 LIST OF REFERENCES.........................................162 BIOGRAPHICAL SKETCH.........................................169

PAGE 8

viii LIST OF TABLES Table page 2-1 The distributions of the sorted neuronal activity for each monkey in motor cortical areas.........................................11 4-1 Procedure of the LAR algorithm.........................................51 6-1 The generalization performances of linear models and nonlinear models for the 3D food reaching task.........................................69 6-2 The generalization performances of linear models and nonlinear models for the 2D target reaching task.........................................69 6-3 The t-test results for the difference of the magnitude of error vectors from the test dataset between the Wiener filter and other models.........................................72 7-1 The number of the selected neurons in each cortical area.........................................85 7-2 The number of the nonzero weights.........................................87 7-3 The number of neurons selected by LAR for each model.........................................88 7-4 Performance comparison between the multiresolution and the single resolution models.........................................88 7-5 Performance comparison between the combinatory model and the single linear model.........................................94 8-1 Comparison of important neurons: food reaching.........................................113 8-2 Comparison of important neurons: target reaching.........................................119 8-3 Performance evaluation of the Wiener filter and the mixture of multiple models based on NMF.........................................120 9-1 Procedure of the LAR algorithm: revisited.........................................126 9-2 The modified LAR algorithm for on-line variable selection.........................................128


LIST OF FIGURES

Figure                                                                       page

1-1  A system identification block diagram for BMIs .......... 2
2-1  An experimental setup of the 3D reaching task .......... 12
2-2  An experimental setup of the 2D target reaching task. The monkey moves a cursor (yellow circle) to a randomly placed target (green circle), and is rewarded if the cursor intersects the target .......... 13
2-3  An example of the binned data .......... 13
2-4  The plots of the average (dot) and the standard deviation (bar) for each neuron of three monkeys .......... 14
2-5  The trajectories of the estimated mean firing rates for movement (solid line) and rest (dotted line) over a sequence of subsets .......... 15
2-6  Illustrations of nonstationary properties of the input autocorrelation matrix .......... 16
2-7  Sample trajectories of (a) 3D food reaching, and (b) 2D target reaching movements .......... 17
2-8  The db6 continuous wavelet coefficients of trajectory signals of (a) 3D food reaching, and (b) 2D target reaching .......... 18
3-1  The topology of the linear filter designed for BMIs in the case of the 3D reaching task .......... 20
3-2  The Hinton diagram of the weights of the Wiener filter for food reaching .......... 26
3-3  The Hinton diagram of the weights of the Wiener filter for target reaching .......... 27
4-1  The overall diagram of the subspace Wiener filter .......... 34
4-2  The contour map of the validation MSE for (a) food reaching, and (b) target reaching .......... 35
4-3  The first three projection vectors in PCA for (a) food reaching, and (c) target reaching, and PLS for (b) food reaching, and (d) target reaching, respectively .......... 37


4-4  An overall diagram of a generalized feedforward filter .......... 39
4-5  The contour maps of the validation MSE computed at each grid {Kj, i} for (a) food reaching, and (b) target reaching .......... 41
4-6  Contours of the Lp-norm of the weight vector for various values of p in the 2D weight space .......... 46
4-7  Convergence of the regularization parameter (n) over iterations; (a) food reaching, and (b) target reaching .......... 49
4-8  The histogram of the magnitudes of weights over all the coordinates of hand position, trained by weight decay (solid line) and NLMS (dotted line); (a) food reaching, and (b) target reaching .......... 50
4-9  An illustration of the LAR procedure .......... 52
5-1  An overall diagram of the nonlinear mixture of competitive linear models .......... 56
5-2  Demonstration of the localization of competitive linear models .......... 58
5-3  Frequency response of ten FIR filters; (left) pole-zero plots, (right) frequency responses .......... 63
6-1  The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x-, y-, and z-coordinates for the 3D food reaching task on a sample part of the test data .......... 65
6-2  The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x- and y-coordinates for the 2D target reaching task on a sample part of the test data .......... 66
6-3  The distributions of normalized weight magnitudes of four linear models over neuronal space for (a) food reaching, and (b) target reaching .......... 68
6-4  Comparison of the CEM of the nine models for (a) the food reaching task, and (b) the target reaching task .......... 70
7-1  An illustration of the scaled convolution output from the Haar à trous wavelet transform .......... 81
7-2  An example of the series of uj(k) along with the corresponding hand trajectories .......... 82
7-3  The demonstration of the relation between the neuronal firing activity representation at each scale (solid lines) and the hand position trajectory at the x-coordinate (dotted lines) .......... 83


7-4  The distribution of the selected input variables for (a) x-coordinate, and (b) y-coordinate of position, and (c) x-coordinate and (d) y-coordinate of velocity .......... 86
7-5  The CEM curves of the single resolution model (red dotted lines), and the multiresolution model (black solid lines) .......... 89
7-6  An example of the residual trajectory from a linear model (the x-coordinate) .......... 93
7-7  An example of the output trajectories of the combinatory network and the single linear model .......... 95
7-8  Tap outputs from two generalized feedforward filters for a neuronal bin count input with different delays: the gamma, and Haar wavelet .......... 97
8-1  Segmentation of the reaching trajectories: reach from rest to food, reach from food to mouth, and reach from mouth to rest position .......... 104
8-2  The NMF results for food reaching .......... 111
8-3  The NMF results for target reaching .......... 114
8-4  The hand position samples collected along with peaks in each NMF encoding (left), and the mean and variance of each set (right) .......... 116
8-5  The probabilities of the occurrence for hand position to be in each of sixteen angle bins .......... 117
8-6  Tuning curves of neuronal firing patterns encoded in each NMF basis for 16 angle bins .......... 118
9-1  The diagram of the architecture of the real time neuronal subset selection method .......... 131
9-2  An illustration of the successive maximum correlation over stages in the case of two variables (channels) .......... 134
9-3  Examples of the maximum absolute correlation curve in LAR .......... 135
9-4  Neuronal subset selection examples .......... 137
9-5  Demonstration of filter outputs before subset selection; (top) synchronized data, (bottom) de-synchronized data .......... 139
9-6  Neuronal subset selection conditioned by the correlation between filter outputs and desired response .......... 142
9-7  Demonstration of the robustness of the algorithm to initial conditions .......... 143
9-8  An example of the outputs of two tracking systems with (solid line), and without on-line channel selection (dashed line) .......... 144


9-9   Neuronal subset selection for all three coordinates of food reaching movement .......... 145
9-10  Neuronal subset selection over 2,000-second data; (a) subsets in the early part, and (b) subsets in the late part of the data .......... 146
9-11  Neuronal subset selection for a 2D target reaching BMI .......... 148
9-12  2D hand trajectories in five sample data segments selected in Fig. 9-11 .......... 148
9-13  Selection of individual neurons over a series of reaching movements .......... 151
9-14  The distribution of the subset size over a series of reaching movements .......... 153
9-15  Comparison of the average misadjustment per movement between the standard MIMO system learned by LMS and the MIMO system with on-line channel selection .......... 153


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

DESIGN AND ANALYSIS OF OPTIMAL DECODING MODELS FOR BRAIN-MACHINE INTERFACES

By

Sung-Phil Kim

May 2005

Chair: Jose C. Principe
Major Department: Electrical and Computer Engineering

The role of decoding models in the design of brain-machine interfaces (BMIs) is to approximate the mapping from the firing activity of the cortical neuronal ensemble to associated behavior. The linear model, which in a statistical signal processing setting is called the Wiener filter, has been the primary vehicle to estimate this mapping. One of the purposes of this dissertation is to conduct an extensive comparative study of multi-input, multi-output (MIMO) decoding models in two experimental BMI settings in which monkeys perform dissimilar behavioral tasks. The issues in decoding model estimation for BMIs include the large input dimensionality, the spatio-temporal neural firing patterns, nonstationarity, and the adequacy of the linearity assumption. These issues lead us to concentrate our studies on four research directions: the topology of the models (linear versus nonlinear), regularization both in space and time, preprocessing from discrete events to continuous input variables, and ways to cope with the nonstationarity present in the data. The comparison of the optimized linear and nonlinear MIMO models


with the Wiener filter based on generalization performance shows that the improvement, although statistically significant, is minor with respect to the baseline.

A second line of investigation deals with the analysis of motor cortex activity based on experimental BMI setups. Firstly, we propose an input-based strategy using nonnegative matrix factorization (NMF) to uncover spatio-temporal patterns in neuronal ensembles correlated to behavior. The specific spatio-temporal patterns of neural activity can be determined from the NMF basis vectors using only the input data, and their temporal relationships with behavior can be extracted from the NMF encodings. Secondly, a real time neuronal subset selection method is developed to find the subset of neurons that is most relevant to the kinematic trajectories at every sampling time instance. The method, based on an on-line implementation of the LAR (Least Angle Regression) algorithm, requires the availability of the desired response. The experimental analysis demonstrates the nonstationary characteristics of the relationship between the activity of the neuronal ensemble and behavior.


CHAPTER 1
INTRODUCTION

The direct control of machines by thought has been rather close to fiction until recent developments in neuroscience that seek direct interfaces between brain and machines. This emerging field has been called brain-machine interfaces (BMIs). One of the clinical demands driving BMIs is restoring motor functions in 'locked-in' patients who suffer from paralysis caused by traumatic or degenerative lesions. In fact, there are more than 200,000 patients in the United States of America who live with partial or total permanent paralysis, with 11,000 new cases each year [Nob99]. Eventually, BMIs may also impact the very paradigm of human-computer interfaces.

Several research groups have demonstrated that subjects can control robotic arms or computer cursors on screen by using their brain activity [Car03, Cha99, Ken98, Mor99, Mus04, Ser02, She03, Tay02 and Wes00]. These demonstrations in rodents, primates, and human patients show promising ways to bypass spinal cord lesions. In these experiments, up to a hundred electrodes are chronically implanted in motor areas of the cortex to record the electrical activities of hundreds of neurons. The control signals for external devices are extracted by a series of signal processing modules including spike detection/sorting algorithms and decoding algorithms. This experimental BMI paradigm, which is illustrated in Fig. 1-1, relies on three basic elements. Long-term and stable recordings enable us to obtain a mass of neuronal activity through microelectrode arrays. A mathematical model extracts the information of motor parameters from the neuronal activity recordings in real time. A prosthetic device such as a robotic arm


receives control signals from a mathematical model to coordinate the subject's intended movement.

Figure 1-1. A system identification block diagram for BMIs. [Diagram labels: spike sorting, spike binning, Wiener filter, desired response d(n) and its estimate; tray, food, mouth.]

This dissertation mainly focuses on building mathematical models in BMIs. These models utilize spike trains provided by spike sorting algorithms as inputs, and, as desired responses, movement parameters such as hand position, velocity, or gripping force, which are synchronously recorded by optical sensors during the motor performance of the subject. The design of these models can be viewed as a system identification problem [Hay96a]. Recent investigations in BMI modeling have demonstrated successful estimation of the transfer function from motor cortex neural firing patterns to the hand movement trajectory of primates, with a relatively simple Wiener filter [Cha99; Mor99; Ser02 and Wes00]. If one thinks about the complexity of the motor system, starting from the intricate firing modulation of millions of cells in the cortex, passing through the added complexity of the spinal cord functionality up to the spatio-temporal firing of motor neurons that control each muscle fiber, it is rather surprising that a simple linear projection in the input space is able to capture the behavior of this complex system with correlation coefficients around 0.8 between the desired and actual trajectories. This leads


us to look from an optimal signal processing framework at the challenges and opportunities of this class of models for BMIs.

There are several challenges in this application that arise from the BMI setup. First, the spatio-temporal patterns in the spike train data are not fully known and thus cannot guide us in the proper way of designing the models. Second, this is a MIMO (multiple-input multiple-output) mapping problem with a large dimensionality (i.e., for 100 neuronal inputs, the Wiener filter with 10 taps has 1,000 free parameters for each coordinate of the outputs). Third, the statistics are not constant either in time or in space. Fourth, some neuronal firings are not related to the task and therefore constitute noise in the data. Fifth, there is no way of knowing if the true mapping is linear or nonlinear. In spite of all these difficult questions the linear model learns the trajectory with a mean correlation coefficient of 0.6 ~ 0.8; therefore it is instructive to undertake a systematic analysis of the issues in deriving Wiener filters for BMIs.

Review of BMI Signal Processing

An approach to restore motor functions in paralyzed patients using direct interfaces between cortical motor areas and artificial actuators was first proposed by Schmidt [Sch80]. He proposed connecting the electrical activities of a cortical neuronal ensemble to an actuator to bypass spinal cord injuries.

Recently, Chapin and co-workers demonstrated that rats could be trained to receive rewards of water drops by pressing a lever to control the rotation of a robotic arm [Cha99]. A linear model learned by least squares utilized the activities of 21-46 neurons in primary motor cortex (M1) as inputs to predict the motion of the robot. The rats turned out to learn to control the robotic arm using only neuronal signals, without moving their arms.
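The scale of the MIMO estimation problem mentioned above (e.g., 100 neurons times 10 taps, giving 1,000 weights per output coordinate) can be made concrete with a minimal sketch of the Wiener solution on tap-embedded bin counts. This is not the dissertation's actual implementation: the toy data, array shapes, ridge term, and function name are all illustrative assumptions.

```python
import numpy as np

def wiener_weights(spike_counts, hand_pos, n_taps=10, ridge=1e-3):
    """Least-squares (Wiener) mapping from tap-embedded bin counts to
    hand position. spike_counts: (T, n_neurons); hand_pos: (T, n_dims).
    A small ridge term keeps the autocorrelation matrix invertible."""
    T, n_neurons = spike_counts.shape
    # Embed each neuron's bin counts in an n_taps-long delay line.
    X = np.hstack([np.roll(spike_counts, k, axis=0) for k in range(n_taps)])
    X, d = X[n_taps - 1:], hand_pos[n_taps - 1:]   # drop wrapped-around rows
    R = X.T @ X / len(X)                 # input autocorrelation estimate
    P = X.T @ d / len(X)                 # input-desired cross-correlation
    return np.linalg.solve(R + ridge * np.eye(R.shape[0]), P)

rng = np.random.default_rng(0)
counts = rng.poisson(0.5, size=(1000, 5))    # toy data: 5 neurons, 1000 bins
pos = counts @ rng.normal(size=(5, 3))       # toy 3D "hand position"
W = wiener_weights(counts, pos)
print(W.shape)  # (5 neurons x 10 taps, 3 coordinates) -> (50, 3)
```

Even this tiny example shows how quickly the weight count grows with the number of neurons and taps, which motivates the regularization and subspace methods discussed later.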


Afterwards, other research groups joined this line of study of experimental BMIs. Wessberg et al. [Wes00], in a joint research group including Duke University, SUNY, and MIT, demonstrated real time control of a robotic arm using up to 100 neuronal activities. The Wiener filter or a time delay neural network (TDNN) was designed to predict the 3D hand trajectories of food reaching movements using neuronal bin count data with 100ms non-overlapping time windows embedded by a 10-tap delay line. Carmena et al. at Duke University also showed that with a relatively large number of cells (>100) monkeys could brain-control a robot arm to perform two distinct motor tasks, reaching and grasping [Car03]. In these experiments, monkeys could control a real robotic actuator through a closed-loop BMI. They also reported changes in the contributions of neuronal populations during learning.

Taylor et al. at Arizona State University presented a 3D cursor tracking BMI in their report [Tay02], where a monkey made arm movements in a 3D virtual environment to reach a randomly placed target. Using 18 cells from the primary motor cortical area (M1), they investigated the effect of visual feedback on movements by comparing open-loop trajectories of hand controlled cursor movements and closed-loop trajectories of brain controlled cursor movements. A co-adaptive movement prediction algorithm based on a population vector method, developed to track changes in cell tuning properties during brain controlled movement, iteratively refines the estimate of cell tuning properties as a subject attempts a series of brain controlled movements. Other works on decoding algorithms in BMIs were reviewed in Schwartz et al. [Sch01]. In this review, parametric linear models including the population vector algorithm [Geo83] and the Wiener filter, and non-parametric methods including the maximum likelihood


estimate, the principal component analysis (PCA) [Isa00], and self-organizing feature maps (SOFM) [Lin97] were introduced as algorithms for extracting motor-related information from neural activity for BMIs.

Serruya et al. in the Donoghue laboratory at Brown University also demonstrated that monkeys tracked a continuously moving visual object on a video monitor by moving a manipulandum [Ser02]. The Wiener filter with 50ms bins embedded by 20-tap delay lines was used to predict hand position from 7~30 M1 cell activities. They also showed that the time required to acquire targets using brain control was very similar to hand control. Wu et al. in the same group proposed using a Kalman filter as a decoding model [Wu03] for finding the probabilistic relationship between motion and mean firing rates (for 140ms time windows). They extended this Kalman filtering framework to build a mixture of linear models using a switching Kalman filter model, in which the hidden state variables were estimated by the expectation-maximization (EM) algorithm [Wu04].

Andersen and co-workers at Caltech implanted microelectrode arrays in posterior parietal cortex (PPC), which is assumed to be responsible for the planning of movements [And04, Mus04 and She03]. High-level signals related to the goal of movements were decoded using the maximum likelihood estimate of cursor positions from ~40 neuronal activities in the PPC of monkeys. They demonstrated that neuronal activities in PPC could provide information about movement plans; thus they can be used for various neural prosthetic applications without moving limbs.

Kennedy et al. first demonstrated a human BMI by implanting a special electrode in the human neocortex to extract signals to control a cursor on a computer monitor


[Ken98]. Using spike trains as input to a computer, severely disabled patients could learn to move a cursor.

Our group at the University of Florida, in collaboration with Duke University, has designed decoding models for 3D food reaching and 2D target reaching BMIs, including the Wiener filter and recursive multilayer perceptrons (RMLP) [San02a]. Based on the sensitivity analysis of the trained linear and nonlinear models, we improved the performance of the models using only relevant neuronal activities [San03b]. Further development of switching multiple linear models combined by a nonlinear network was proposed by Kim et al. to increase prediction performance in food reaching [Kim03b]. Recently, Rao et al. demonstrated that echo state networks could be used as an alternative to nonlinear models such as the RMLP or TDNN, with relatively uncomplicated training [Rao04].

Overall reviews of BMIs can be found in the following studies: [And04, Don02, Nic01, Nic03a and Sch04]. For overall reviews of brain-computer interfaces (BCIs), see Wolpaw et al. [Wol02] and Friehs et al. [Fri04].

Approaches

In this dissertation, we will address the following issues. First, we will apply the Wiener filter algorithm [Hay96a] to the BMI applications and show its performance on two types of training data: food reaching and target reaching experimental datasets. This algorithm will be the gold standard for the other adaptive methods developed. Then we will compare other adaptive algorithms that reach the same solution in the statistical sense for stationary data, but may handle the nonstationary nature of the data better. We are referring to the least mean square algorithm (LMS), which will be implemented here in its normalized form (NLMS) [Hay96a].
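The NLMS update referred to above can be sketched in a few lines: each sample produces a prediction error, and the weight vector moves along the input direction with a step size normalized by the instantaneous input power. This is a generic textbook-style sketch, not the dissertation's code; the step size, toy data, and names are illustrative assumptions.

```python
import numpy as np

def nlms(X, d, mu=0.5, eps=1e-6):
    """Normalized LMS: one weight update per sample, with the step size
    scaled by the input power. X: (T, p) inputs; d: (T,) desired signal."""
    w = np.zeros(X.shape[1])
    errors = np.empty(len(d))
    for n in range(len(d)):
        x = X[n]
        e = d[n] - w @ x                      # a-priori prediction error
        w += (mu / (eps + x @ x)) * e * x     # power-normalized gradient step
        errors[n] = e
    return w, errors

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
d = X @ w_true                                # noiseless toy system
w, err = nlms(X, d)
print(np.round(w, 2))                         # converges toward w_true
```

The normalization by x @ x is what makes the step size insensitive to input scale, which matters for bin-count inputs whose power varies strongly across neurons and over time.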


The issue of the number of free parameters of the model will be handled by three different techniques. The first is the subspace Wiener filter, which first projects the input data using principal component analysis (PCA) [Hay96b], and then derives a Wiener filter to the desired response. Although PCA has been used as a major subspace projection method, it does not orient the projection to take advantage of the desired response structure. As an alternative, we propose a new idea of seeking a subspace decomposition in the joint space through a hybrid subspace method, which combines the criteria of PCA and partial least squares (PLS) [Jon93 and Kim03a]. We also implement a reduction in the number of degrees of freedom of the model by using a generalized feedforward filter based on the gamma tap delay line [Pri93], which has the ability to cover the same memory depth as the tap delay line with a smaller filter order. The third method implemented uses on-line regularization based on the Lp-norm penalty [Has01], which decreases the values of unimportant weights through training. The problem of finding the optimal parameter for the penalty function will be addressed. The next issue covered in this dissertation relates to the adequacy of the linear modeling. We design a nonlinear mixture of switching, competitive linear models that implements a locally linear but globally nonlinear model [Kim03b]. This structure can be thought of as a time delay neural network (TDNN) [Hay96b] that is trained in a different way, to conquer the difficulty of training thousands of parameters with relatively small data sets.

An important contribution of BMIs to brain-related research fields is opening a new avenue for experimental studies of the real time operation of neural systems in behaving animals [Nic03a]. For instance, using experimental BMIs, we may be able to explore the real-time nonstationary operations of the neuronal ensemble in


association with behavior. Also, the cellular contributions within a large neuronal population to the encoding of motor parameters can be analyzed through BMIs.

In view of this, we investigate the properties of the neuronal ensemble synchronized with behavior in BMIs using several approaches. First, we will seek a way to represent neuronal activity more efficiently in the context of BMI modeling. Through the multiresolution analysis [Mur04] of neural spike trains, we can construct a richer input space to possibly extract more encoded information, thus enhancing prediction models [Kim05c]. The issue of designing suitable models in this extended input feature space will be addressed. Second, we will demonstrate an approach to determine neuronal spatio-temporal patterns using nonnegative matrix factorization [Lee99]. This mathematical procedure, which was introduced for image processing, can be utilized to extract spatio-temporal patterns of different neuronal populations without training of models [Kim05d]. Third, a real time neuronal subset selection algorithm is developed to find out which groups of neuronal activities exhibit relevance to a particular hand trajectory, and to investigate the nonstationary characteristics of the neuronal ensemble in time [Kim05b]. This selection scheme is developed based on the linear filters used for BMIs.

Outline

The dissertation is organized as follows. The experimental BMI paradigms and the descriptions of the recorded datasets are presented in chapter 2. We revisit the applications of linear adaptive filters, including the Wiener filter, to BMIs in chapter 3. In chapter 4, several regularization methods are investigated to solve the problem of a large number of free parameters. In chapter 5, the technique of nonlinear modeling using competitive multiple linear models is introduced and discussed. The experimental results and the comparisons of all the models for the two different behavioral tasks are


summarized in chapter 6. Further developments of BMI models based on the multiresolution analysis are demonstrated in chapter 7. Several analytical methods, including NMF and on-line subset selection using experimental BMIs, are introduced in chapters 8 and 9. Conclusions and future research directions are discussed in chapter 10.


CHAPTER 2
EXPERIMENTAL SETUPS FOR BRAIN-MACHINE INTERFACES

The datasets that are used for the prediction models were collected in an experimental BMI paradigm by the Nicolelis lab at Duke University. In this paradigm, the electrical activity of cortical neuronal ensembles from awake, behaving primates was recorded and used by statistical models for controlling a robotic arm that reproduced the arm movements of the primates. In this chapter, we describe the recording of the activity of the neuronal ensembles and the experimental paradigm for the behavioral tasks. The properties of the datasets are also presented.

Recording of Electrical Activity of Neuronal Ensembles

Multiple microwire arrays were chronically implanted in multiple cortical areas of one adult female owl monkey (Aotus trivirgatus) named Belle, and two adult female Rhesus monkeys (Macaca mulatta) named Ivy and Aurora. In the owl monkey, multiple low-density microelectrode arrays (MBlabs, Dennison, TX), each including 16-32 50-µm Teflon-coated stainless steel microwires, were implanted in the left dorsal premotor cortex (PMd), left primary motor cortex (M1), left posterior parietal cortex (PP), right PMd and M1, and right PP cortex [Wes00]. In the first Rhesus monkey (Aurora), multiple high-density microelectrode arrays developed at Duke University were implanted in the right PMd, right M1, right somatosensory cortex (S1), right supplementary motor area (SMA), and the left M1 cortex. In the second Rhesus monkey (Ivy), multiple high-density microelectrode arrays were implanted in the right PP, M1, and SMA cortex [Car03 and Nic03b].


After the surgical procedures, a multichannel acquisition processor (MAP, Plexon, Dallas, TX) cluster was used in the experiments to record the neuronal action potentials simultaneously. The spikes of single neurons from each microwire were discriminated based on time-amplitude discriminators and a principal component (PC) algorithm [Nic97 and Wes00]. Analog waveforms of the action potentials and the firing time of each spike were stored. The firing times are binned within a 100ms nonoverlapping window, yielding a sequence of counts of the number of spikes in each bin. The distribution of the activity from the sorted neurons over the cortex is presented in table 2-1 for each monkey. In this table, the indices of the sorted neuronal activity based on the electrode arrays are used for identification purposes. These indices will be used throughout the remainder of the dissertation. Note that in table 2-1, contra indicates the cortical areas in the opposite hemisphere to the moving hand, and ipsi indicates the areas in the same hemisphere.

Table 2-1. The distributions of the sorted neuronal activity for each monkey in motor cortical areas. Numbers in parentheses give the number of sorted neurons in each cortical area.

          PP-contra  M1-contra    PMd-contra   S1-contra     SMA-contra    M1-ipsi      PMd/M1-ipsi
Belle     1-33 (33)  34-54 (21)   55-81 (27)   -             -             -            82-104 (23)
Ivy       1-49 (49)  50-139 (90)  -            -             140-192 (53)  -            -
Aurora    -          67-123 (57)  1-66 (66)    124-161 (38)  162-180 (19)  181-185 (5)  -

Behavioral Tasks

During the recording period, each primate was trained to perform particular motor tasks. In the first experimental setup, the owl monkey (Belle) performed three-dimensional movements to reach for food randomly placed at one of four positions on a tray, as depicted in Fig. 2-1. In this task, the monkey placed its hand on a platform
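The binning step described above, turning each neuron's spike times into counts over 100 ms nonoverlapping windows, can be sketched as follows. The 100 ms window matches the text; the toy spike times and the function name are illustrative assumptions.

```python
import numpy as np

def bin_spikes(spike_times, duration, bin_width=0.1):
    """Count spikes per neuron in nonoverlapping 100 ms windows.
    spike_times: list with one array of spike times (seconds) per neuron."""
    n_bins = int(round(duration / bin_width))
    edges = np.linspace(0.0, duration, n_bins + 1)
    return np.stack([np.histogram(t, bins=edges)[0] for t in spike_times])

# toy example: two neurons observed for 1 second
neuron_a = np.array([0.02, 0.05, 0.31, 0.95])
neuron_b = np.array([0.50, 0.55, 0.56])
counts = bin_spikes([neuron_a, neuron_b], duration=1.0)
print(counts.shape)   # (2 neurons, 10 bins)
print(counts[0])      # [2 0 0 1 0 0 0 0 0 1]
```

The resulting (neurons x bins) count matrix is exactly the kind of input the decoding models in later chapters consume, one column per 100 ms time step.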


attached to the chair. When a barrier was opened, the monkey reached for and grabbed the food. The location and orientation of the monkey's wrist were continuously recorded using a plastic strip with multiple fiber optic sensors (Shape Tape, Measurand, Inc., Fredericton, NB, Canada) [Wes00]. These signals were sampled at 200Hz.

Figure 2-1. An experimental setup of the 3D reaching task. [Diagram labels: tray, food, mouth, MAP.]

In the second experimental setup, the Rhesus monkeys (Aurora and Ivy) performed a two-dimensional target reaching task (Fig. 2-2). In this task, the monkey was cued to move the cursor on a computer screen by controlling a hand-held manipulandum in order to reach the target. The monkey was rewarded when the cursor intersected the target. The position of the manipulandum was continuously recorded at a 1000Hz sampling rate.

Properties of Data

BMI models are designed to receive the binned spike counts as input signals and to predict hand position or velocity as desired signals. Before describing the BMI models, it is informative to get a picture of the characteristics of the input-output data. Therefore, we present here several characteristics of the data which are used for all BMI models in the remainder of this dissertation.


Figure 2-2. An experimental setup of the 2D target-reaching task. The monkey moves a cursor (yellow circle) to a randomly placed target (green circle) and is rewarded if the cursor intersects the target.

Neuronal Firing Patterns

First, examples of the binned data are illustrated in Fig. 2-3 for six sample neurons recorded from the M1 cortex of Belle. We can notice that some neurons fire more frequently than others.

Figure 2-3. An example of the binned data.

Second, we examine the descriptive statistics of the binned data over all neurons. The first statistic that we evaluate is the sparseness of the data, measured by the ratio of the number of null bins (containing no spike) to the total number of bins. As a


result, the sparseness is 85.6% for Belle's dataset, 65.2% for Ivy's, and 60.5% for Aurora's. Then the average and the standard deviation of the bin count for each neuron are evaluated in the three datasets, as depicted in Fig. 2-4. The figure shows the variance of these statistics over the neuronal space.

Figure 2-4. The plots of the average (dot) and the standard deviation (bar) of the bin count for each neuron of the three monkeys: (a) Belle, (b) Ivy, and (c) Aurora.

In addition, the difference between the firing rates during movement and rest is evaluated for the 3D reaching task. In order to quantify the difference, we estimate the mean firing rate during movement and rest separately. We collect 1300-second-long contiguous data samples from Belle's dataset and manually select 81 movement subsets from them; the remaining parts are referred to as rest subsets. Then the mean firing rate of each subset


for movement and rest is estimated by averaging bin counts over all neurons and the time period of the given subset. Figure 2-5 shows the resulting estimates of the mean firing rates for movement and rest. It shows that neurons tend to fire more frequently on average during movement. However, due to the uncertainty of the segmentation between movement and rest, these average statistics are variable and subject to change. It is also noteworthy that the mean firing rate tends to decrease over time.

Figure 2-5. The trajectories of the estimated mean firing rates for movement (solid line) and rest (dotted line) over the sequence of subsets.

Finally, the nonstationary characteristics of the input are investigated by observing the temporal change of the input autocorrelation matrix. The autocorrelation matrix of the multi-dimensional input data is estimated based on the assumption of ergodicity (see chapter 3 for details). In order to monitor the temporal change, the autocorrelation matrix is estimated for a sliding time window (4000 samples long) that slides by 1000 samples (100 seconds). For each estimated autocorrelation matrix, the condition number and the maximum eigenvalue are computed as summaries of the properties of the matrix. The experimental results of these quantities for the three datasets are presented in Fig. 2-6. It is observed that the properties of the input autocorrelation matrix vary over time.
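The sliding-window analysis above can be sketched as follows; the window and step sizes mirror the text (4000 and 1000 samples), while the Poisson data here are synthetic stand-ins for binned spike counts.

```python
import numpy as np

def sliding_autocorr_stats(X, win=4000, step=1000):
    """For each window of multichannel bin counts X (samples x neurons), estimate
    the autocorrelation matrix R = X_w^T X_w / win (a time average, assuming
    ergodicity) and return its condition number and largest eigenvalue."""
    stats = []
    for start in range(0, X.shape[0] - win + 1, step):
        Xw = X[start:start + win]
        R = Xw.T @ Xw / win
        eigvals = np.linalg.eigvalsh(R)          # ascending eigenvalues of symmetric R
        stats.append((eigvals[-1] / max(eigvals[0], 1e-12), eigvals[-1]))
    return stats  # list of (condition number, max eigenvalue) per window

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(10000, 5)).astype(float)   # toy spike-count data
for cond, lam_max in sliding_autocorr_stats(X):
    print(f"cond={cond:.1f}  lambda_max={lam_max:.2f}")
```

Plotting these two quantities against the window start index reproduces the kind of trace shown in Fig. 2-6.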


Figure 2-6. Illustrations of the nonstationary properties of the input autocorrelation matrix. The dotted lines in the bottom panel indicate the reference maximum eigenvalue, computed over the entire data sample.

Hand Movements

The hand movements of the primates are mainly parameterized by the trajectories of hand position. We treat these trajectories as the desired signals to be predicted. Note that the hand positions, sampled at 200 Hz or 1000 Hz, are downsampled to 10 Hz to be synchronized with the 100 ms binned data. Before investigating the characteristics of the desired signals, we first present sample trajectories from the two different tasks (food reaching for Belle and target reaching for Ivy) in Fig. 2-7. In the food-reaching movement (Fig. 2-7a), the trajectory approximately spans a hyperplane on which three specific parts of the movement (reach to food, food to mouth, and mouth to rest) lie. Figure 2-7a shows three such reaching movements. In Fig. 2-7b, a 2D trajectory in the target-reaching task over a 4-second duration is depicted; the trajectory starts from the dot in the middle of the figure and ends at the arrow. It demonstrates that the trajectory in this task spans the entire given 2D space and is more irregular than in 3D food reaching.


(a) (b) Figure 2-7. Sample trajectories of (a) 3D food-reaching and (b) 2D target-reaching movements.

Now we seek to observe the nonstationary characteristics of these trajectory signals. The continuous wavelet transform based on a basic wavelet function, the Daubechies wavelet (the db6 wavelet is used in this analysis) [Dau92], is performed to see the frequency change over time. 10,000-sample trajectory data from both 3D food reaching and 2D target reaching are used for the wavelet analysis. The absolute values of the wavelet coefficients are plotted in Fig. 2-8. From this wavelet transform, we can clearly see the nonstationarity of the trajectory signals for both tasks.
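A scalogram like Fig. 2-8 can be sketched with any admissible wavelet; the db6 analysis in the text requires a wavelet toolbox, so this NumPy-only sketch substitutes the Ricker (Mexican-hat) wavelet, and the chirp-like input is synthetic.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_magnitude(x, widths):
    """|CWT| of signal x: convolve x with the wavelet at each scale."""
    return np.array([np.abs(np.convolve(x, ricker(min(10 * int(w), len(x)), w),
                                        mode='same')) for w in widths])

# Synthetic nonstationary trajectory: oscillation frequency increases over time.
n = 1000
t = np.linspace(0, 10, n)
x = np.sin(2 * np.pi * (0.2 + 0.1 * t) * t)
coef = cwt_magnitude(x, widths=np.arange(1, 31))
print(coef.shape)  # (30, 1000): scales x time, the image plotted in Fig. 2-8
```

The drift of large-magnitude coefficients across scales over time is exactly the nonstationarity the text reads off Fig. 2-8.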


(a) (b) Figure 2-8. The db6 continuous wavelet coefficients of the trajectory signals of (a) 3D food reaching and (b) 2D target reaching. Darker pixels indicate larger coefficient values.


CHAPTER 3
LINEAR MODELING

In this chapter, we present the design of adaptive linear filters for BMIs and the standard methods to estimate their parameters.

Linear Modeling for BMIs

Consider a set of spike counts from M neurons and a hand position vector d \in R^C (C is the output dimension, C = 2 or 3). The spike count of each neuron is embedded by an L-tap time-delay line. Then the input vector for a linear model at a given time instance n is composed as x(n) = [x_1(n), x_1(n-1), \ldots, x_1(n-L+1), x_2(n), \ldots, x_M(n-L+1)]^T, x \in R^{L \cdot M}, where x_i(n-j) denotes the spike count of neuron i at time instance n-j. A linear model estimating hand position at time instance n from the embedded spike counts can be described as

y_c = \sum_{i=1}^{M} \sum_{j=0}^{L-1} w_{ji}^{c} x_i(n-j) + b_c    (3-1)

where y_c is the c-coordinate of the hand position estimated by the model, w_{ji}^{c} is the weight on the connection from x_i(n-j) to y_c, and b_c is a bias for the c-coordinate. The bias can be removed from the model when we normalize x and d such that E[x] = 0 \in R^{L \cdot M} and E[d] = 0 \in R^C, where E[\cdot] denotes the mean operator. Note that this model can be regarded as a combination of three separate linear models estimating each coordinate of hand position from identical input. In matrix form we can rewrite (3-1) as

y = W^T x    (3-2)
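The embedding and the model of (3-1)-(3-2) can be sketched as follows; the data and the weight values are random placeholders rather than recorded counts or trained parameters.

```python
import numpy as np

def embed(counts, L):
    """Build x(n) = [x_1(n)..x_1(n-L+1), ..., x_M(n)..x_M(n-L+1)] for each
    n = L-1..N-1. counts: (N, M) bin counts. Returns an (N-L+1, L*M) matrix."""
    N, M = counts.shape
    rows = []
    for n in range(L - 1, N):
        # lags j = 0..L-1 for each neuron i, neuron-major ordering as in the text
        rows.append(np.concatenate([counts[n - np.arange(L), i] for i in range(M)]))
    return np.array(rows)

rng = np.random.default_rng(1)
counts = rng.poisson(1.0, size=(100, 3)).astype(float)  # toy data: N=100, M=3
X = embed(counts, L=10)
W = rng.normal(size=(X.shape[1], 2))   # (L*M) x C weight matrix, C = 2
y = X @ W                              # y = W^T x evaluated at every time instance
print(X.shape, y.shape)                # (91, 30) (91, 2)
```

With M = 99-192 neurons and L = 10 taps as in this study, each x(n) has on the order of 1,000-2,000 entries, which is the source of the parameter-count concerns addressed in chapter 4.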


where y is a C-dimensional output vector and W is a weight matrix of dimension (LM+1) x C. Each column of W consists of [w_{10}^{c}, w_{11}^{c}, w_{12}^{c}, \ldots, w_{1,L-1}^{c}, w_{20}^{c}, w_{21}^{c}, \ldots, w_{M0}^{c}, \ldots, w_{M,L-1}^{c}]^T. Fig. 3-1 shows the topology of the linear model for the BMI application, which will remain basically unchanged in the remainder of this dissertation. The most significant differences will be in the number of parameters and in the way the parameters w_{ji} of the model are computed from the data.

All the models are applied to estimate the 3D or 2D hand positions using L = 10 taps, with M = 99 neurons (Belle, after eliminating the ones that do not fire during the training parts of the recordings) for the food-reaching task and M = 192 (Ivy) or 185 (Aurora) for the target-reaching task. The length of the time delays (L) is determined based on the preliminary BMI study of the correlation between time lags and hand movements in Wessberg et al. [Wes00], where the neuronal firings up to 1 second before the current hand

Figure 3-1. The topology of the linear filter designed for BMIs in the case of the 3D reaching task. x_i(n) is the bin count input from the ith neuron (M neurons in total) at time instance n, and z^{-1} denotes a discrete time-delay operator. y_c(n) is the hand position in the c-coordinate, w_{ji}^{c} is the weight on x_i(n-j) for y_c(n), and L is the number of taps.


movement are significantly correlated with movement. The sizes of the training and testing sets are 10,000 samples (~16.7 minutes) and 3,000 samples (~5 minutes) for all the models and all three datasets. The size of the training set is chosen empirically as a compromise between nonstationarity and the quality of estimation: a longer training set can improve the estimation of the parameters, but increases the chance of introducing more nonstationary characteristics of the data into the estimation. The weights are fixed after adaptation, and the outputs of the model are produced for novel testing samples. Performance of the model is evaluated on these testing outputs with respect to generalization.

The following quantitative performance measures are used to evaluate the accuracy of the estimation:

1. The correlation coefficient (CC) quantifies the linear relationship between the estimated and actual hand trajectories, defined as

CC = \frac{C_{dy}}{s_d s_y}    (3-3)

where C_{dy} denotes the covariance between the two variables d and y, and s_d (or s_y) denotes the standard deviation of d (or y). In our evaluation, C_{dy} is the covariance between the actual hand trajectory (d) and its estimate by the model (y).

2. The signal-to-error ratio (SER) is the ratio of the power of the actual hand trajectory signal to the power of the model error, defined as

SER = \frac{\sum_{k=1}^{K} d(k)^2}{\sum_{k=1}^{K} e(k)^2}    (3-4)

where d(k) and e(k) are the actual hand signal and the error at time instance k, and K is the size of the window in which the SER is computed.

3. The cumulative error metric (CEM) estimates the cumulative distribution function of the error radius, defined as

CEM(r) = \Pr(\|e\| \le r).    (3-5)

Thus CEM(r) is the estimated probability that the radius of the error vector is less than or equal to a certain value r.
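The three measures (3-3)-(3-5) can be sketched as follows; the trajectory and error data here are synthetic.

```python
import numpy as np

def cc(d, y):
    """Correlation coefficient (3-3): covariance over the product of std devs."""
    return np.cov(d, y)[0, 1] / (np.std(d, ddof=1) * np.std(y, ddof=1))

def ser(d, e):
    """Signal-to-error ratio (3-4): trajectory power over error power."""
    return np.sum(d ** 2) / np.sum(e ** 2)

def cem(err_vectors, r):
    """Cumulative error metric (3-5): Pr(||e|| <= r), estimated empirically."""
    radii = np.linalg.norm(err_vectors, axis=1)
    return np.mean(radii <= r)

rng = np.random.default_rng(2)
d = np.sin(np.linspace(0, 6 * np.pi, 400))       # actual trajectory (one coordinate)
y = d + 0.1 * rng.normal(size=d.size)            # model estimate
print(f"CC={cc(d, y):.3f}  SER={ser(d, d - y):.1f}")
e2d = np.column_stack([d - y, 0.1 * rng.normal(size=d.size)])  # 2D error vectors
print(f"CEM(0.2)={cem(e2d, 0.2):.2f}")
```

Evaluating `cem` over a grid of radii r traces out the cumulative curve used for the statistical comparisons described below.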


We compute CC and SER over a short sliding time window in order to see whether a given model predicts better for a particular part of the trajectory. The size of the window is determined empirically. For the food-reaching data, the size is set to 4 seconds, which is approximately the duration of a single reaching movement. However, the duration of a movement cannot be estimated for the target-reaching data, since there is no apparent rest period between consecutive reaching movements. Therefore, the size of the window for the target-reaching data is set long enough (1 minute) to make the computation of CC and SER reliable.

For comparison between different models, the averages of CC and SER over all windows are computed. These computations are conducted separately for each coordinate of hand position. Furthermore, we divide the evaluation results for food reaching into two modes: movement and rest. In each mode, the averages of CC and SER over the three coordinates are used for evaluation instead of the individual CC and SER in each coordinate. For target reaching, where the separation between movement and rest is not apparent, evaluation is executed separately for each coordinate.

The three performance measures introduced here complement one another. CC measures the linear covariance between actual and estimated trajectories, thus providing an evaluation of tracking ability, but it does not measure the bias of the estimation. This shortcoming is supplemented by SER, which is based on error measurement. However, SER has the drawback that it depends on the coordinate system, which is calibrated arbitrarily. For instance, with similar error power, SER becomes relatively large when the magnitude of the actual trajectory, and thus the signal power, becomes large. However, the magnitude of the hand position does not possess any practical meaning. This


problem can be counterbalanced by CEM, in which only the radius of the error vector is considered. CEM also provides a statistical tool for performance measurement, which is especially useful for statistical comparison of models on average. Hence, the three measures jointly allow a more comprehensive performance evaluation than using the individual measures separately.

The Wiener Filter

The transfer function from the neural bin counts to hand position can be estimated by linear adaptive filters, among which the Wiener filter plays a central role [Hay96]. The weight matrix of the Wiener filter for the MIMO case is estimated by the Wiener-Hopf solution,

W_{Wiener} = R^{-1} P.    (3-6)

R is the correlation matrix of the neural spike inputs, with dimension (L \cdot M) \times (L \cdot M),

R = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1M} \\ r_{21} & r_{22} & \cdots & r_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ r_{M1} & r_{M2} & \cdots & r_{MM} \end{bmatrix}    (3-7)

where r_{ij} is the L \times L cross-correlation matrix between neurons i and j (i \ne j), and r_{ii} is the L \times L autocorrelation matrix of neuron i. P is the (L \cdot M) \times C cross-correlation matrix between the neuronal bin counts and hand position,

P = \begin{bmatrix} p_{11} & \cdots & p_{1C} \\ p_{21} & \cdots & p_{2C} \\ \vdots & \ddots & \vdots \\ p_{M1} & \cdots & p_{MC} \end{bmatrix}    (3-8)

where p_{ic} is the cross-correlation vector between neuron i and the c-coordinate of hand position. The estimated weights W_{Wiener} are optimal under the assumption that the


error is drawn from a white Gaussian distribution and the data are stationary. The predictor W_{Wiener}^T x minimizes the mean square error (MSE) cost function

J = E[\|e\|^2], \quad e = d - y.    (3-9)

Each sub-block matrix r_{ij} can be decomposed as

r_{ij} = \begin{bmatrix} r_{ij}(0) & r_{ij}(1) & \cdots & r_{ij}(L-1) \\ r_{ij}(-1) & r_{ij}(0) & \cdots & r_{ij}(L-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{ij}(1-L) & r_{ij}(2-L) & \cdots & r_{ij}(0) \end{bmatrix}    (3-10)

where r_{ij}(\tau) represents the correlation between neurons i and j at time lag \tau. These correlations, which are the second-order moments of the discrete-time random processes x_i(m) and x_j(k), are functions of the time difference (m-k) under the assumption of wide-sense stationarity (m and k denote discrete time instances for each process). Assuming that the random process x_i(k) is ergodic for all i, we can utilize time-average statistics to estimate the correlations. In this case, the estimate of the correlation between two neurons, r_{ij}(m-k), can be obtained by

r_{ij}(m-k) = E[x_i(m) x_j(k)] \approx \frac{1}{N-1} \sum_{n=1}^{N} x_i(n-m) x_j(n-k), \quad 1 \le i, j \le M.    (3-11)

The cross-correlation vector p_{ic} can be decomposed and estimated in the same way. r_{ij}(\tau) is estimated using equation (3-11) from the neuronal bin count data, with x_i(n) and x_j(n) being the bin counts of neurons i and j, respectively. From equation (3-11), it can be seen that r_{ij}(\tau) is equal to r_{ji}(-\tau). Since these two correlation estimates are positioned on opposite sides of the diagonal entries of R, the equivalence between r_{ij}(\tau) and r_{ji}(-\tau) leads to the symmetry of R. The symmetric matrix R can then be inverted efficiently by using the Cholesky factorization. This factorization reduces the computational


complexity of the inversion of R from O(N^3) to O(N^2), where N is the number of parameters. Notice that R must be a nonsingular matrix to obtain the solution from (3-6). However, if the condition number of R is very large, so that R is close to a singular matrix, then W_{Wiener} may be poorly determined. This usually happens when the number of samples is too small or the input variables are linearly dependent on each other. In such a case, we can reduce the condition number by adding an identity matrix multiplied by some constant to R before inversion. This procedure is called ridge regression in statistics [Hoe70], and the solution obtained by this procedure turns out to minimize a cost function that linearly combines the one in (3-9) with a regularization term. The details will be discussed in chapter 4. In our estimation of the Wiener solution, however, we do not employ this regularization scheme.

Figures 3-2 and 3-3 display the Hinton diagrams of the weights of the Wiener filter obtained by (3-6) for food reaching and target reaching, respectively. Each column of W_{Wiener} (i.e., the weight vector of the Wiener filter for each coordinate) is rearranged in a matrix form to show the spatio-temporal structure of the weight vectors. In this matrix form, the neuron indices are aligned along the x-axis and the time lags along the y-axis. Note that the first row of the matrix corresponds to the zero lag (the instantaneous neuronal bin counts), followed by successive rows corresponding to increasing lags (up to nine). In the Hinton diagram, white pixels denote positive signs, while black ones denote negative signs. The size of a pixel indicates the magnitude of the weight.

From the Hinton diagrams, we can probe the contribution of individual neurons to the output of the Wiener filter. For this purpose, the weights represented in the Hinton


diagrams are obtained from input in which each neuronal bin count time series x_j(n) is normalized to have unit variance. Then the value of a weight can represent the sensitivity of the filter output to the corresponding input [San03a]. We can also see the sign of the correlation between a particular neuronal input and the output. For instance, the weights for the neurons indexed 5, 7, 21, 23, and 71 exhibit relatively large positive values for food reaching (see Fig. 3-2), indicating that those neuronal activities are positively correlated with the output. On the other hand, the weights for neurons 26, 45, 74, and 85 exhibit large negative values, indicating negative correlation between the neuronal inputs and the output. There are also some neurons for which the weights have both positive and negative values (e.g., 14 and 93). From these diagrams it is possible to examine the significant time lags for each neuron in terms of the contribution to the filter output. For instance, in the case of neuron 7 or 93, the recent bin counts appear to be more correlated with the current output, whereas for neuron 23 or 74, the delayed bin counts appear to be more correlated with the current output. Similar observations can be made for target reaching in Fig. 3-3.

Figure 3-2. The Hinton diagram of the weights of the Wiener filter for food reaching.
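The Wiener-Hopf solution (3-6) can be sketched directly from the embedded data matrix; rather than assembling R and P lag by lag as in (3-7)-(3-11), this sketch estimates them from the design matrix, which is equivalent up to window-edge effects. The data are synthetic, and the small ridge term is included only to keep the toy R comfortably invertible.

```python
import numpy as np

def wiener_weights(X, D, ridge=0.0):
    """Solve W = R^{-1} P with R = X^T X / N and P = X^T D / N.
    X: (N, L*M) embedded, zero-mean inputs; D: (N, C) desired hand positions.
    A nonzero `ridge` adds ridge*I to R (the ridge regression of [Hoe70])."""
    N = X.shape[0]
    R = X.T @ X / N + ridge * np.eye(X.shape[1])
    P = X.T @ D / N
    # R is symmetric positive definite here, so a Cholesky-based solver applies;
    # np.linalg.solve is used for brevity.
    return np.linalg.solve(R, P)

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 30))                 # toy embedded inputs
W_true = rng.normal(size=(30, 2))
D = X @ W_true + 0.1 * rng.normal(size=(5000, 2))
W = wiener_weights(X, D, ridge=1e-6)
print(np.max(np.abs(W - W_true)) < 0.05)        # recovers the generating weights
```

Each column of the returned W is the per-coordinate weight vector that the Hinton diagrams above rearrange into a neurons-by-lags image.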


Figure 3-3. The Hinton diagram of the weights of the Wiener filter for target reaching.

Stochastic Gradient Learning

The underlying assumption of the Wiener filter is that the statistics of the data are time-invariant. In a nonstationary environment, where the statistics of the data vary over time, the Wiener filter only uses the average statistics to determine the weights. The normalized least mean squares (NLMS) algorithm, a modified version of the least mean squares (LMS) algorithm, can train the weights effectively for nonstationary inputs by varying the learning rate [Hay96]. It utilizes a stochastic estimate of the power of the input signal to adjust the learning rate at each time instance. The weights at a given time instance n are updated by NLMS as

w_{NLMS}^{c}(n+1) = w_{NLMS}^{c}(n) + \frac{\eta}{\gamma + \|x(n)\|^2} e_c(n) x(n)    (3-12)

where \eta satisfies 0 < \eta < 2 and \gamma is a small positive constant. e_c(n) is the error sample for the c-coordinate and x(n) is the input vector. If we let \eta(n) = \eta / (\gamma + \|x(n)\|^2), then the NLMS algorithm can be viewed as the LMS algorithm with a time-varying learning rate,

w_{NLMS}^{c}(n+1) = w_{NLMS}^{c}(n) + \eta(n) e_c(n) x(n).    (3-13)
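The update (3-12) can be sketched as follows; the step-size values match the settings used in the text (eta = 0.01, gamma = 1), but the regression data are synthetic.

```python
import numpy as np

def nlms(X, d, eta=0.01, gamma=1.0):
    """NLMS (3-12) for one output coordinate:
    w <- w + eta / (gamma + ||x||^2) * e * x, sample by sample."""
    w = np.zeros(X.shape[1])
    for x, dn in zip(X, d):
        e = dn - w @ x                          # error sample e_c(n)
        w += (eta / (gamma + x @ x)) * e * x    # power-normalized step
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(20000, 8))
w_true = rng.normal(size=8)
d = X @ w_true + 0.05 * rng.normal(size=20000)
w = nlms(X, d)
print(np.max(np.abs(w - w_true)) < 0.1)
```

Because the step is divided by the instantaneous input power, samples with low total firing (as during rest) receive proportionally larger updates, which is the mechanism discussed in the following paragraph.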


Although the weights in NLMS converge to the same solution as the Wiener filter in the statistical sense for stationary data, with a time-varying learning rate the solution will be different for nonstationary data. The weights of the linear filter for BMIs are estimated by NLMS with the settings \eta = 0.01 and \gamma = 1. In an empirical analysis of the resulting outputs of this filter, we observed that for food reaching the accuracy of the estimation is improved compared to the Wiener filter, especially during rest (see the detailed results in chapter 6). This means that the weights found a better compromise between the two very different characteristics of movement and rest. This improvement is achieved because of the update rule (3-12): the weights in NLMS are updated with a relatively high learning rate during rest, since the total firing count increases during movement (see Fig. 2-5). Thus, for the class of motor behaviors in which movement periods are separated by rest, the NLMS algorithm captures more information about rest positions than the Wiener filter.

Other Linear Modeling

For comparison with other linear models proposed for BMIs, a Kalman filter is designed and its prediction performance is evaluated on the same data used in this dissertation. The Kalman filter, which estimates the internal state of a linear dynamical system [Kal60] and produces a generative model for the data, has been proposed to learn the dynamical nature of the biological motor system in BMIs [Wu03, San02b]. In the Kalman filtering framework, the system state includes the hand position, velocity, and acceleration, and the observation includes the neuronal bin counts. Based on the assumption of a linear relationship (with additive Gaussian noise) between the state and the observation, as well as between the states at the current and previous time instances, the


Kalman filter recursively estimates the hand kinematics in real time from cortical neurons. Although the system parameters representing the linear relationships are fixed after training, the Kalman filter can adjust its gain to track the time-varying nature of motor systems.

We briefly review the method of the Kalman filter used for BMIs. The linear dynamic equation for the state is given by

z(n+1) = A z(n) + \omega(n)    (3-14)

where z(n) is a state vector for the hand kinematics such that z(n) = [p_x(n), p_y(n), v_x(n), v_y(n), a_x(n), a_y(n)]^T; p_c(n) denotes the hand position in the c-coordinate, v_c(n) the velocity, and a_c(n) the acceleration at time instance n. For food reaching, p_z(n), v_z(n), and a_z(n) are added to the state vector. \omega(n) is a process noise vector following a Gaussian distribution with a zero mean vector and a covariance matrix \Omega. The state-output mapping equation is given by

x(n) = H z(n) + \nu(n)    (3-15)

where x(n) is the instantaneous neuronal bin count vector (binned by a 100 ms nonoverlapping time window). Note that Wu et al. designed the same Kalman filter with a different window size (70 ms) [Wu03]. \nu(n) is a measurement noise term following a Gaussian distribution with a zero mean vector and a covariance matrix Q. Given the training set, A and H are determined by least squares (LS), which solves the following optimization problems:

A = \arg\min_{A} \sum_{n=1}^{N-1} \|z(n+1) - A z(n)\|^2    (3-16)

H = \arg\min_{H} \sum_{n=1}^{N} \|x(n) - H z(n)\|^2    (3-17)


Given A and H, the estimates of the covariance matrices \Omega and Q can be obtained by

\hat{\Omega} = \frac{1}{N-1} \sum_{n=1}^{N-1} (z(n+1) - A z(n))(z(n+1) - A z(n))^T    (3-18)

\hat{Q} = \frac{1}{N} \sum_{n=1}^{N} (x(n) - H z(n))(x(n) - H z(n))^T    (3-19)

With the model (A, H, \Omega, Q) obtained, the Kalman filter estimates the state of the hand kinematics from the novel neuronal bin count vectors (the test data) in real time. The state estimate \hat{z}(n) and the Kalman gain matrix K(n) are updated at each time instance by the following recursion:

P^{-}(n) = A P(n-1) A^T + \Omega    (3-20)

K(n) = P^{-}(n) H^T (H P^{-}(n) H^T + Q)^{-1}    (3-21)

\hat{z}(n) = A \hat{z}(n-1) + K(n)(x(n) - H A \hat{z}(n-1))    (3-22)

P(n) = (I - K(n) H) P^{-}(n).    (3-23)

Note that the error covariance matrix P and the state vector estimate \hat{z} must be initialized before starting this recursion.
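The training equations (3-16)-(3-19) and the recursion (3-20)-(3-23) can be sketched as follows on synthetic data; for brevity the state here is a 2D position/velocity pair rather than the full 6- or 9-dimensional kinematic state, and the system matrices are invented for the toy example.

```python
import numpy as np

rng = np.random.default_rng(5)

# --- Synthetic training data: latent state z (pos, vel) and observations x ---
N, dz, dx = 2000, 2, 6
A_true = np.array([[1.0, 0.1], [0.0, 0.95]])
H_true = rng.normal(size=(dx, dz))
z = np.zeros((N, dz))
for n in range(1, N):
    z[n] = A_true @ z[n - 1] + 0.1 * rng.normal(size=dz)
x = z @ H_true.T + 0.2 * rng.normal(size=(N, dx))

# --- Least-squares model fit, eqs (3-16)-(3-19) ---
A = np.linalg.lstsq(z[:-1], z[1:], rcond=None)[0].T
H = np.linalg.lstsq(z, x, rcond=None)[0].T
Omega = (z[1:] - z[:-1] @ A.T).T @ (z[1:] - z[:-1] @ A.T) / (N - 1)
Q = (x - z @ H.T).T @ (x - z @ H.T) / N

# --- Kalman recursion, eqs (3-20)-(3-23) ---
z_hat, P = np.zeros(dz), np.eye(dz)
for n in range(N):
    P_pred = A @ P @ A.T + Omega                               # (3-20)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Q)     # (3-21)
    z_hat = A @ z_hat + K @ (x[n] - H @ (A @ z_hat))           # (3-22)
    P = (np.eye(dz) - K @ H) @ P_pred                          # (3-23)
print(np.abs(z_hat - z[-1]))  # the final estimate tracks the latent state
```

In the BMI setting, z would be the hand kinematics and x the bin count vector, with A and H fitted on the training segment exactly as above and the recursion run on the test segment.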


CHAPTER 4
REGULARIZED LINEAR MODELING

In chapter 3, we demonstrated the design of linear filters that can be adapted for BMI applications. Despite the intrinsic sophistication of the BMI system, the simple linear filter (which merely combines the weighted bin count inputs) could estimate the primate's hand position fairly well, especially showing the ability to track the low-frequency trajectory. Based on this fact, we seek to improve the performance of linear models by importing advanced learning techniques. Among those, a class of regularization methods is preferred, since it yields smoother function approximations in order to improve the generalization performance of BMI models.

In this chapter, we propose to use three different regularization approaches. The first approach reduces the input space dimension using subspace projection and subsequently operates the linear filter in the subspace. The second approach reduces the filter order in each neuronal channel by employing the gamma delay line. The third approach places constraints on the model parameter space to reduce the effective number of parameters. We will discuss the methodology, implementation, and analysis of these regularization approaches in this chapter.

Dimension Reduction Using Subspace Projection

One of the challenges in the design of decoding models for BMIs is that some neurons' firings are not substantially modulated during task performance, and they only add noise to the multi-channel input data. In addition, some neurons' firings are correlated with each other; thus it may be advantageous to blend these inputs to improve


model performance. Subspace projection, which can reduce the noise and blend correlated input signals together, may curtail unnecessary firing signals through a proper projection matrix. It also reduces the number of degrees of freedom in the multi-channel data, and consequently decreases the variance of the model. Here we introduce a hybrid subspace projection method, derived by combining the criteria of principal component analysis (PCA) and partial least squares (PLS). We will then design the subspace Wiener filter based on this hybrid subspace projection for BMIs.

A Hybrid Subspace Projection

PCA, which preserves maximum variance in the data, has been widely adopted as a projection method [Hay96b]. The projection vector w_{PCA} is determined by maximizing the variance of the projection outputs,

w_{PCA} = \arg\max_{w} J(w) = E[(w^T x)^2] = w^T R_s w    (4-1)

where R_s is the input covariance matrix computed over the neuronal space only (it is an M \times M matrix, where M is the number of neurons), and x is an M \times 1 instantaneous neuronal bin count vector. It is well known that w_{PCA} turns out to be the eigenvector of R_s corresponding to the largest eigenvalue. An M \times S projection matrix that constructs an S-dimensional subspace then consists of the S eigenvectors corresponding to the S largest eigenvalues. However, PCA does not exploit information in the joint space of input and desired response. This means that there may be directions with large variance that are not important for describing the correlation between input and desired response (e.g., some neuronal modulations related to the monkey's anticipation of reward might be substantial, yet less useful for the direct estimation of movement parameters), but that will nonetheless be preserved by the PCA decomposition.


One of the subspace projection methods that constructs the subspace in the joint space is PLS, which seeks the projection maximizing the cross-correlation between the projection outputs and the desired response [Jon93]. Given an input vector x and a desired response d, a projection vector of PLS, w_{PLS}, maximizes the following criterion:

w_{PLS} = \arg\max_{w} J(w) = E[(w^T x) d] = w^T E[x d] = w^T p    (4-2)

where p is defined as the M \times 1 cross-correlation vector between x and d. The consecutive orthogonal PLS projection vectors are computed using the deflation method [Hay96b].

There have been efforts to find a better projection that combines the properties of PCA and PLS. The continuum regression (CR), introduced by Stone and Brooks [Sto90], attempted to blend the criteria of ordinary least squares (OLS), PCA, and PLS. Recently, we have proposed a hybrid criterion function similar to the CR, together with a stochastic learning algorithm to estimate the projection matrix [Kim03a]. The learned projection can be PCA, PLS, or a combination of them. The hybrid criterion function combining PCA and PLS is given by

J(w) = \frac{(w^T p)^{2\lambda} (w^T R_s w)^{1-\lambda}}{w^T w}    (4-3)

where \lambda is a balancing factor between PCA and PLS. This criterion covers the continuous range between PLS (\lambda = 1) and PCA (\lambda = 0).1 Since the log function is monotonically increasing, the criterion can be rewritten as

\log \hat{J}(w) = \lambda \log((w^T p)^2) + (1-\lambda) \log(w^T R_s w) - \log(w^T w).    (4-4)

1 The CR covers OLS, PLS, and PCA. However, since we are only interested in the case when subspace projection is necessary, OLS can be omitted from our criterion.


We seek to maximize this criterion for 0 \le \lambda \le 1. There are two learning algorithms derived in [Kim03a] to find w (one based on gradient descent and the other on a fixed-point iteration), but we opt to use the fixed-point learning algorithm here due to its fast convergence and its independence of a learning rate. The estimate of w at the (k+1)th iteration of the fixed-point algorithm is given by

w(k+1) = T \left[ \lambda \frac{p}{w(k)^T p} + (1-\lambda) \frac{R_s w(k)}{w(k)^T R_s w(k)} \right] + (1-T) w(k)    (4-5)

with a random initial vector w(0). T (0 < T < 1) is a balancing parameter to remove the oscillating behavior near convergence. The convergence rate is affected by T, which produces a tradeoff between convergence speed and accuracy; the fastest convergence is obtained with T = 1. The consecutive projection vectors are also learned by the deflation method, forming the columns of a projection matrix W. After projection onto the subspace by W, we embed the input signal of each subspace channel with an L-tap delay line and design the Wiener filter to estimate the hand position. Figure 4-1 illustrates the overall diagram of the subspace Wiener filter.

Figure 4-1. The overall diagram of the subspace Wiener filter. y(n) denotes the estimated hand position vector. There are L-1 delay operators (z^{-1}) for each subspace channel.
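The fixed-point update (4-5) can be sketched as follows; lambda and T are set arbitrarily for the toy data, the per-iteration normalization is added for numerical convenience (the criterion (4-3) is scale-invariant), and the deflation of consecutive vectors is omitted.

```python
import numpy as np

def hybrid_projection(x, d, lam=0.5, T=0.9, iters=200, seed=0):
    """One projection vector maximizing (4-3) via the fixed-point rule (4-5).
    lam=0 recovers PCA; lam=1 recovers PLS."""
    rng = np.random.default_rng(seed)
    Rs = x.T @ x / len(x)          # M x M input covariance (zero-mean input)
    p = x.T @ d / len(x)           # M x 1 input/desired cross-correlation
    w = rng.normal(size=x.shape[1])
    for _ in range(iters):
        update = lam * p / (w @ p) + (1 - lam) * (Rs @ w) / (w @ Rs @ w)
        w = T * update + (1 - T) * w
        w /= np.linalg.norm(w)     # scale does not affect the criterion
    return w

rng = np.random.default_rng(6)
x = rng.normal(size=(5000, 10))
d = x @ rng.normal(size=10) + 0.1 * rng.normal(size=5000)
w_pca = hybrid_projection(x, d, lam=0.0)   # should align with the top eigenvector
w_pls = hybrid_projection(x, d, lam=1.0)   # should align with the direction of p
```

Intermediate values of lam interpolate between the two solutions, which is exactly the trade-off swept by the cross-validation over (S, lambda) in the next section.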


Design of a Decoding Model Using the Subspace Wiener Filter

The hold-out cross-validation method [Bis95] is utilized to determine the optimal subspace dimension (S) and \lambda simultaneously. 10,000 consecutive data samples are divided into 9,000 training and 1,000 validation samples for the food-reaching and target-reaching tasks, respectively. The MSE over the validation samples is computed after training for each pair (S_i, \lambda_j), where S_i \in {20, 21, \ldots, 60} and \lambda_j \in {0, 0.1, \ldots, 1}. In Fig. 4-2, the contour map of the computed MSE is depicted. The minimum MSE is found at (S, \lambda) = (37, 0.9) for food reaching and at (S, \lambda) = (44, 0.6) for target reaching. The validation MSE also tends to be smaller for larger \lambda in the lower subspace dimensions, while the MSE levels are rather flat in the higher subspace dimensions. This indicates that PLS plays a more important role in building a better subspace Wiener filter for lower subspace dimensions.

(a) (b) Figure 4-2. The contour map of the validation MSE for (a) food reaching and (b) target reaching. The darker lines indicate lower MSE levels.

To investigate further the difference between the subspaces produced by PCA and PLS, the first three projection vectors are estimated by setting \lambda = 0 or 1 in (4-5), as presented in Fig. 4-3. Note that PLS yields separate vectors corresponding to each hand position coordinate, since it utilizes the desired response, while PCA needs only one projection


regardless of coordinates. In food reaching, the projection vectors of PCA have large weights on the neurons that fire frequently. For instance, the neurons indexed 42, 57, and 93 are empirically found to have the largest firing counts. Since the neural firing data is sparse, PCA attempts to build a subspace from frequently firing neurons in order to preserve the variance. On the other hand, the weights in the PLS projection have larger values on different neurons that do not fire very frequently, such as the neurons indexed 7 and 23. From the Hinton diagram described in the previous chapter (see Fig. 3-2), these neurons were found to contribute significantly to the output of the Wiener filter designed for BMIs. Therefore, PLS is able to utilize the information from important neurons that do not fire very frequently by exploiting the information in the joint space. For target reaching, we can also observe that more neurons are involved in the projection vectors of PLS than of PCA. The neurons with larger weights in the PCA projection, again, are observed to fire more frequently. It is interesting to observe that for target reaching, the subspace dimension obtained from the cross-validation is of the same order as the number of neurons obtained in the neuron-dropping analysis performed in Sanchez et al. [San03b]. In fact, the number of important neurons, for which the correlation coefficient between model outputs and desired hand trajectories is maximized, is 35, which is close to the subspace dimension of 44.

The empirical measurements of performance on the test data using the subspace Wiener filter with the above parameters demonstrate that the generalization performance of the subspace Wiener filter for both tasks reaches a slightly higher level than that of the Wiener filter or the linear filter trained by NLMS (see chapter 6). We expect, however, much greater improvements using the subspace projection methods for larger datasets

Figure 4-3. The first three projection vectors in PCA for (a) food reaching and (c) target reaching, and in PLS for (b) food reaching and (d) target reaching, respectively.

(more than 200 neurons; Carmena, J.M., Lebedev, M.A., & Nicolelis, M.A.L., unpublished observations) and anticipate that these techniques will be important in the foreseeable future when the number of simultaneously recorded neurons surpasses 1,000.

Parsimonious Modeling in Time Using the Gamma Filter

The large number of parameters in decoding models is caused not only by the number of neurons but also by the number of time delays required to capture the history of the neuronal firings over time. Although we use a 10-tap delay line in this study, the size of the delay line can vary depending upon the bin size (e.g., if we use a 50ms time bin,

then the number of time lags increases to 20). Hence, it is desirable to represent the temporal patterns of neuronal data in a more efficient way to reduce the number of taps.

A linear filter described in the previous chapters can be decomposed into multiple finite impulse response (FIR) filters, one arranged for every neuron. An FIR filter has the advantages of trivial stability and easy adaptation. However, the length of the impulse response and the filter order are equivalent in an FIR filter. Hence, when a problem requires a deep memory and a small number of parameters, an infinite impulse response (IIR) system is more appropriate. However, the stability issue in the adaptation and the non-convex error surface of an IIR filter yield nontrivial challenges for practical use.

A generalized feedforward filter provides a signal processing framework to incorporate both FIR and IIR characteristics into a single system by employing a local feedback structure [Pri93]. As shown in Fig. 4-4, an input signal is delayed at each tap by a delay operator defined by a specific transfer function G(z). Note that when G(z) = z^{-1}, it becomes an FIR filter. The transfer function of the overall system, H(z), is stable when G(z) is stable since

    H(z) = \sum_{k=0}^{K} w_k G^k(z),    (4-6)

where K is the number of taps. It has been shown that a generalized feedforward filter can provide trivial stability and easy adaptation while decoupling the memory depth from the filter order.
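The structure in (4-6) can be sketched directly. The function below is an illustrative implementation (the names and the specific one-pole choice of G(z), which anticipates the gamma operator introduced in the next section, are mine, not from the text); it propagates an input through the locally fed-back tap line:

```python
import numpy as np

def generalized_feedforward(x, K, mu):
    """Tap signals x_k(n) of a generalized feedforward filter whose delay
    operator is the first-order feedback stage G(z) = mu / (z - (1 - mu)).
    Setting mu = 1 gives G(z) = z^{-1}, an ordinary FIR tap-delay line.
    The filter output would then be y(n) = sum_k w_k * x_k(n), as in (4-6)."""
    N = len(x)
    taps = np.zeros((N, K + 1))
    taps[:, 0] = x                      # tap 0 is the undelayed input
    for n in range(1, N):
        for k in range(1, K + 1):
            # local feedback: x_k(n) = (1 - mu) x_k(n-1) + mu x_{k-1}(n-1)
            taps[n, k] = (1 - mu) * taps[n - 1, k] + mu * taps[n - 1, k - 1]
    return taps

# mu = 1: each tap is a pure unit delay of the previous one (FIR case).
x = np.arange(6, dtype=float)
fir_taps = generalized_feedforward(x, K=2, mu=1.0)

# mu < 1 (the gamma filter of the next section): the impulse response of the
# last tap spreads out, and its center of mass is the memory depth D = K/mu;
# here 4 / 0.3, i.e. roughly 13 samples of memory from only 4 taps.
imp = np.zeros(300)
imp[0] = 1.0
g = generalized_feedforward(imp, K=4, mu=0.3)[:, 4]
depth = (np.arange(300) * g).sum() / g.sum()
```

The last few lines illustrate the decoupling discussed below: with mu = 0.3 and K = 4 the line reaches roughly 13 samples into the past, matching the memory depth formula (4-8).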

Figure 4-4. An overall diagram of a generalized feedforward filter [Pri93]. x_0(n) is an instantaneous input, and y(n) is a filter output.

The gamma filter is a special case of the generalized feedforward filter with G(z) = \mu/(z - (1-\mu)), where \mu is a feedback parameter. The impulse response of the transfer function from the input to the kth tap, denoted as g_k(n), is given by

    g_k(n) = Z^{-1}(G^k(z)) = Z^{-1}\left(\frac{\mu^k z^{-k}}{(1-(1-\mu)z^{-1})^k}\right) = \binom{n-1}{k-1}\mu^k(1-\mu)^{n-k}u(n-k),    (4-7)

where Z^{-1}(\cdot) indicates the inverse z-transform and u(n) the step function. When \mu = 1, the gamma filter becomes an FIR filter. The stability of the gamma filter in adaptation is guaranteed for 0 < \mu < 2 due to the local feedback structure. The memory depth D with a feedback parameter \mu in the Kth-order gamma filter is given by

    D = K/\mu for \mu < 1, or D = K/(2-\mu) for \mu > 1.    (4-8)

If we define the resolution R, the property of the gamma delay line can be described as

    K = R \cdot D, with R = \mu for \mu < 1, or R = 2-\mu for \mu > 1.    (4-9)

This property shows that the gamma filter decouples the memory depth from the filter order by adjusting the feedback parameter \mu. In the case of \mu = 1 (i.e., the FIR filter), the

resolution is maximized whereas the memory depth is minimized for a given filter order. But this choice sometimes results in overfitting when the signal to be modeled requires more time delays than the number of descriptive parameters. Therefore, the gamma filter with a proper choice of the feedback parameter can avoid overfitting through its decoupled memory structure.

The tap weights can be updated using NLMS, and therefore the computational complexity is of the same order as for FIR filters. The feedback parameter \mu can also be adapted from the data. However, instead of adaptively learning \mu, we can search for the best combination of K and \mu by using cross-validation. In the same way as in the previous section, the MSE in a validation set is computed for each pair (K_j, \mu_i), where K_j \in {2, 3, …, 10} (note that we ignore the case K_j = 1, which implements a memoryless process) and \mu_i \in {0.1, 0.2, …, 1.9}. The number of samples is 9000 for training and 1000 for validation, respectively. The contour of the validation MSE is shown in Fig. 4-5. The minimum MSE is achieved at (K, \mu) = (4, 0.3) for food reaching and (K, \mu) = (10, 1.2) for target reaching, respectively.

The memory depth estimated by this empirical method becomes D ≈ 13 for the food reaching task and D ≈ 12.5 for the target reaching task. The savings in the number of parameters are 60% (3120 → 1248) for the food reaching task. It appears that the temporal resolution of the filter, R, for target reaching is larger than that for food reaching: R = 0.3 for food reaching and 0.8 for target reaching, respectively. This might indicate that the relatively irregular target reaching movement requires finer temporal resolution. The generalization performance of the gamma filter with the optimized K and \mu

Figure 4-5. The contour maps of the validation MSE computed at each grid point {K_j, \mu_i} for (a) food reaching, and (b) target reaching. The darker lines denote lower MSE levels.

is evaluated through the novel test data. The empirical results show that the gamma filter exhibits slightly better performance than both the Wiener filter and the FIR filter trained by NLMS (see chapter 6).

Regularization by Parameter Constraints

There have been numerous efforts in model selection to deal with the bias-variance dilemma [Gem92]. One of them is pruning, which seeks to eliminate unnecessary parameters by imposing constraints in the model parameter space (see [Ree93] for a review). Among many pruning techniques, weight decay has been widely used due to its simplicity and fair performance [Kro92]. Weight decay is based on an error cost function to which an additional penalty term on the parameters is added. This penalty restricts the L2-norm of the parameter vector and is balanced with the MSE cost by a regularization parameter. Although weight decay originated in the neural networks field, it shares the same cost function with a statistical method called ridge regression [Hoe70]. A difference is that ridge regression provides an analytical solution, whereas weight decay provides an iterative solution. Hence, understanding ridge regression may

give us a better appreciation of weight decay. One of the interesting features of ridge regression is its link to subspace projection, especially PCA. This feature lets us see in which directions of the input space ridge regression (or weight decay) prunes more. This property will be reviewed in more detail shortly.

Ridge regression belongs to a class of shrinkage methods in statistical learning. As stated earlier, it employs the L2-norm penalty. However, recent studies in statistical learning have revealed that in many applications the L1-norm penalty sometimes provides a better solution as a shrinkage method than the L2-norm penalty [Has01]. LASSO (Least Absolute Shrinkage and Selection Operator) has been a prominent algorithm [Tib96] among the L1-norm based shrinkage methods. However, its implementation is computationally complex. LAR (Least Angle Regression) has been recently proposed by Efron et al., providing a framework to incorporate LASSO and forward stagewise selection [Efr04]. With LAR, the computational complexity of the learning algorithm can be significantly reduced.

It is notable that we have already applied this class of regularization to BMIs using NLMS, since NLMS can be viewed as the solution to a constrained optimization problem [Hay96a]. In fact, the NLMS algorithm described in (3-12) is the solution to the following problem:

    Minimize \|\mathbf{w}(n+1)-\mathbf{w}(n)\|^2 subject to d(n) - \mathbf{w}^T(n+1)\mathbf{x}(n) = 0    (4-10)

for a given desired response d(n) and an input vector \mathbf{x}(n) [Dou94]. It has also been shown in [Slo93] that NLMS can be the solution to the following optimization problem:

    \min_{\mathbf{w}(n+1)} \left[ |d(n) - \mathbf{w}^T(n+1)\mathbf{x}(n)|^2 + \left(\frac{1}{\eta} - 1\right)\|\mathbf{x}(n)\|^2\,\|\mathbf{w}(n+1) - \mathbf{w}(n)\|^2 \right],    (4-11)

where \eta is the step size. In the NLMS algorithm, the weights are updated such that the change of the weight vector is minimized. The NLMS algorithm can therefore be viewed as the solution to an error minimization problem with a constraint on the difference between successive weight updates.

In this section, we will review statistical shrinkage methods and their relationship with subspace projection. Then the application of ridge regression and weight decay to BMIs will be investigated. Finally, the properties of the LAR algorithm and its application to BMIs will be discussed.

Review of Shrinkage Methods

Here, we review the basic concepts of coefficient shrinkage methods. The link between subspace projection and shrinkage methods is then illustrated. Various shrinkage methods are finally examined both from a geometric view and in a Bayesian framework.

Shrinkage methods

Consider a constrained minimization problem for a given input vector \mathbf{x} and a desired output d such that

    \hat{\mathbf{w}} = \arg\min_{\mathbf{w}} E[(d - \mathbf{w}^T\mathbf{x})^2] subject to \|\mathbf{w}\|^2 \le t,    (4-12)

where \mathbf{w} is a linear model parameter vector and \hat{\mathbf{w}} is the optimal solution. This modeling technique is called ridge regression. When there are many correlated input variables in a linear model, the estimated weights can be poorly determined, with high variance. For instance, the effect of a large positive weight on one input variable can be canceled by a large negative weight on another, correlated input variable. If we restrict the size of the weights as in (4-12), such a problem can be effectively prevented. The other

motivation of ridge regression is to make the input autocovariance matrix nonsingular even if it is not of full rank.

Let X be an N × L input matrix in which each row represents an observation vector (\mathbf{x} in equation 4-12), and let \mathbf{d} be an N × 1 desired output vector. N indicates the number of observations, and L is the input dimension. We assume that each column of X is normalized to have zero mean. Then, the optimal solution in (4-12) by ridge regression is

    \hat{\mathbf{w}}_{RR} = (\mathbf{R} + \delta\mathbf{I})^{-1}\mathbf{P},    (4-13)

where \mathbf{I} is an L × L identity matrix, and \mathbf{R} and \mathbf{P} represent X^TX and X^T\mathbf{d}, respectively. Notice that the matrix \mathbf{R} + \delta\mathbf{I} is invertible even if \mathbf{R} is a singular matrix.

We can gain some insight into the properties of ridge regression from the singular value decomposition (SVD) of X. The SVD of X is given by

    X = U\Sigma V^T,    (4-14)

where U and V are N × L and L × L unitary orthogonal matrices, and \Sigma is an L × L diagonal matrix with diagonal entries \sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_L \ge 0 called the singular values. Then, the prediction outputs yielded by ridge regression can be written using the SVD as

    X\hat{\mathbf{w}}_{RR} = X(X^TX + \delta\mathbf{I})^{-1}X^T\mathbf{d} = U\Sigma(\Sigma^2 + \delta\mathbf{I})^{-1}\Sigma U^T\mathbf{d} = \sum_{i=1}^{L} \mathbf{u}_i \frac{\sigma_i^2}{\sigma_i^2 + \delta}\,\mathbf{u}_i^T\mathbf{d},    (4-15)

where \mathbf{u}_i is the ith column of U. From (4-15), we can see that ridge regression finds the coordinates of \mathbf{d} with respect to each orthonormal basis vector \mathbf{u}_i, and then shrinks the coordinates by \sigma_i^2/(\sigma_i^2 + \delta) (\delta > 0). Therefore, the coordinate associated with a smaller \sigma_i will be shrunk more. It is easy to show that the singular values {\sigma_i} indicate the variance of the principal components of

X [Has01]. Hence, a smaller singular value corresponds to a direction of smaller variance, which is shrunk more by ridge regression.

Now we consider LASSO as an L1-norm based shrinkage method. The fundamental difference between ridge regression and LASSO is the penalty in the cost function:

    \hat{\mathbf{w}}_{lasso} = \arg\min_{\mathbf{w}} E[(d - \mathbf{w}^T\mathbf{x})^2] subject to \sum_{i=1}^{L}|w_i| \le t,    (4-16)

where w_i is the ith element of the weight vector \mathbf{w}. The solution to this minimization problem is no longer linear in \mathbf{d}, and a quadratic programming algorithm is usually used to compute it. The L1-norm penalty in (4-16) can make some weights exactly zero; thus LASSO is able to select a subset of inputs.

The relationship between subspace projection and ridge regression

We have seen that ridge regression shrinks all directions of the principal components of X, with a rate of shrinkage that depends on the variance of each direction. Subspace projection with PCA, on the other hand, selects the S (subspace dimension) highest-variance directions while ignoring the rest. PLS tends to shrink low-variance directions, while also reducing high-variance directions depending on the environment [Fra93]. It is obvious from these facts that the hybrid subspace projection utilized for BMIs behaves in a similar fashion to PCA and PLS. Hence, we can see that ridge regression and the hybrid subspace projection manipulate solutions in a similar manner: they tend to shrink low-variance principal directions more. The difference is that ridge regression shrinks smoothly while subspace projection shrinks in discrete steps.

Comparison of shrinkage methods

Generalization of ridge regression and LASSO leads to the criterion

    \hat{\mathbf{w}} = \arg\min_{\mathbf{w}} E[(d - \mathbf{w}^T\mathbf{x})^2] + \lambda\sum_{i=1}^{L}|w_i|^p.    (4-17)

The penalty is the Lp-norm for p \ge 0. In Fig. 4-6, the contours of \sum_i |w_i|^p are illustrated in the two-dimensional weight space. Note the difference in contour shape between ridge regression and LASSO. Since the contour for LASSO has corners, it is possible that the performance surface hits a corner, causing one weight to be zero. If the dimension of the parameter space increases, the contour for LASSO becomes a rhomboid with more corners, flat edges, and faces. Then there are more chances to generate zero coefficients. This geometric description illustrates why LASSO provides a sparser solution, including zero coefficients, than ridge regression.

Figure 4-6. Contours of the Lp-norm of the weight vector for various values of p in the 2D weight space: (a) p = 4, (b) p = 2, (c) p = 1, (d) p = 0.5.

Now let us look at the criterion (4-17) in a Bayesian framework. The penalty term can be considered to represent a log-prior probability density function for w_i, with zero mean and variance 1/\lambda [Nea96]. The prior distribution of w_i differs depending on p. The L0-norm simply counts the number of nonzero parameters; this corresponds to subset selection of input variables [Fur74]. The L1-norm penalty corresponds to a Laplacian prior, and the L2-norm penalty to a Gaussian prior. Hence we can view ridge regression, LASSO, and subset selection as Bayesian estimates of the solution to (4-17) with different priors for the weights.
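On a toy orthonormal input matrix, the contrast between the two penalties can be made concrete: ridge shrinks every coordinate by the same smooth factor, while the L1 penalty soft-thresholds and zeroes the small coordinates. A minimal sketch (the data and the penalty strengths delta and lam are illustrative, not from the BMI experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 100, 4
X, _ = np.linalg.qr(rng.standard_normal((N, L)))   # orthonormal columns
w_true = np.array([2.0, -1.0, 0.05, 0.0])
d = X @ w_true + 0.01 * rng.standard_normal(N)

beta = X.T @ d                # OLS coefficients (X^T X = I here)
delta, lam = 1.0, 0.5         # illustrative penalty strengths

# Ridge / L2, cf. (4-13): with unit singular values, every coordinate is
# shrunk smoothly by sigma^2 / (sigma^2 + delta) = 1 / (1 + delta).
w_ridge = beta / (1 + delta)

# LASSO / L1, cf. (4-16): on an orthonormal design, the solution is a soft
# threshold; coordinates below lam/2 hit the "corner" and become exactly zero.
w_lasso = np.sign(beta) * np.maximum(np.abs(beta) - lam / 2, 0.0)
```

Here w_ridge keeps all four coordinates (merely halved), while w_lasso retains only the two large ones, which is the sparsity effect the corner geometry of Fig. 4-6 predicts.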

Regularization Based on the L2-Norm Penalty

So far, the basic properties of shrinkage methods, including ridge regression and LASSO, have been investigated. The applications of these methods to BMI models will be discussed in the remainder of this chapter.

We have seen that adding an identity matrix, scaled by the white noise power, to the input autocovariance matrix avoids singularity and helps shrink input variables in the directions of the eigenvectors corresponding to smaller eigenvalues. However, determining the noise power, or so-called regularization parameter (\delta in equation 4-13), is an open problem. Even if we want to determine the regularization parameter empirically, we need to follow a systematic procedure. One of the most popular procedures is cross-validation, but it expends a separate validation set and is not adequate for a real-time procedure. For the real-time implementation of BMIs, therefore, we need a different procedure that does not generate an explicit validation set. One feasible approach is to maintain the balance between the noise power, represented by the regularization parameter, and the input signal power, estimated by the eigenvalues. In this approach, the input signal-to-noise power ratio (SNR) is estimated by

    SNR = tr[\mathbf{R}]/\delta,    (4-18)

where tr[\mathbf{R}] denotes the trace of the input covariance matrix \mathbf{R}. From this estimate, we can approximate \delta as

    \delta \approx tr[\mathbf{R}]/SNR    (4-19)

for a desirable SNR. For instance, if we want to ensure that the input SNR is kept greater than 30dB with tr[\mathbf{R}] computed as 0.1, then the regularization parameter is determined to

be 10^{-4}. This estimation procedure for the regularization parameter will be particularly useful in BMI implementations when we seek the analytical estimate of the parameters of a linear filter in real time with a large number of neurons, for which the inversion of the input autocorrelation matrix is not guaranteed.

Weight decay can be viewed as a simple on-line method to minimize the criterion function in (4-17) using the stochastic gradient, updating the weights by

    \mathbf{w}(n+1) = \mathbf{w}(n) - \eta_w(\hat{\nabla}(n) + \delta\mathbf{w}(n)),    (4-20)

where \hat{\nabla}(n) = \partial E[e^2(n)]/\partial\mathbf{w}(n) and \eta_w is a learning rate for the weight vector. Instead of determining \delta by the input SNR, we opt to use an adaptive procedure to estimate the optimal value from the data. Larsen et al. [Lar96] proposed that \delta can be optimized by minimizing the generalization error with respect to \delta. Following this procedure, we utilize K-fold cross-validation [Gei75], which divides the data into K randomly chosen disjoint sets, to estimate the average generalization error empirically as

    \hat{\Gamma} = \frac{1}{K}\sum_{k=1}^{K}\epsilon_k,    (4-21)

where \epsilon_k is the validation MSE for the kth set. Then, the optimal regularization parameter is learned by gradient descent as

    \delta(k+1) = \delta(k) - \eta_\delta\,\nabla_\delta\hat{\Gamma}(k),    (4-22)

where \nabla_\delta\hat{\Gamma}(k) is the estimated gradient of \hat{\Gamma} with respect to \delta at the kth iteration, and \eta_\delta is a learning rate for the regularization parameter. The detailed procedure of this estimation using weight decay is given in Larsen et al. [Lar96].
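The two ideas above, picking \delta from a target SNR via (4-19) and the stochastic-gradient weight-decay update (4-20), can be sketched together. This is a minimal illustration with made-up data; the function names, learning rates, and the LMS gradient estimate (with the factor of 2 absorbed into the learning rate) are my assumptions, not the dissertation's implementation:

```python
import numpy as np

def delta_from_snr(X, snr_db):
    """Regularization parameter from (4-19): delta = tr[R] / SNR,
    with R = X^T X and the target SNR given in dB."""
    return np.trace(X.T @ X) / 10 ** (snr_db / 10)

def lms_weight_decay(X, d, delta, eta=0.01, epochs=5):
    """Weight decay update (4-20): w <- w - eta * (grad_hat + delta * w),
    using the instantaneous LMS gradient grad_hat(n) = -e(n) x(n)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_n, d_n in zip(X, d):
            e = d_n - w @ x_n
            w -= eta * (-e * x_n + delta * w)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
d = X @ np.array([1.0, -0.5, 0.0, 0.25])      # noiseless toy target
w_plain = lms_weight_decay(X, d, delta=0.0)   # plain LMS solution
w_decay = lms_weight_decay(X, d, delta=1.0)   # decayed (shrunk) solution
```

With delta = 0 the filter converges to the true weights, while a nonzero delta pulls the solution toward zero, which is exactly the pruning effect shown in the weight-magnitude histograms discussed next.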

In the experiment, we set K = 10 and \eta_\delta = 10^{-6}, and update \delta until the difference |\hat{\Gamma}(n+1) - \hat{\Gamma}(n)| becomes less than 10^{-3}. The number of training samples is 9,000 and the number of validation samples is 1,000. The term \hat{\nabla}(n) in (4-20) is estimated by NLMS. During training, \delta converges to 1.36×10^{-5} for food reaching and 1.02×10^{-5} for target reaching, respectively, as depicted in Fig. 4-7. Then, we train the filter with \delta fixed, using the entire set of training samples (10,000), to obtain the regularized model.

The histogram of the weight magnitudes computed over all the coordinates of hand position is depicted in Fig. 4-8 to demonstrate the effect of weight decay. Note that the number of weights with smaller magnitudes increases with weight decay. For instance, the number of weights that are close to zero is approximately 345 for weight decay versus 75 for NLMS in Fig. 4-8a, and 460 for weight decay versus 150 for NLMS in Fig. 4-8b. This shows that more weights are pruned by weight decay; thus the effective degrees of freedom of the model are reduced. The reduced degrees of freedom can help generalization, as examined by measuring performance on the test data. Empirical performance measures on the test dataset show that regularization using weight decay improves the generalization performance over the

Figure 4-7. Convergence of the regularization parameter \delta(n) over iterations for (a) food reaching, and (b) target reaching.

Figure 4-8. The histogram of the magnitudes of weights over all the coordinates of hand position, trained by weight decay (solid line) and NLMS (dotted line); (a) food reaching, and (b) target reaching.

linear model trained only by NLMS (see chapter 6).

Regularization Based on the L1-Norm Penalty

The least angle regression (LAR) algorithm has recently been developed to accelerate computation and improve performance of forward model selection methods. It has been shown in Efron et al. that simple modifications to LAR can implement LASSO and forward stagewise linear regression [Efr04]. Essentially, the LAR algorithm requires the same order of computational complexity as ordinary least squares (OLS). The selection property of LAR, which leads to zeroing of coefficients, is preferable for the identification of sparse systems when compared to regularization methods with the L2-norm penalty. Also, the analysis of the selection process often provides better insights into the unknown system than the L2-norm based shrinkage methods.

The LAR procedure starts with all coefficients at zero. The input variable having the most correlation with the desired response is selected. We proceed in the direction of the selected input with a step size determined such that some other

input variable comes to have as much correlation with the current residual as the first input. Then, we move in the equiangular direction between these two inputs until a third input attains the same correlation. This procedure is repeated until either all input variables join the selection, or the sum of the coefficient magnitudes meets a preset threshold (constraint). Note that the maximum correlation between the inputs and the residual decreases over successive selection steps, in order to de-correlate the residual from the inputs. Table 4-1 summarizes the details of the LAR procedure [Efr04].

The illustration in Figure 4-9 (cited from Efron et al. [Efr04]) helps show how the LAR algorithm proceeds. In this figure, we start to move along the first selected input variable x_1 until the next variable (x_2 in this case) has the same correlation

Table 4-1. Procedure of the LAR algorithm
Given an N×M input matrix X (each row being an M-dimensional sample vector) and an N×1 desired response matrix Y, initialize the model coefficients \beta_i = 0 for i = 1,…,M, and let \beta = [\beta_1, …, \beta_M]^T. Then the initial LAR estimate becomes \hat{Y} = X\beta = 0. Transform X and Y such that
    \frac{1}{N}\sum_{i=1}^{N} x_{ij} = 0, \frac{1}{N}\sum_{i=1}^{N} x_{ij}^2 = 1, \frac{1}{N}\sum_{i=1}^{N} y_i = 0, for j = 1,…,M.
(a) Compute the current correlations \mathbf{c} = X^T(Y - \hat{Y}).
(b) Find C_{max} = \max_j |c_j| and the active set A = {j : |c_j| = C_{max}}.
(c) Let X_A = […, sign(c_j)\mathbf{x}_j, …] for j \in A.
(d) Let G = X_A^T X_A and \alpha = (\mathbf{1}_A^T G^{-1}\mathbf{1}_A)^{-1/2}, where \mathbf{1}_A is a vector of ones with length equal to the size of A.
(e) Compute the equiangular vector \mathbf{u} = X_A(\alpha G^{-1}\mathbf{1}_A), which has unit length. Note that X_A^T\mathbf{u} = \alpha\mathbf{1}_A (the angles between all inputs in A and \mathbf{u} are equal).
(f) Compute the step size
    \gamma = \min^{+}_{j \in A^c}\left\{\frac{C_{max} - c_j}{\alpha - a_j}, \frac{C_{max} + c_j}{\alpha + a_j}\right\},
where \min^{+} indicates considering only positive minimum values over possible j, and a_j is the jth element of the vector \mathbf{a} computed in step (g).
(g) Compute \mathbf{a}, defined as the inner products between all inputs and \mathbf{u}: \mathbf{a} = X^T\mathbf{u}.
(h) Update \hat{Y} \leftarrow \hat{Y} + \gamma\mathbf{u}.
Repeat (a)-(h) until all inputs join the active set A, or \sum_j|\beta_j| exceeds the given threshold.

with the residual generated by x_1. \mathbf{u}_1 is the unit vector in this direction, as computed in Table 4-1(e). The amount of movement along \mathbf{u}_1, denoted as \gamma_1, is computed by the equation in Table 4-1(f). \bar{y}_1 denotes the OLS estimate of the desired response y using the input x_1. Note that the estimate by LAR (\gamma_1\mathbf{u}_1) moves toward \bar{y}_1 but does not reach it. The next direction \mathbf{u}_2 bisects the angle between x_1 and x_2 (the equiangular vector in the two-dimensional space of x_1 and x_2), such that the angle between x_1 and the updated residual (r_1 = \bar{y}_2 - \gamma_1\mathbf{u}_1) is the same as the angle between x_2 and r_1. Since every input is standardized, the correlation, which is measured by the inner product of x_1 and x_2, can be estimated by the angle between x_1 and x_2, and these two variables have the same absolute correlation with r_1, following the equation in Table 4-1(a). The coefficient \gamma_2 is computed again following Table 4-1(f), such that x_3 has the same absolute correlation with the next residual r_2 = \bar{y}_3 - (\gamma_1\mathbf{u}_1 + \gamma_2\mathbf{u}_2) as x_1 and x_2. The next direction \mathbf{u}_3 then makes equal angles with x_1, x_2, and x_3. This procedure is repeated until the L1-norm of the coefficients reaches a given threshold.

LAR can be easily modified to implement LASSO: when some coefficients cross zero at a given step, they are forced to be zero, and the corresponding inputs are

Figure 4-9. An illustration of the LAR procedure.

removed from the selected joint set. The LAR procedure can then be continued with the remaining inputs, since they still have the same absolute correlation with the current residual.

There are two major considerations in the implementation of LAR. First, LAR assumes linear independence between the input variables. Second, the determination of the threshold on the L1-norm of the coefficients is an open problem and depends upon the data. The performance of a linear model learned by LAR can be greatly influenced by the choice of this threshold.

If we attempt to apply LAR to the linear model in BMIs, difficulties lie in the fact that the embedded inputs are likely to be correlated with each other (although they might be linearly independent), so that LAR might not be able to operate optimally. Also, finding an optimal threshold will be a nontrivial task.²

Despite these difficulties, we test the performance of the linear model learned by LAR with the food reaching and target reaching datasets. The threshold is determined by hold-out cross-validation. The performance measures computed on the test data show that LAR performs at a similar level to weight decay. This may indicate that the difficulties in the implementation of LAR prevent it from improving generalization further compared to weight decay. We will skip the presentation of the numerical performance results of the linear models with LAR, since they are very similar to those with weight decay.

² We can utilize cross-validation as in the case of the gamma filter or subspace projection. However, the range of the search for the threshold will become much broader.
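The LAR procedure of Table 4-1 can be sketched compactly in code. The function below is an illustrative implementation without the LASSO modification; the variable names and the n_steps control are mine, and the key invariant it exhibits is the one described above, that all active inputs stay equally correlated with the residual:

```python
import numpy as np

def lar_path(X, y, n_steps):
    """Minimal LAR sketch following Table 4-1 (no LASSO modification).
    Columns of X are assumed standardized and y zero-mean."""
    N, M = X.shape
    y_hat = np.zeros(N)                    # current LAR estimate of y
    active = []
    for _ in range(n_steps):
        c = X.T @ (y - y_hat)              # (a) current correlations
        C = np.abs(c).max()                # (b) C_max
        if not active:
            active.append(int(np.argmax(np.abs(c))))
        s = np.sign(c[active])
        Xa = X[:, active] * s              # (c) sign-adjusted active inputs
        G = Xa.T @ Xa                      # (d)
        ones = np.ones(len(active))
        Ginv1 = np.linalg.solve(G, ones)
        alpha = 1.0 / np.sqrt(ones @ Ginv1)
        u = Xa @ (alpha * Ginv1)           # (e) unit equiangular vector
        a = X.T @ u                        # (g) inner products with u
        candidates = []                    # (f) smallest positive tie point
        for j in set(range(M)) - set(active):
            for g in ((C - c[j]) / (alpha - a[j]), (C + c[j]) / (alpha + a[j])):
                if g > 1e-12:
                    candidates.append((g, j))
        if not candidates:                 # all inputs active: finish at OLS
            y_hat = y_hat + (C / alpha) * u
            break
        gamma, j_next = min(candidates)
        y_hat = y_hat + gamma * u          # (h) move to the tie point
        active.append(j_next)
    return y_hat, active

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
X = (X - X.mean(0)) / X.std(0)             # standardize as in Table 4-1
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(50)
y = y - y.mean()
y_hat, active = lar_path(X, y, n_steps=2)
resid_corr = np.abs(X.T @ (y - y_hat))     # ties across the active set
```

After each step, the absolute correlations of all selected inputs with the residual are equal, and the shared maximum correlation decreases, which is the de-correlation behavior noted in the text.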

CHAPTER 5
NONLINEAR MIXTURE OF MULTIPLE LINEAR MODELS

In the design of decoding models for BMIs, there have been a number of approaches including linear and nonlinear models; e.g., the Wiener filter, the Kalman filter, time delay neural networks (TDNN), recursive multilayer perceptrons (RMLP), and so on. These modeling frameworks have successfully predicted target hand trajectories using only neuronal activity signals.

However, an important consideration in designing BMIs is the feasibility of the approach taken. The target applications necessitate real-time implementations with minimal computational and hardware requirements. On one hand, linear models are usually the best in terms of their computational requirements. On the other hand, a simple linear model is often insufficient to accurately capture the complex input-output relationships between neural activity and hand position. Recently, a performance comparison has been conducted between linear and nonlinear modeling approaches, and the latter was found to be favorable [San02a].

In this chapter, we aim to demonstrate that the target mapping between the neural activity and the hand trajectories can be discovered using a divide-and-conquer approach. In this approach, we combine the simplicity of training linear models with the performance boost that can be achieved by nonlinear methods. Specifically, a two-stage structure is used, where the first stage consists of a bank of competitively trained linear filters and the second stage consists of a single-hidden-layer multilayer perceptron (MLP) (see Fig. 5-1). Model comparison in the next chapter will demonstrate the

outstanding performance of this approach among various models for the food reaching BMI data.

Nonlinear Mixture of Linear Models Approach

In this section, we describe the modeling approach using a nonlinear mixture of competitive linear models (NMCLM). A brief description of the TDNN will also be provided for comparison purposes.

Nonlinear Mixture of Competitive Linear Models

The overall architecture of NMCLM is identical to a single-hidden-layer TDNN, as shown in Fig. 5-1. However, the training procedure undertaken here is significantly different. This modeling method uses the divide-and-conquer approach. Our reasoning is that a complex nonlinear modeling task can be elucidated by dividing it into simpler linear modeling tasks and combining them properly [Far87]. Previously, this approach was successfully applied to nonstationary signal segmentation, assuming that a nonstationary signal is a combination of piecewise stationary signals [Fan96].

Hypothesizing that the neural activity will demonstrate varying characteristics for different localities in the space of the hand trajectories, we expect the multiple model approach, in which each linear model specializes in a local region, to provide a better overall input-output mapping. However, the problem is different here, since the goal is not to segment a signal but to segment the joint input/desired signal space. The topology allows a two-stage training procedure that can be performed sequentially in off-line training: first, competitive learning for the local linear models, and then error backpropagation learning for the MLP. It is important to note that in this scheme, both the linear models and the MLP are trained to approximate the same desired response, which is the hand trajectory of a primate.
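The two-stage topology described above can be sketched as a forward pass: a bank of linear models each produces a prediction, and an MLP nonlinearly mixes those predictions. This is an illustrative sketch only; the dimensions and random weights are placeholders (in the actual system, each model predicts every hand coordinate, and the weights come from the two-stage training described below):

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, H, C = 10, 30, 30, 3    # models, embedded input size, hidden PEs, outputs

W_lin = 0.1 * rng.standard_normal((M, L))   # bank of M linear models
W1 = 0.1 * rng.standard_normal((H, M))      # MLP input -> hidden weights
b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal((C, H))      # MLP hidden -> output weights
b2 = np.zeros(C)

def nmclm_forward(x):
    """Two-stage forward pass: every linear model predicts from the
    embedded neural input x, and the MLP mixes the M predictions."""
    z = W_lin @ x                 # first stage: M competing linear outputs
    h = np.tanh(W1 @ z + b1)      # second stage: nonlinear hidden layer
    return W2 @ h + b2            # linear output PEs (hand coordinates)

y = nmclm_forward(rng.standard_normal(L))
```

The point of the structure is that only the small second stage needs backpropagation; the first stage is trained by the competitive NLMS rules given next.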

Figure 5-1. An overall diagram of the nonlinear mixture of competitive linear models.

The training of the multiple linear models is accomplished by competitively (hard or soft competition) updating their weights, in accordance with previous approaches, using the NLMS algorithm. The winning model is determined by comparing the (leaky) integrated squared errors of all competing models and selecting the model that exhibits the least integrated error for the corresponding input [Fan96]. The leaky integrated squared error for the ith model is given by

    \epsilon_i(n) = (1-\lambda)\epsilon_i(n-1) + \lambda e_i^2(n), i = 1, …, M,    (5-1)

where M is the number of models and \lambda is the time constant of the leaky integrator. Then, the jth model wins the competition if \epsilon_j(n) < \epsilon_i(n) for all i \ne j. If hard competition is employed, only the weight vector of the winning model is updated. Specifically, if the jth model wins the competition, the update rule for the weight vector \mathbf{w}_j(n) of that model is given by

    \mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \frac{\eta\, e_j(n)\,\mathbf{x}(n)}{\gamma + \|\mathbf{x}(n)\|^2},    (5-2)

where e_j(n) is the instantaneous error and \mathbf{x}(n) is the current input vector. \eta represents a learning rate, and \gamma is a small positive constant used for normalization. If soft competition is used, a Gaussian weighting function centered at the winning model is applied to all competing models. Every model is then updated in proportion to the weight assigned to that model by this Gaussian weighting function, such that

    \mathbf{w}_i(n+1) = \mathbf{w}_i(n) + \frac{\eta(n)\,\Lambda_{i,j}(n)\, e_i(n)\,\mathbf{x}(n)}{\gamma + \|\mathbf{x}(n)\|^2}, i = 1, …, M,    (5-3)

where \mathbf{w}_i is the weight vector of the ith model. Assuming the jth model wins the competition, \Lambda_{i,j}(n) is the weighting function defined by

    \Lambda_{i,j}(n) = \exp\left(-\frac{d_{ij}^2}{2\sigma^2(n)}\right),    (5-4)

where d_{ij} is the Euclidean distance between the indices i and j, which is equal to |j-i|, \eta(n) is the annealed learning rate, and \sigma^2(n) is the Gaussian kernel width, which decreases exponentially as n increases. The learning rate also decreases exponentially with n.

Soft competition preserves the topology of the input space, updating the models neighboring the winner; thus it is expected to result in smoother transitions between models specializing in topologically neighboring regions (of the state space). However, an empirical comparison using BMI data between the hard and soft competition update rules shows no significant difference in terms of model performance (possibly due to the nature of the data set). Therefore, we prefer the hard competition rule for its simplicity.
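The hard-competition rules (5-1) and (5-2) can be sketched in a few lines. This is an illustrative toy run, not the BMI training code: the function name, the constants, and the single-regime synthetic trajectory are my assumptions:

```python
import numpy as np

def train_competitive(X, d, M=4, lam=0.3, eta=0.5, gamma=1e-4, seed=0):
    """Hard-competition NLMS training of M linear models, per (5-1)-(5-2).
    lam plays the role of the leaky integrator's time constant, and gamma
    is the small NLMS normalization constant."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((M, X.shape[1]))
    eps = np.zeros(M)                        # leaky integrated squared errors
    for x_n, d_n in zip(X, d):
        e = d_n - W @ x_n                    # errors of all M models
        eps = (1 - lam) * eps + lam * e**2   # (5-1): update every integrator
        j = int(np.argmin(eps))              # winner: least integrated error
        W[j] += eta * e[j] * x_n / (gamma + x_n @ x_n)   # (5-2): winner only
    return W, eps

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
d = X @ np.array([0.5, -1.0, 2.0])           # toy single-regime trajectory
W, eps = train_competitive(X, d)
best = int(np.argmin(eps))                   # the model that specialized
```

On this single-regime toy data, one model quickly monopolizes the competition and converges to the generating weights; with piecewise data, different models would capture different local regions, as Fig. 5-2 shows for the real recordings.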

With the competitive training procedure, each model can specialize in local regions of the joint space. Figure 5-2 demonstrates the specialization of 10 trained models by plotting their outputs (black dots) for the common input data (40 seconds long) in the 3D hand trajectory space. Each model's outputs are simultaneously plotted on top of the actual hand trajectory (red lines) synchronized with the common input. The figure shows that the input-output mappings learned by each model display some degree of localization, although overlaps are still present. These overlaps may be consistent with a neuronal multiplexing effect as depicted in Carmena et al. [Car03], which suggests that the same neurons modulate for more than one motor parameter (the x- and y-coordinates of hand position, velocity, and gripping force).

Figure 5-2. Demonstration of the localization of competitive linear models.

The competitive local linear models, however, require additional information for switching when applied to BMIs, since the desired signal that is necessary to select a winning model is not available after training in practice. A gate function, as in the mixture of experts [Jac91], utilizing the input signals needs to be trained to select a local model. Here, we opt for an MLP that directly combines the predictions of all models. Therefore, the overall architecture can be conceived as a nonlinear mixture of competitive linear models


(NMCLM) [Kim03b]. This procedure facilitates the training of each model compared to the TDNN, since only one linear model is trained at a time in the first stage, while only a relatively small number of weights are trained by error backpropagation [Hay96b] in the second stage.

Time Delay Neural Networks

In the TDNN, the mapping between neural activity and hand trajectories is estimated by nonlinearly combining bin counts (and their past values) from each neuron. The tap delay lines in the input layer preset the memory to account for temporal dependencies in neural activity. This architecture has a single hidden layer with sigmoid nonlinearities and an output layer with linear processing elements (PEs). The output of the TDNN is given by

\[ \mathbf{y}(n) = \mathbf{W}_2 f(\mathbf{W}_1^T \mathbf{x}(n) + \mathbf{b}_1) + \mathbf{b}_2 \]

where the weight matrices and bias vectors W_1, W_2, b_1, and b_2 are trained by the error backpropagation algorithm.

BMIs Design Using NMCLM

NMCLM is trained with the same sets of data used for the Wiener filter in chapter 3. The topology consists of 10 competitive linear models for each coordinate and a single-hidden-layer MLP with M inputs (M = 10C, where C is the output dimension: 2 or 3), 30 hidden PEs with hyperbolic tangent (tanh) functions, and C linear output PEs to predict each hand position coordinate. Each linear model has the same topology as the one used in chapter 3. The number of multiple models and the number of hidden PEs were chosen empirically (although not optimized). The hard competition learning rule is utilized along with NLMS for the training of the linear models, and the conjugate gradient algorithm is used to train the MLP. The training of the MLP is repeated with 100 random initial conditions, and the network with the least MSE is selected. The time constant of the leaky integrator is determined by the hold-out cross-validation method. The data is divided


into a 9000-sample training set and a 1000-sample validation set. The resulting time constants are 0.3 for the food reaching task and 0.6 for the target reaching task.

The TDNN is trained with the same input and desired response as NMCLM. The 30 PEs in the hidden layer use tanh nonlinearities. All the weights and biases are trained by the error backpropagation algorithm.

Even with the simpler training approach, there are over 30,000 parameters in NMCLM to be trained. Each linear model, with around 3,000 parameters, is trained with only a fraction of the total number of samples (the ones pertaining to its local area of the space), which is too many parameters for the restricted number of training samples. With linear models built from gamma filters, we can significantly reduce the number of parameters in the first layer of NMCLM while preserving the same level of computational complexity in training.

As will be shown in the next chapter, NMCLM results in superior generalization performance compared to the other linear models and the TDNN for food reaching. Substituting gamma filters for the FIR filters improves the performance further. Due to the difficulty of training a large number of parameters in the TDNN with error backpropagation, its performance suffers even compared with the linear models. However, these nonlinear models do not exhibit any significant improvement for target reaching. This will be discussed in the following chapter.

Analysis

Evaluation of Training Performance for NMCLM

Now, we demonstrate the advantage of training in NMCLM compared to the TDNN using the food reaching data. The topology proposed in NMCLM is basically equivalent to a three-layer network: the first layer of weights consists of the competitive


model coefficients, and the second and third layers of weights are simply the weights of the following MLP. In this topology, the first hidden layer and the output layer have linear PEs, whereas the second hidden layer has nonlinear PEs. In the NMCLM approach, the first layer weights are trained competitively to predict the desired signal, whereas the MLP is optimized using error backpropagation.

In order to quantify the performance of this training procedure from an information-theoretic point of view, we evaluate the mutual information [Cov91], I(z_C, d), between the outputs of the competitive models, z_C, and the desired output, d. Using a Parzen window estimator for the mutual information [Erd02] on ten arbitrary segments of the hand trajectory (each of length 1000 samples), the average and standard deviation of I(z_C, d) is found to be 8.97 nats (± 1.21 nats). The maximum mutual information allowed by this model and data, obtained by estimating I(z_C, d̂), is 9.83 nats (± 1.19 nats). Percentage-wise, the information contained in the competitive model outputs pertaining to the desired output is thus 92% (± 6%). From this, we conclude that the information loss in the first layer is just 8% (± 6%).

For comparison, another network with the same topology is trained as follows. The MLP weights are borrowed from the second hidden layer and the output layer of the above network (in order to ensure identical information loss at this stage). The first layer weights are then trained using standard backpropagation through these MLP weights, instead of using competition. This network, therefore, uses the minimum MSE solution for the first layer weights. Similarly, the mutual information I(z_B, d) between the output of the first layer of this network, z_B, and the desired output d is calculated to be 7.42 nats (± 1.35 nats). For this network, the maximum mutual information is 10.90 nats (± 0.40 nats).
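The quantity I(z_C, d) above is estimated with the Parzen-window method of [Erd02]. As a rough illustration of what is being measured, the sketch below uses a simple histogram plug-in estimator for scalar variables instead; this is a crude stand-in, not the estimator used in this work, and the function name and bin count are illustrative.

```python
import numpy as np

def mutual_information_hist(x, y, bins=16):
    """Histogram plug-in estimate of I(X;Y) in nats for paired scalar samples.

    A crude stand-in for the Parzen-window estimator used in the text,
    adequate only to illustrate what I(z_C, d) measures.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                    # joint probability table
    px = pxy.sum(axis=1, keepdims=True)      # marginal of X
    py = pxy.sum(axis=0, keepdims=True)      # marginal of Y
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Strongly dependent pairs yield a much larger estimate than independent ones, which is the sense in which I(z_C, d) quantifies how much of the desired signal survives the first layer.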


These correspond to an information-transfer percentage of 68% (± 11%). Therefore, the information loss in the first layer of the second network is 32% (± 11%).

In summary, the mutual information between the desired output and the competitive model outputs is larger than that of the first layer outputs of the equivalent TDNN (all of whose weights are trained only by error backpropagation), which shows that the training in NMCLM is more efficient.

Analysis of Linear Filters

It is intriguing to ask what the value is of adapting the parameters in the input layer, where most of the weights reside. For this, we analyze the pole-zero plots of the trained FIR filters for each neuron from the multiple linear models. In this analysis, we verify that there are only minor variations in the pole-zero plot regardless of the neuron or the adaptation procedure. Figure 5-3 shows the frequency responses of the 10 linear filters (with 10-tap delay lines) in NMCLM for the food reaching task for a specific neuron. These frequency responses indicate that they are all lowpass filters, and the locations of the zeros (denoted by different markers for different models) are similar for all models. This means that the role of the filters is to lowpass filter (smooth) the input. As depicted, the zeros tend to be placed at equal intervals very close to the unit circle. The major difference seems to be the gain at DC. Hence, one can synthesize an alternate adaptive filter that displays a very similar response and has only two free parameters, as

\[ H(z) = G\,\frac{1 - z^{-10}}{1 - \mu z^{-1}} \tag{5-5} \]

where the two free parameters encode the gains (G) and the locations of the pole of the


Figure 5-3. Frequency responses of ten FIR filters: (left) pole-zero plots; (right) frequency responses.

filter for each neuron (μ), imposing the constraint |μ| < 1. The number of NMCLM weights with this filter for the estimation of one output coordinate can then be reduced from 30,630 to 6,870. The performance evaluation of this simplified model shows a slightly lower level compared to the original performance (the performance profile is similar to the Wiener filter for the prediction of movement, while superior to the linear models for rest).¹ This indicates that a variable gain control and a variable integration over time per neuron seem sufficient to derive optimal models for BMIs. These characteristics can be obtained by a multitude of systems that are much easier to implement and do not even require adaptation. Further work will be pursued along this line.

¹ The numerical results are as follows: CC(move) = 0.76 (± 0.18), SER(move) = 4.61 dB (± 2.31 dB), CC(rest) = 0.01 (± 0.26), SER(rest) = 7.67 dB (± 4.43 dB). See chapter 6 for the comparison of these results with others.
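The two-parameter filter of (5-5) can be sketched as a direct difference equation (a NumPy illustration; the function name is ours, and the tap count L = 10 matches the delay line above):

```python
import numpy as np

def simplified_filter(x, G, mu, L=10):
    """Apply H(z) = G (1 - z^-L) / (1 - mu z^-1), eq. (5-5), as the
    difference equation y[n] = mu*y[n-1] + G*(x[n] - x[n-L])."""
    assert abs(mu) < 1.0                 # stability constraint on the pole
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        past = y[n - 1] if n >= 1 else 0.0
        delayed = x[n - L] if n >= L else 0.0
        y[n] = mu * past + G * (x[n] - delayed)
    return y
```

With the pole μ close to 1, it nearly cancels the numerator zero at DC and the filter behaves like a gain-scaled moving sum of the last 10 samples, matching the smoothing interpretation above; at μ = 1 exactly, (1 − z⁻¹⁰)/(1 − z⁻¹) reduces to a 10-tap moving sum.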


CHAPTER 6
COMPARISON OF MODELS

In this chapter, we summarize the evaluation of the generalization performance of all models introduced so far in this dissertation. We emphasize, however, that the comparison is done for datasets of 100-200 simultaneously recorded neurons for which the standard Wiener filter algorithm yielded very good performance. As the number of simultaneously recorded neurons, the task complexity, and the complexity of the predicted motor parameters increase, what appear here only as tendencies may become important for BMI designs.

Before presenting the comparison results, we first show the outputs of every model along with the actual hand trajectories, for food reaching in Fig. 6-1 and for target reaching in Fig. 6-2, respectively. Since our approaches have been developed with the Wiener filter as a gold standard, the observations in these figures are mainly made by comparing the trajectories of the models with that of the Wiener filter. First, we can observe in Fig. 6-1b that NLMS predicts rest positions better than the Wiener filter. This illustrates how a time-varying learning rate in NLMS can help track nonstationary data. Next, we can see that the regularized models yield smoother output trajectories than the Wiener filter, especially during rest. It is also easily seen that NMCLM provides the most accurate prediction in Fig. 6-1f. NMCLM shows its ability to stay in the rest position with little jitter, and to track rapid changes of the hand trajectory during movements. This may be due to the nonlinear structures in NMCLM. On the other hand, in Fig. 6-2, all models show similar prediction performance for


Figure 6-1. The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x-, y-, and z-coordinates for the 3D food reaching task on a sample part of the test data: (a) the Wiener filter; (b) the linear filter with NLMS; (c) the subspace Wiener filter; (d) the gamma filter; (e) the linear filter regularized by weight decay; and (f) NMCLM.
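The trajectory comparisons in Figs. 6-1 and 6-2 are quantified later by the correlation coefficient (CC) and the signal-to-error ratio (SER). A reference sketch of these two measures is given below, assuming SER is the ratio of desired-signal power to error power in dB (the usual convention in this line of work; the function name is ours):

```python
import numpy as np

def cc_and_ser(actual, estimated):
    """Correlation coefficient and signal-to-error ratio (dB) between
    an actual and an estimated trajectory."""
    actual = np.asarray(actual, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    err = actual - estimated
    cc = np.corrcoef(actual, estimated)[0, 1]
    # SER: desired-signal power over error power, in decibels
    ser_db = 10.0 * np.log10(np.sum(actual ** 2) / np.sum(err ** 2))
    return cc, ser_db
```

A near-perfect estimate gives CC close to 1 and a large positive SER; a poor estimate drives SER negative even when CC remains moderate, which is why both measures are reported side by side in the tables below.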


Figure 6-2. The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x- and y-coordinates for the 2D target reaching task on a sample part of the test data: (a) the Wiener filter; (b) the linear filter with NLMS; (c) the subspace Wiener filter; (d) the gamma filter; (e) the linear filter regularized by weight decay; and (f) NMCLM.


target reaching. No model visually outperforms the others in the output trajectories. The performance measures presented later will demonstrate this similarity of performance (although there are statistical differences between models).

Comparison of Model Parameters

We now compare the weights of four linear models: the Wiener filter, the linear model trained by NLMS, the gamma filter, and the linear model regularized by weight decay. Since the number of tap delays differs among the models, the weights must be represented per neuron (not per tap at each time lag). Hence, we compute the average magnitude of the weights over the tap delays and over the three (or two) output dimensions. Then, the standard deviation of each neuron's data, estimated from the training set, is multiplied by the average magnitude to obtain a measure of neuronal contribution; that is, the average sensitivity of the output to individual neurons [San03a]. Figure 6-3 shows the calculated sensitivities of each model for both food reaching and target reaching. Note that we rescale the sensitivity values to lie in [0, 1] in order to facilitate the visual comparison.

It can be observed in Fig. 6-3a that the normalized weight magnitude distributions are similar among the models except for the gamma filter. The weight distribution of NLMS follows that of the Wiener filter, but it exhibits smaller magnitudes when the corresponding neurons do not contribute much. This may be explained by the regularization property of NLMS with the constraint on the weights, as presented in chapter 4. Weight decay also prunes weights, generating a sparse weight distribution, which can enhance generalization. The weight distribution of the gamma filter may differ from the others since it utilizes a different time scale. It weights more heavily the neurons indexed 57, 84, 87 and 94, where neuron 57 is the neuron with the highest firing rate, and neuron 94 is one of the


highest sensitivity neurons according to the analysis in Sanchez et al. [San03b]. For the target reaching task, as shown in Fig. 6-3b, all models present similar weight magnitude distributions, which may explain the similar performance of all models.

Figure 6-3. The distributions of normalized weight magnitudes of the four linear models over the neuronal space for (a) food reaching, and (b) target reaching.
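The sensitivity measure described above (mean absolute weight per neuron, scaled by that neuron's standard deviation [San03a]) can be sketched as follows. The array layout, with taps stored contiguously per neuron, is an assumption made for illustration:

```python
import numpy as np

def neuronal_sensitivity(W, X, n_taps, normalize=True):
    """Per-neuron sensitivity: mean |weight| over taps and output
    dimensions, scaled by each neuron's standard deviation [San03a].

    W : (n_neurons * n_taps, n_outputs) weights; rows are assumed grouped
        by neuron, with the taps of one neuron stored contiguously.
    X : (n_samples, n_neurons) training bin counts.
    """
    n_neurons = X.shape[1]
    # average |weight| over taps and output coordinates for each neuron
    mag = np.abs(W).reshape(n_neurons, n_taps, -1).mean(axis=(1, 2))
    s = mag * X.std(axis=0)                  # weight by neuronal variability
    return s / s.max() if normalize else s   # rescale into [0, 1]
```

Plotting this vector over the neuron index reproduces the kind of comparison shown in Fig. 6-3.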


Performance Evaluation

Tables 6-1 and 6-2 summarize the generalization performances of all models using the measures introduced in chapter 3. For food reaching, there are ten reaching movements in the test data over which the performances are measured. The CEM curves of all models are presented in Fig. 6-4. Since the CEM curve measures the probability that the distance between the estimated and actual hand positions is less than a given quantity represented

Table 6-1. The generalization performances of linear models and nonlinear models for the 3D food reaching task.

Model          # of weights   CC (move)      SER (move) (dB)   CC (rest)      SER (rest) (dB)
Wiener         2973           0.76 ± 0.19    4.76 ± 1.87       0.03 ± 0.22    2.40 ± 2.80
NLMS           2973           0.75 ± 0.20    4.85 ± 2.11       0.06 ± 0.22    3.40 ± 2.76
Gamma          1191           0.78 ± 0.19    5.25 ± 1.97       0.07 ± 0.21    3.59 ± 3.11
Subspace       1113           0.77 ± 0.18    4.84 ± 2.06       0.09 ± 0.20    3.78 ± 2.57
Weight decay   < 2973         0.77 ± 0.18    4.73 ± 2.04       0.07 ± 0.22    3.76 ± 2.78
Kalman         1017           0.78 ± 0.20    4.32 ± 1.97       0.05 ± 0.25    2.26 ± 3.85
TDNN           29823          0.77 ± 0.17    4.87 ± 2.56       0.02 ± 0.22    3.29 ± 5.67
NMCLM (FIR)    30753          0.81 ± 0.15    5.90 ± 3.00       0.03 ± 0.22    5.64 ± 4.00
NMCLM (Gamma)  12933          0.81 ± 0.19    6.08 ± 3.19       0.06 ± 0.23    6.23 ± 5.23

Table 6-2. The generalization performances of linear models and nonlinear models for the 2D target reaching task.

Model          # of weights   CC (x)         SER (x) (dB)      CC (y)         SER (y) (dB)
Wiener         3842           0.66 ± 0.02    2.42 ± 0.54       0.48 ± 0.10    1.08 ± 0.52
NLMS           3842           0.68 ± 0.03    2.42 ± 0.55       0.50 ± 0.08    0.90 ± 0.49
Gamma          3842           0.70 ± 0.02    2.81 ± 0.69       0.53 ± 0.09    1.55 ± 0.43
Subspace       882            0.70 ± 0.03    2.80 ± 0.83       0.58 ± 0.08    1.90 ± 0.57
Weight decay   < 3842         0.71 ± 0.03    2.79 ± 0.92       0.57 ± 0.08    1.75 ± 0.46
Kalman         1188           0.71 ± 0.03    2.77 ± 0.65       0.58 ± 0.10    1.63 ± 0.76
TDNN           57691          0.65 ± 0.03    2.24 ± 0.59       0.51 ± 0.08    1.10 ± 0.39
NMCLM (FIR)    58622          0.67 ± 0.03    2.62 ± 0.53       0.50 ± 0.07    1.23 ± 0.40
NMCLM (Gamma)  58622          0.67 ± 0.02    2.55 ± 0.61       0.47 ± 0.07    0.95 ± 0.40


at the x-axis, the closer the curve is to the upper left corner, the better the corresponding model performs. To visualize the performance clearly, we give an instance of the CEM profile at a certain distance; the models are listed in the order of Pr(|e| ≤ 20 mm), where the top model exhibits the highest probability. Figure 6-4 shows that the differences among models are more distinguishable in the food reaching task than in the target hitting task. Also, NMCLM demonstrates superior performance for the food reaching task, while it does not improve performance for the target hitting task.

Figure 6-4. Comparison of the CEM of the nine models for (a) the food reaching task, and (b) the target reaching task.

Statistical Performance Comparison

To quantify the performance evaluations obtained above, we test the statistical difference between the Wiener filter and all the other models [Kim05a]. We first assume that the average magnitude of the error vector, E[|e|], on the test data is a sufficient measure of model performance. To compare the performance of different models, we test the difference between the distributions of E[|e|]. E[|e|] is locally estimated in individual 4-second non-overlapping time windows throughout the test data (approximately 3,000 seconds long). Since a summation is used to estimate the mean, the set of E[|e|] can be


assumed to be drawn from a Gaussian distribution based on the central limit theorem (CLT). Also, the use of non-overlapping windows approximately satisfies the independence condition between the estimates of E[|e|] from different windows. Therefore, the t-test can be applied to the set of E[|e|].

In order to set up a test between one model and the Wiener filter, we first define Δ as the difference between E[|e|] for the Wiener filter and for one of the other models,

\[ \Delta(k) = E[|\mathbf{e}|]_W(k) - E[|\mathbf{e}|]_M(k) \tag{6-1} \]

where E[|e|]_M(k) denotes the average magnitude of the error vectors in the kth window for the model under comparison, and E[|e|]_W(k) that for the Wiener filter. Note that Δ is a Gaussian random variable, since a linear combination of the two Gaussian variables E[|e|]_M and E[|e|]_W is also Gaussian. Then, we apply the t-test to Δ with the realizations {Δ(k)}. The hypotheses for the one-tail t-test then become

\[ H_0: E[\Delta] \le 0, \qquad H_A: E[\Delta] > 0 \tag{6-2} \]

Given a significance level α, if the null hypothesis is rejected, we can claim with a confidence level of (1 − α) that the compared model performs better than the Wiener filter.

The t-test results are presented in Table 6-3. For the food reaching task, every model performs better than the Wiener filter except the TDNN. Note that the TDNN shows a higher level of mean SER during rest, but with a relatively large variance. For the target reaching task, however, only the three linear models pruned by regularization are shown to


outperform the Wiener filter. These results are fairly consistent with the results in Tables 6-1 and 6-2, and Fig. 6-4.

Table 6-3. The t-test results for the difference of the magnitude of the error vectors on the test dataset between the Wiener filter and the other models.¹

                     Food reaching       Target reaching
Significance level   0.01      0.05      0.01      0.05
NLMS                 1         1         0         0
Gamma                1         1         1         1
Subspace             1         1         1         1
Weight Decay         1         1         1         1
Kalman               0         0         0         1
TDNN                 0         0         0         0
NMCLM (FIR)          1         1         0         0
NMCLM (Gamma)        1         1         0         0

¹ A test result of 0 indicates acceptance of the null hypothesis, while 1 indicates rejection of the null hypothesis.
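The windowed test above can be sketched as follows. This is a simplified illustration: with hundreds of 4-second windows the t distribution is close to standard normal, so the sketch compares the t statistic against a fixed critical value rather than using the exact t distribution as the dissertation does; the function name and default critical value are ours.

```python
import numpy as np

def one_tailed_test(e_wiener, e_model, z_crit=2.326):
    """One-tailed test of H0: E[Delta] <= 0 vs HA: E[Delta] > 0, where
    Delta(k) = E[|e|]_W(k) - E[|e|]_M(k) over non-overlapping windows.

    e_wiener, e_model : per-window mean error magnitudes E[|e|](k)
    z_crit : critical value (2.326 ~ alpha = 0.01, one-tailed); with many
             windows the t statistic is approximately standard normal.
    Returns 1 if H0 is rejected (the model beats the Wiener filter), else 0.
    """
    delta = np.asarray(e_wiener, dtype=float) - np.asarray(e_model, dtype=float)
    t = delta.mean() / (delta.std(ddof=1) / np.sqrt(len(delta)))  # t statistic
    return int(t > z_crit)
```

A 1 corresponds to the "rejection" entries of Table 6-3; swapping the two arguments tests the opposite direction.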


CHAPTER 7
MULTIRESOLUTION ANALYSIS FOR BMI

Most designs of decoding algorithms for BMIs, including our models, have used an estimate of the local firing rate of the neurons obtained by binning neural spikes with a non-overlapping sliding time window of length ranging from 50 ms up to 100 ms [Cha99, Ser02, Tay02 and Wes00]. These representations of the firing rate have been used to model the relationship with the corresponding motor parameters. Adaptive models (both linear and nonlinear) based on this estimate have predicted motor parameters with a correlation coefficient between 0.6 and 0.8. However, it has also been shown in the previous chapter that all the models reached the same basic performance level, especially for the target reaching task, which may not be sufficient for more involved real applications. These results lead us to revisit our approach to designing decoding models: extracting advanced features from the neural data, followed by developing adequate mathematical decoding algorithms and topologies, may yield a better decoding model. Extracting desirable features from complex, high-dimensional neuronal data is, though, an open problem requiring intensive study. Yet, we present here a simple approach based on the representational space for neuronal firing activity, which will demonstrate how extracting features from the input can help to improve model performance.

In our approach, we revise the present representation of the local firing rate, encoded in a series of bin counts within a fixed-width time window. Since a local firing rate can represent the local frequency of a neural spike train, the features can be extracted based


on local frequency. One method for representing local frequency information is multiresolution analysis [Mur04], usually realized with the wavelet transform. With multiresolution analysis, it is possible to represent the time-frequency characteristics of a signal: we can obtain as many local frequency components as we want at a given time instance. Hence, a multiresolution analysis of neural spikes may provide richer information about neuronal behavior than binning with a fixed-width time window.

If we consider multiresolution analysis for spike trains, it is easy to see that the binning process is nothing but a discrete wavelet transform (DWT) using a Haar wavelet [Dau92]. However, since the original DWT is basically a non-causal process, a wavelet transform featuring causality should be considered. For this purpose, we employ the à trous wavelet transform [She92] to implement a causal DWT. With this procedure, the multiresolution analysis of spike trains can be regarded as binning the spike trains with multi-scale windows. Hence, the decoding models, which have been designed for bin count data, need not be fundamentally modified for the multiresolution data.

With the multiresolution data, however, regularization of the decoding models must be considered, due to the increased input dimensionality and the collinearity between input channels. Among the regularization techniques used in data mining, a method based on the L1-norm penalty is more suitable, since it generates a sparser model than those based on the L2-norm penalty. It also enables us to understand the association of neuronal activities with behavior by selecting the more correlated channels.

Similar work on multiresolution analysis of neural spike trains has been done in various research groups. Lee has estimated the cross-spectrum using wavelet


analysis between simultaneously recorded spike trains, revealing phase-locked oscillations between spike trains [Lee02]. Laubach has demonstrated wavelet-based processing of spike trains from the motor cortex of a behaving rat [Lau04]. He utilized discriminant pursuit (DP) [Buc95], which is based on wavelet analysis, to improve discriminant analysis methods for better statistical prediction of temporally localized events. Cao has worked on Haar wavelet analyses of spike trains to understand the characteristics of spike trains and to enhance the decoding models in neural prosthetic systems [Cao03]. This work seems to be the most relevant to the approach presented here. However, one of the major differences is that he pruned wavelet coefficients by using information-theoretic measures (e.g., the mutual information) between each neuron and behavior, followed by building decoding models (e.g., Bayesian classifiers) with those pruned coefficients. On the other hand, we include all wavelet coefficients in the input channels of the linear model and prune the inputs by a regularization technique. Therefore, in our approach we can select wavelet coefficients that explicitly contribute to the output of the designated model architecture, while the coefficients selected by the mutual information method may not directly contribute to the specific decoding model. We also propose to use the à trous wavelet transform instead of the standard DWT, to link the multiresolution analysis with the binning process for real-time applications, which has not been explicitly shown in Cao's work.

In this chapter, we design a linear model with multiresolution input data for BMIs, which is learned by a regularization method based on the L1-norm penalty. The multiresolution input for each neuron is composed of the instantaneous spike counts binned by multiple time windows of various widths. We investigate the trained linear


model using the multiresolution input to analyze neuronal firing activities. Next, a comparison of the multiresolution-based model with the single-resolution model is demonstrated. For this comparison, each channel of the multiresolution input is embedded with a time delay line in the same way as the single-resolution model is formed (see Figure 3-1 for this structure). The performances of the two models are evaluated. Finally, a combination of linear and nonlinear networks is considered to investigate the possibility of performance improvement over linear models. With the optimally designed linear model using the multiresolution input, an additional nonlinear network is added in order to further reduce the residuals from the learned linear model. This approach will help us to understand how much a nonlinear structure can help a linear model when we utilize the multiresolution input. We would like to remark here that the data used in this chapter are collected from Aurora, unlike the previous chapters.

Multiresolution Analysis of Neuronal Spike Trains

The overall procedure of the multiresolution analysis in BMIs is as follows. The multiresolution analysis based on the Haar wavelet is applied to the spike trains of 185 neurons recorded in the cortical areas of a Rhesus monkey (Aurora); see chapter 2 for data descriptions. The Haar à trous wavelet transform [Zhe99] is utilized to perform the multiresolution analysis on individual spike trains. The resulting wavelet coefficients (or equivalently, the multi-scale bin count data) are used as the input data to a linear model. The linear model is learned by a regularization method to predict the hand trajectories. An analysis of the model parameters is performed to investigate the association of single neurons with target reaching movements.
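The L1-penalized training mentioned above can be illustrated with a generic solver. The text does not fix a particular algorithm, so the sketch below uses ISTA (iterative soft thresholding), one standard choice; all names, the step size rule, and the constants are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty: shrinks values toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, d, lam, n_iter=1000):
    """L1-regularized least squares via ISTA (a generic sketch, not the
    dissertation's solver). Minimizes ||d - X w||^2 / (2n) + lam ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - d) / n              # gradient of the LS term
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

The soft-threshold step is what drives irrelevant input channels exactly to zero, which is the sparsity and channel-selection property the chapter relies on.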


Multiresolution Analysis

The multiresolution analysis of a neural spike train can be performed via the wavelet transform. To facilitate the computation, we apply the discrete wavelet transform with dyadic Haar wavelets. This dyadic Haar wavelet is the one utilized in the à trous wavelet transform, which can be implemented very effectively in hardware. The Haar wavelet is the simplest form of wavelet and was introduced in the earliest development of the wavelet transform [Dau92]. Here, we only introduce the functional form of the Haar wavelets; details of the Haar wavelet transform can be found in [Dau92].

Let us first define the Haar scaling function as

\[ \phi(x) = \begin{cases} 1 & \text{if } x \in [0, 1) \\ 0 & \text{otherwise} \end{cases} \tag{7-1} \]

Let V_j be the set of functions of the form

\[ \sum_k a_k\, \phi(2^j x - k) \tag{7-2} \]

where a_k is a real number and k belongs to the integer set; a_k is nonzero for only a finite set of k. V_j is the set of all piecewise constant functions with finite support, whose discontinuities belong to the set

\[ \left\{ \ldots,\ -\frac{2}{2^j},\ -\frac{1}{2^j},\ 0,\ \frac{1}{2^j},\ \frac{2}{2^j},\ \ldots \right\} \tag{7-3} \]

Note that V_0 ⊂ V_1 ⊂ V_2 ⊂ ⋯. The Haar wavelet function is defined by

\[ \psi(x) = \phi(2x) - \phi(2x - 1) \tag{7-4} \]

If we define W_j as the set of functions of the form

\[ \sum_k a_k\, \psi(2^j x - k) \tag{7-5} \]

then it follows that


\[ V_j = W_{j-1} \oplus \cdots \oplus W_1 \oplus W_0 \oplus V_0 \tag{7-6} \]

where ⊕ denotes the union (direct sum) of two orthogonal sets.

The discrete wavelet transform (DWT) using dyadic scaling is often used due to its practical effectiveness. The output of the DWT traditionally forms a triangle representing all resolution scales. This form results from decimation (keeping one sample out of every two), and has the advantage of reduced computational complexity and storage. However, with the decimated output it is not possible to obtain a representation at every scale for every time instance. This problem can be overcome by a non-decimated DWT [Aus98], which requires more computation and storage. The non-decimated DWT can be formed in two ways: 1) the successive resolutions are obtained by convolving a given signal with an incrementally dilated wavelet function; or 2) the successive resolutions are formed by smoothing with an incrementally dilated scaling function and taking the difference between successive smoothed data. The à trous wavelet transform follows the latter procedure to produce a multiresolution representation of the data. In this transform, successive convolutions with a discrete filter h are performed as

\[ v_j(k) = \sum_l h(l)\, v_{j-1}(k + 2^{j-1} l) \tag{7-7} \]

where v_0(k) = x(k), the original discrete-time series. In its first introduction [She92], the filter h was defined as a B3 spline: (1/16, 1/4, 3/8, 1/4, 1/16). Then, the difference between successive smoothed outputs is computed as

\[ w_j(k) = v_{j-1}(k) - v_j(k) \tag{7-8} \]


where w_j represents the wavelet coefficients. It is clear that the original time series x(k) can be decomposed as

\[ x(k) = v_S(k) + \sum_{j=1}^{S} w_j(k) \tag{7-9} \]

with S being the number of scales. The computational complexity of this algorithm is O(N) for data of length N.

Note that the à trous wavelet transform does not account for a causal time series, where future data are not available for the present computation of the wavelet transform. To apply the à trous wavelet transform in such a case, the Haar à trous wavelet transform can be used [Zhe99]. The Haar à trous wavelet transform can be regarded as the merger of the non-decimated DWT (by the à trous wavelet transform) with the Haar wavelet transform. The difference in the Haar à trous wavelet transform from the original à trous wavelet transform is that h is now replaced by the filter (1/2, 1/2). For a given discrete-time series x(k) (= v_0(k)), the first resolution is obtained by convolving v_0(k) with h such that

\[ v_1(k) = \tfrac{1}{2}\left( v_0(k) + v_0(k-1) \right) \tag{7-10} \]

And the wavelet coefficients are obtained by

\[ w_1(k) = v_0(k) - v_1(k) \tag{7-11} \]

For the jth resolution,

\[ v_j(k) = \tfrac{1}{2}\left( v_{j-1}(k) + v_{j-1}(k - 2^{j-1}) \right) \tag{7-12} \]

\[ w_j(k) = v_{j-1}(k) - v_j(k) \tag{7-13} \]
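Equations (7-10)–(7-13) can be transcribed directly. The sketch below is a NumPy illustration; treating samples before the start of the series as zero is a boundary choice the text does not specify.

```python
import numpy as np

def haar_a_trous(x, n_scales):
    """Causal Haar a trous transform, eqs. (7-10)-(7-13).

    Returns (v, w): the smooth outputs v_0..v_S with shape
    (n_scales + 1, len(x)) and the wavelet coefficients w_1..w_S with
    shape (n_scales, len(x)). Samples before the start are taken as zero.
    """
    v = [np.asarray(x, dtype=float)]              # v_0(k) = x(k)
    w = []
    for j in range(1, n_scales + 1):
        shift = 2 ** (j - 1)
        past = np.concatenate([np.zeros(shift), v[-1][:-shift]])
        v_j = 0.5 * (v[-1] + past)                # eq. (7-12)
        w.append(v[-1] - v_j)                     # eq. (7-13)
        v.append(v_j)
    return np.stack(v), np.stack(w)
```

By construction the differences telescope, so the reconstruction property (7-9), x(k) = v_S(k) + Σ_j w_j(k), holds exactly.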


Hence, the computation in this wavelet transform at time k involves only information at k − l, with l a nonnegative integer.

The Haar à trous wavelet transform can provide a set of features from the time series data. One possible feature set can be extracted from the decomposition described in (7-9), where the wavelet coefficients {w_1(k), ..., w_{S-1}(k)} and the last convolution output v_{S-1}(k) are selected. However, if we seek to associate the Haar à trous wavelet transform with the binning process for spike trains, the set {v_0(k), ..., v_{S-1}(k)} can be translated into bin count data with multiple bin widths. To yield the multi-scale bin count data using (7-10), we only have to multiply v_j(k) by 2^j, such that

\[ u_j(k) = 2^j v_j(k), \qquad \text{for } j = 0, \ldots, S-1 \tag{7-14} \]

Hence, the convolution outputs of the Haar à trous wavelet transform can provide the feature set related to binning. In the following models for BMIs, we will utilize the scaled convolution outputs {u_j(k)} for j = 0, ..., S-1, or equivalently, the bin count data with different widths, as the input features.

Multiresolution Analysis for the BMI Data

In order to apply the multiresolution analysis to the BMI data, we must choose a suitable set of scales. Although it is not straightforward to determine a set of scales for the Haar à trous wavelet transform of spike trains, we may take into consideration the characteristics of the neuronal data collected from our BMI paradigm. Basically, the smallest scale must be larger than 1 ms because of the refractory period of neuronal firing. Also, the largest scale need not exceed 1 sec, since it has been reported that past neuronal activity up to 1 second is correlated with the current


movement [Wes00]. In our experiments, we select eight scales starting at 5 ms up to 640 ms with dyadic scaling: 5, 10, 20, 40, 80, 160, 320, and 640 ms.¹

¹ The minimum 5 ms scale is chosen by empirical observation such that the bin count data are significantly different from raw spike trains containing 1's and 0's. However, it must be remarked that a more rigorous procedure for choosing the minimum scale may be necessary in a future study.

With the selected scales, the Haar à trous wavelet transform is performed on each neuronal spike train in Aurora's dataset. Instead of performing the wavelet transform directly on raw spike trains, we first generate the basic bin count data with a 5 ms non-overlapping window for every neuronal channel. Next, the Haar à trous wavelet transform is applied to the 5 ms bin count data at each neuronal channel, yielding the convolution output v_j(k) for j = 0, …, 7 following equation (7-12). Each series v_j(k) is then multiplied by 2^j to generate u_j(k). An illustrative example of the generated u_j(k) at a specific time instance k_0 is presented in Fig. 7-1.

Figure 7-1. An illustration of the scaled convolution output from the Haar à trous wavelet transform; u_j(k) for a given spike train at a time instance k_0. The number in each box denotes the value of u_j(k_0) for j = 0, …, 7. Note that the sampling rate of u_j(k) is 200 Hz for any j. In terms of a binning process, u_j(k) can be interpreted as the bin count data for a given spike train with a 5×2^j ms


time window that slides over time in steps of 5 ms. Therefore, u_j(k) with a larger j will contain more overlap between successive bins u_j(k) and u_j(k-1). Such overlaps then yield smoother temporal patterns of u_j(k) for larger j. The top panel of Fig. 7-2 demonstrates an example of u_j(k) of a specific neuron over a 5-second period. u_j(k) for each j is normalized to have a maximum value of 1. Darker pixels denote larger values. The set of u_j(k) are temporally aligned with the associated hand trajectories plotted in the bottom panel. In order to view the correlation of u_j(k) with the movement for each j, u_j(k) is separately plotted on top of the hand trajectory (the x-coordinate) in Fig. 7-3 (both u_j(k) and the hand trajectory are scaled to lie in a similar dynamic range for visualization purposes). It demonstrates that u_j(k) with larger j is more

Figure 7-2. An example of the series of u_j(k) along with the corresponding hand trajectories; (top) a matrix of u_j(k) in which each row represents the scale j for j = 0, …, 7 (i.e., 5 ms ~ 640 ms bin widths), and the columns represent time indices over a 5-second duration; (bottom) the trajectories of hand position and velocity at the x- (solid) and y- (dotted) coordinates.


correlated with the hand trajectory than u_j(k) with smaller j.

Figure 7-3. A demonstration of the relation between the neuronal firing activity representation at each scale (solid lines) and the hand position trajectory at the x-coordinate (dotted lines).

The Analysis of the Linear Model Based on the Multiresolution Representation

For further investigation of the relationship between the multiresolution representation of neuronal firing activities and target reaching movements, we develop a linear model using u_j(k) as inputs. The discrete-time series u_j(k) for each j is normalized to have zero mean and unit maximum magnitude so that the model avoids biasing toward the larger-scale inputs. With 185 neurons and 8 scales, the input dimension is 1,480. The multiresolution representation for the 320-sec training dataset (containing 320×200 = 64,000 samples) generates an input data matrix X (64,000×1,480), where each row represents the input feature vector at a given time instance. Then, a linear model is designed to predict the desired response (the x- or y-coordinate of hand position or velocity) vector d (64,000×1) with a linear combination of X such that


d = d̂ + e = Xw + e    (7-15)

where w is the model weight vector and e is the error vector. Note that the desired responses are normalized to have zero mean so that estimation of the y-intercept is not necessary.

Learning the model weight vector w can be achieved by a variety of methods. However, we must consider regularization in this model due to the very high input dimensionality (>1,000). We introduced several regularization methods in chapter 4. Among those techniques, an L1-norm based algorithm may be suitable since it generates a sparser model and enables the selection of input variables, which is useful for the analysis of the neuronal population. Here, we utilize the LAR algorithm, which learns w by the stagewise selection of input variables with constraints on the L1-norm of w. Recall that this algorithm is based on the assumption that the input channels (or columns of X) are not linearly dependent on each other.²

² Although a more thorough analysis must be executed, we can empirically test whether the rank of X is equal to the number of channels. The empirical results show that, at least for the matrix X used in this study, the input channels are not linearly dependent.

To determine the threshold for the L1-norm of the weight vector in the LAR algorithm, we utilize hold-out cross-validation. We hold out the last 10% of the training data as the validation set. The threshold is determined by minimizing the MSE on the validation set. The LAR algorithm stops learning when the L1-norm reaches this threshold.

The LAR algorithm selects a different subset of input channels for each desired response (there are four responses: the x- and y-coordinates of hand position and velocity). From the trained weight vectors, we select neurons that have nonzero weights for at least one scale (recall that there are eight scales per neuron). Then, we


examine the distribution of the selected neurons over multiple cortical areas. The number of selected neurons and its portion for each area are shown in table 7-1 (see table 2-1 for the description of cortical areas in Aurora's dataset). In this table, we can observe that more neurons are selected when predicting velocity. Although a biological analysis of this result remains to be done, it might be caused by the fact that the trajectory of velocity changes more rapidly than that of position, thus requiring finer resolution inputs.

Table 7-1. The number of selected neurons in each cortical area.³
             PMd        M1         S1         SMA        M1_ipsi.
Position-x   18 (27%)   27 (47%)    9 (24%)    7 (37%)    2 (40%)
Position-y   20 (30%)   26 (39%)   15 (39%)    7 (37%)    0 (0%)
Velocity-x   50 (76%)   46 (81%)   30 (79%)   15 (79%)    5 (100%)
Velocity-y   40 (61%)   42 (74%)   30 (79%)   13 (79%)    3 (60%)

³ The percentage is the ratio of the number of selected neurons to the total number of neurons in that area.

Figure 7-4 describes the selection results for each desired response. The black pixels denote the selected variables aligned in neuronal space (x-axis) with the scales on the y-axis. These graphs show that LAR prefers selecting inputs with larger scales, since the temporal trajectories of larger scales exhibit more correlation with movement, as shown in Fig. 7-3.

Comparison of Models with the Multiresolution Representation

We now seek to answer the following questions: Can the multiresolution representation of the neuronal firing activity improve the prediction performance of decoding models for BMIs compared to the single resolution representation? If so, how much does it improve performance?

Two linear models are designed with different input datasets; the first model receives the single resolution data, i.e., the bin count data with a fixed-width window of


Figure 7-4. The distribution of the selected input variables for (a) the x-coordinate and (b) the y-coordinate of position, and (c) the x-coordinate and (d) the y-coordinate of velocity.

80 ms as inputs, and the second model receives the multiresolution data with eight resolution levels (scales) from 5 ms up to 640 ms.⁴

⁴ The 80 ms bin width is chosen since it belongs to the set of scales. This means the single resolution representation can be viewed as a special case of the multiresolution representation using only one scale.

Normalization and embedding are applied to every channel in both inputs (single resolution and multiresolution input data); each input channel is normalized to have zero mean and unit maximum magnitude, and a 6-tap time delay line is used to embed the bin count data at each channel. This embedding results in a 1,110 (6×185) dimensional input space for the single resolution model and an 8,880 dimensional input space for the multiresolution model, respectively. The same training dataset as above (320-sec data) is used for both models. However, the number of training samples differs between the models since the two input datasets are binned with different windows: the single resolution data are generated by binning with an 80 ms non-overlapping window, yielding 4,000 samples for 320 seconds, and the


multiresolution data are generated at a 200 Hz rate, yielding 64,000 samples. Hence, the first model uses the desired signals sampled at 12.5 Hz, and the second model uses those sampled at 200 Hz, respectively. Both models are trained to predict 2D hand position and velocity by the LAR algorithm. The threshold for the L1-norm of the weights in the LAR algorithm is determined by hold-out cross-validation.

The number of nonzero weights after training is listed in table 7-2. It is noteworthy that the weights for the multiresolution data are more heavily pruned than those for the single resolution data. This may indicate redundancy between large-scale and small-scale inputs. The LAR algorithm, which exploits the correlation of the inputs with the hand trajectories, is thus inclined to select a large-scale input to preserve large correlation at the cost of losing temporal resolution. On the other hand, the LAR algorithm with single resolution inputs may need many more inputs to reduce the correlation of the selected inputs with the residuals.

Table 7-2. The number of nonzero weights.⁵
                    Position-x    Position-y    Velocity-x    Velocity-y
Single resolution   568 (51.1%)   675 (60.8%)   359 (32.3%)   379 (34.1%)
Multiresolution     344 (3.9%)    449 (5.1%)    379 (4.3%)    580 (6.5%)

⁵ The percentage is the ratio of the nonzero weights to the total number of weights.

Next, we examine which neurons are selected by LAR in both models. We collect neurons that are assigned at least one nonzero weight by LAR over all time lags (and scales for the multiresolution model). In table 7-3, the number of selected neurons in each model and the number of neurons selected in both models are given for the four desired responses. The last row of the table represents the number of neurons commonly selected by both models. We can see in this table that the neuronal subsets selected for the two models are very similar. It may indicate that


the multiresolution representation may not change what input information the linear model exploits relative to the single resolution bin data.

Table 7-3. The number of neurons selected by LAR for each model.
                    Position-x   Position-y   Velocity-x   Velocity-y
Single resolution   143          159          121          122
Multiresolution     117          144          124          160
Common              107          138          105          116

We use the three performance measures introduced in chapter 3 for both model outputs: the correlation coefficient (CC), the signal-to-error power ratio (SER), and the cumulative error metric (CEM). These measures are evaluated on the test dataset to assess generalization performance, as summarized in table 7-4. The evaluation reveals the superior performance of the linear model with multiresolution data compared to the one with single resolution data. The CEM curves are also shown in Fig. 7-5, from which we can observe that the probability that the length of the error vector is less than a given positive number is higher for the multiresolution model than for the single resolution model. However, it is also notable that the performance increase due to the multiresolution representation is rather marginal. To assess the statistical difference in performance between the two models, we perform the t-test based on MSE following the procedure presented in chapter 6. The null hypothesis of no difference between model

Table 7-4. Performance comparison between the multiresolution and the single resolution models.
             Single Resolution         Multiresolution
Measures     CC          SER           CC          SER
Position-x   0.73±0.02   3.37±0.73     0.78±0.03   4.11±0.84
Position-y   0.68±0.06   2.00±0.60     0.71±0.05   2.33±0.67
Velocity-x   0.71±0.03   2.80±0.82     0.73±0.03   3.11±0.84
Velocity-y   0.76±0.02   3.78±0.39     0.77±0.03   4.00±0.46


Figure 7-5. The CEM curves of the single resolution model (red dotted lines) and the multiresolution model (black solid lines); (a) for hand position and (b) for hand velocity.

performances is rejected at significance levels of both 0.05 and 0.01 (p < 0.001) for both position and velocity, statistically confirming the superior performance of the multiresolution model.

Combination of Linear and Nonlinear Models

We have observed that the neuronal firing activity features extracted by the multiresolution analysis could improve the prediction performance of a linear decoding model learned by a regularization method. However, the margin of this improvement over the model receiving the single resolution input is slight. Although there are plenty of possibilities for designing better linear models, we instead opt for adding supplementary nonlinear structures to the linear model, for the following reasons:

- We can preserve the trained parameters of the linear model, so that the combined model improves upon the linear model.
- A nonlinear structure may be able to predict some parts of the trajectory that the linear model is unable to track.
- It provides an opportunity to answer a general question: can nonlinear models help improve performance over linear models for BMIs?
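For reference, the t-test on MSE used above can be sketched as a paired test on per-window MSE values of the two models. The numbers below are surrogate data for illustration, not the study's measurements.

```python
import numpy as np

def paired_t(mse_a, mse_b):
    """Paired t statistic on per-segment MSEs of two models (df = n - 1).
    A large |t| rejects the null hypothesis of equal mean MSE."""
    d = np.asarray(mse_a, float) - np.asarray(mse_b, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Surrogate per-window MSEs: model B is consistently slightly better than A
rng = np.random.default_rng(6)
base = rng.uniform(0.5, 1.5, 50)
mse_a = base + 0.05 * rng.standard_normal(50)
mse_b = base - 0.10 + 0.05 * rng.standard_normal(50)
t, df = paired_t(mse_a, mse_b)
print(t > 2.0, df)
```

Pairing on the same test windows removes the shared window-to-window variability (the `base` term above), which is why even a small but consistent difference yields a large t statistic.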


This approach of combining nonlinear models with a linear model is motivated by the cascade correlation network proposed by Fahlman [Fah91]. This network was originally developed to solve problems of the MLP such as how to optimize the number of hidden PEs and how to avoid the "herd" effect of hidden PEs. In the cascade correlation network, a linear mapper between input and output is first learned. Then the weights of the trained linear mapper are frozen and a new hidden nonlinear PE is added to the linear network. The weights on connections from the inputs to the added PE are trained to maximize the correlation between the PE's output and the residual from the linear mapper. After adapting these weights, the PE is treated as a new input node, and the linear mapper including all existing inputs and this PE is learned again. Then all the weights are fixed again, and the next PE is cascaded to the first PE. The weights from all inputs and the first PE to the newly added PE are adjusted again with the same criterion. Then the second PE is added to the linear mapper to reconfigure the weights of the linear network all over again. This procedure continues until some stopping criterion is met.

One of the features of cascade correlation is that it sets a basis on the linear mapper and adds nonlinear PEs to explain the residual of the desired response that the linear network cannot predict by itself. This can be viewed as a sequential construction of a set of bases, where the initial set of inputs composes a basic subset and the set of nonlinear PEs forms additional cascaded nonlinear bases. Therefore, cascade correlation provides a framework in which we can examine, using nonlinear bases, whether there is room for improvement after fitting a linear model.

However, this architecture requires quite a number of computations due to its repetitive learning of linear connections from all inputs and additional PEs to the output,


especially when the input dimensionality is very high. Also, the learning technique must be delicately designed for the linear connections with additional PEs when generalization is a special issue. Therefore, the direct application of cascade correlation may not be suitable for BMI modeling based on the multiresolution data.

As an alternative, we choose to add a nonlinear neural network, instead of individual nonlinear PEs, to explain the residual from the linear mapper at once. This is motivated by the empirical observation that a single nonlinear PE (a sigmoid nonlinearity is used here) can hardly yield an output from a large number of neuronal inputs that is significantly correlated with the residual. Although a nonlinear network raises many modeling issues, such as topology, learning methods, and network size, it can reduce the computational burden of learning while still demonstrating whether nonlinear models improve performance or not.

Nonlinear Modeling

Among the many approaches to building neural networks, we need to determine which model best suits our environment. The main decision factors are the ability to handle the high input dimensionality, low computational complexity, and a nonlinearity suitable for explaining the residual of the BMI data. There are two major approaches to global function approximation using neural networks: the multilayer perceptron (MLP) and radial basis functions (RBF) [Hay96b]. The major differences between the two approaches are the nonlinearity of the bases and the training procedure: the MLP usually utilizes sigmoid nonlinear functions while the RBF utilizes a radial basis function such as a Gaussian kernel. Also, the weights in all layers of the MLP are trained simultaneously by a learning algorithm such as error backpropagation, but the weights of the RBF are trained sequentially: the first layer weights are adjusted as centers for


clusters in the input space, and then the second layer weights are trained using algorithms for linear regression. Hence, once the centers of each basis are determined, learning the second layer weights becomes a linear regression problem, which is relatively simpler than training the entire network by error backpropagation through the nonlinearities of the MLP.

Also, the shape of the nonlinearity in the RBF may be more favorable than the MLP for explaining the residual from the linear mapper, if we look at the residual. Figure 7-6 shows an example trajectory of the residual generated by the linear model designed in the previous section, using the embedded multiresolution neuronal data for the prediction of hand position. As we can see, the residual trajectory resembles a sinusoid and is rather smooth (although we expect the residual to be close to white noise, it is actually not). This smoothness may be caused by the fact that the regularized linear model yields a smooth output trajectory that tends to track the low-frequency components of the hand trajectory. So the linear model is inclined to miss high-frequency components, including many peaks and momentary changes in movement. This results in a sinusoidal-like residual trajectory, since the residual is largely impacted by the peaks.

In order to estimate this residual trajectory by linearly combining nonlinear bases, the radial basis, which forms a peaky bell-shaped nonlinearity, may be more suitable. Also, the preliminary results showing that nonlinear models using sigmoid nonlinear functions (e.g., NMCLM, TDNN) for prediction in a 2D target reaching BMI perform similarly to linear models may lead us to alter our choice of nonlinear functions.⁶

⁶ We empirically evaluated the generalization performances of the MLP and RBF for the estimation of the residual from the linear model, and the RBF exhibited slightly better performance (although these results were not produced from optimized models).
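A minimal sketch of this two-stage RBF training (clustering for the centers, then linear least squares for the second layer plus a bias) is shown below on a surrogate smooth "residual". The sizes, kernel width, and toy target are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means to place the RBF centers (first layer)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers

def gaussian_features(X, centers, width):
    """Gaussian radial bases plus a constant bias column."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.hstack([np.exp(-d2 / (2 * width ** 2)), np.ones((len(X), 1))])

# Surrogate residual: a smooth nonlinear function of a 2-D input
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (500, 2))
resid = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

centers = kmeans(X, 30)
Phi = gaussian_features(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, resid, rcond=None)   # second layer: least squares
pred = Phi @ w
mse = np.mean((resid - pred) ** 2)
print(mse < 0.05)
```

Once the centers are fixed, only the linear `lstsq` step remains, which is the computational advantage over backpropagating through an MLP noted above.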


Figure 7-6. An example of the residual trajectory from a linear model (the x-coordinate).

Simulations

Since the first layer of the RBF is learned to find clusters in the input space, the high dimensionality of the input may impair localization of the clustering, thus yielding improper centers for the radial bases (note that the input dimensionality used in the linear model is 8,880). Even if we utilize only the inputs selected by LAR, there are still 684 input variables. To reduce the input dimensionality further, we select a subset of the selected inputs that are assigned relatively large weights. Also, only the instantaneous multiresolution inputs are considered (without time delays). Sorting the inputs by the magnitude of their weights, we select the ones with the largest magnitudes whose sum is approximately 90% of the sum of the total magnitudes. This process selects 140 input variables. The centers of the bases for these input data are learned by the simple k-means clustering algorithm [Bis95]. The kernel width of every basis function is equally set to 0.8, with which the outputs of the basis functions exhibit smooth trajectories varying at a rate similar to that of the residual. The number of basis functions is empirically determined to be


400. Note that a smaller number of basis functions would not be sufficient for estimating the residual, while a larger number would suffer poor generalization. Then, the second layer weights are learned with the least squares method (including a bias term) to predict the 2D residual signals.

After training the RBF network, its outputs on novel test data are added to those of the linear model. Generalization performance is then evaluated using the same measures as in table 7-4. A comparison of the performance measures of the combinatory outputs from the linear model and the RBF with those of the linear model only is presented in table 7-5. It shows the superior performance of the combinatory network. The statistical test proposed in the previous chapter results in rejection of the null hypothesis of no significant difference between the performances of the two models using MSE measures, with p = 0.002.

Table 7-5. Performance comparison between the combinatory model and the single linear model.
             Linear model only         Combinatory model
Measures     CC          SER           CC          SER
Position-x   0.78±0.03   4.11±0.84     0.80±0.02   4.43±0.74
Position-y   0.71±0.05   2.33±0.67     0.72±0.05   2.52±0.62

Figure 7-7 demonstrates a comparison of the outputs of the combinatory network and the single linear model. It shows example trajectories of both models along with the actual hand trajectory, for the x-coordinate. We can see that the combinatory network tracks the peaks of the hand trajectory slightly better than the linear model. This indicates that an additional nonlinear network can help reach the peaks more accurately.

This experiment demonstrates that it is possible to improve further over an optimally designed linear model by employing nonlinear structures. Although we present here


simple examples showing a slight improvement of performance in a target reaching BMI (without optimization), finer designs of nonlinear networks may be able to increase performance further.

Figure 7-7. An example of the output trajectories of the combinatory network and the single linear model (actual, linear, and combinatory traces).

Discussions

Although it has been demonstrated that the multiresolution representation of neuronal spikes enables a better decoding model for BMIs, the consequent performance improvement is marginal. Even with a more sophisticated model that combines the linear model and the nonlinear neural network, prediction performance is still far from the desirable level for practical use of BMIs. This means there is still much room for performance improvement.

However, this does not mean that the multiresolution analysis study is not useful for BMI modeling. In fact, it reveals the relationship between neuronal firing rates and the associated behavior for individual neurons. Also, the increased performance of the


multiresolution input data over the single resolution (fixed-width bin) data using exactly the same linear model may inspire us to consider data mining of neuronal firing activities in order to design more accurate BMI decoding models.

It is quite interesting to view the multiresolution analysis using the Haar à trous wavelet transform within the framework of the generalized feedforward filter [Pri93]. The relationship between the generalized feedforward filter and the wavelet transform was analyzed in [Che98], where the continuous wavelet transform was implemented by the Laguerre filter. That study showed that the difference of adjacent tap outputs could implement the wavelet decomposition of the input signal. As introduced in chapter 4, in the generalized feedforward filter, an instantaneous input signal is delayed by a delay operator G(z). The gamma delay operator used in the gamma filter is given by

G(z) = μ z^{-1} / (1 − (1 − μ) z^{-1})    (7-16)

where μ is the feedback parameter. On the other hand, the delay operator induced by the Haar à trous DWT represented in (7-12) can be given by

H_k(z) = 0.5 + 0.5 z^{-2^{k-1}}    (7-17)

for the kth tap. Note that G(z) is constant over all taps while H_k(z) varies its order over the taps. The other distinction is that G(z) has an IIR structure whereas H_k(z) forms an FIR filter for every k. The transfer function from the input to the kth tap output of the gamma filter, G_k(z), is given in (4-7). The transfer function from the input to the kth tap of the Haar à trous DWT is clearly a moving average filter of order 2^k − 1, each stage contributing a gain of 0.5. Hence, if we set 0 < μ < 1 for the gamma filter, both transfer functions feature lowpass filtering.
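To make the comparison concrete, the sketch below generates the tap outputs of both delay lines on a surrogate bin count input and checks that deeper taps are smoother. The gamma tap recursion x_k(n) = (1−μ) x_k(n−1) + μ x_{k−1}(n−1) is the standard form of (7-16); since (4-7) is not reproduced here, that form is an assumption.

```python
import numpy as np

def gamma_taps(x, n_taps, mu):
    """Tap outputs of a gamma delay line; each tap applies
    G(z) = mu z^-1 / (1 - (1-mu) z^-1) to the previous tap,
    i.e. x_k(n) = (1-mu) x_k(n-1) + mu x_{k-1}(n-1)."""
    taps = [np.asarray(x, dtype=float)]
    for _ in range(n_taps):
        prev, out = taps[-1], np.zeros(len(x))
        for n in range(1, len(x)):
            out[n] = (1 - mu) * out[n - 1] + mu * prev[n - 1]
        taps.append(out)
    return taps

def haar_smooths(x, n_levels):
    """Smooth outputs v_j of the Haar a trous DWT, H_k(z) = 0.5 + 0.5 z^-(2^(k-1))."""
    v, out = np.asarray(x, dtype=float), []
    for j in range(1, n_levels + 1):
        lag = 2 ** (j - 1)
        v = 0.5 * (v + np.concatenate([v[:lag], v[:-lag]]))
        out.append(v)
    return out

def roughness(s):
    """Mean squared first difference: small for slowly varying signals."""
    return np.mean(np.diff(s) ** 2)

rng = np.random.default_rng(4)
x = rng.poisson(2.0, size=2000).astype(float)   # surrogate 80 ms bin counts

g = gamma_taps(x, 4, mu=0.5)
h = haar_smooths(x, 4)

# Both delay operators are lowpass: deeper taps vary more slowly
print(roughness(g[4]) < roughness(g[1]) and roughness(h[3]) < roughness(h[0]))
```

Both filters preserve the mean firing rate (unit DC gain) while progressively suppressing fast fluctuations, which is the sense in which the two feature spaces are similar.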


Figure 7-8 shows the outputs from four taps of the two generalized feedforward filters with delay operators G(z) and H_k(z), respectively. The input signal to the filters is the bin count, with an 80 ms time window, of a specific neuronal activity in the Aurora dataset. These tap outputs are plotted along with the x-coordinate of the actual hand trajectory. μ for the gamma delay is set to 0.5. In order to view the temporal patterns of each tap output, we divide each output by its

Figure 7-8. Tap outputs from two generalized feedforward filters for a neuronal bin count input with different delay operators: the gamma and the Haar wavelet.

maximum absolute value, as done in the previous analysis in this chapter.

In this figure, it can easily be seen that the two filters produce similar tap outputs for the given input, except for their different memory depths. This reveals that the multiresolution analysis based on the Haar à trous DWT generates a feature space of the input that is quite similar to the space generated by the gamma filter. This is evident from the relationship between the generalized feedforward filter and the wavelet transform. Recall that the wavelet coefficient in the Haar wavelets is obtained by subtracting the current


convolution output from the preceding one, as depicted in (7-13). As mentioned above, the wavelet coefficients can also be obtained as the difference between consecutive tap outputs of the generalized feedforward filter. Note that the quantity of the Haar wavelet transform we have used here is not the wavelet coefficients but the set of convolution outputs (for correspondence with the bin counts). And this set of convolution outputs can be linked with the tap outputs of the generalized feedforward filter. Therefore, the multiresolution analysis outputs using the Haar wavelet transform may contain information similar to the tap outputs of the gamma filter.


CHAPTER 8
DETERMINATION OF NEURONAL FIRING PATTERNS USING NON-NEGATIVE MATRIX FACTORIZATION

With a collection of neuronal electrical activities over several cortical areas of a primate, recorded synchronously with motor parameters (e.g., hand positions) during the primate's performance of a particular movement task, it becomes more plausible to analyze a variety of aspects of the neuronal population, such as its function related to behavior, the spatio-temporal structure of the population activity, and the relation of individual neurons in each cortical area to a particular motion, to list a few.

With the bin count data estimating a local neuronal firing rate and the synchronously recorded hand positions, several BMI models (e.g., the Wiener filter, recursive multilayer perceptrons, etc.) have estimated the linear or nonlinear mappings between the neuronal population and behavior, as shown in previous chapters. From these models, we can extract information about the neuronal contributions to movement.

Recently, the sensitivity of neurons and cortical areas based on their role in the mapping learned by the RMLP or the Wiener filter has been investigated [San03a]. This sensitivity analysis examined how each neuron contributes to the output of the models, and found consistent relationships between cortical regions and segments of the hand trajectory. For instance, in a food reaching task this analysis indicated that during each reaching action, specific neurons from the posterior parietal, the premotor dorsal, and the primary motor regions sequentially became dominant in controlling the output of the models. In addition, through the sensitivity analysis, a model can improve generalization


performance by using only the more relevant neurons [San03b]. However, this approach relies on determining a suitable model, because it explicitly uses the learned model to infer the dependencies.

There have been other approaches that do not depend on the model. One popular method applied in BMIs is the cellular tuning analysis, which statistically reveals the firing modulation of each cell (or neuron) for specific movement parameters [Geo83]. A neuronal tuning curve, which is estimated from the probability distribution of the firing rate of a given cell over all hand position (or velocity) angles, is utilized to determine the tuning property of the cell. In this curve, if a cell's firing shows higher probability at a specific angle, we can see that the cell tunes its firing for movement in that specific angle. Also, we can sort cells by the sharpness of their curves, since sharper curves indicate finer tuning of cells associated with movement. However, this analysis does not take temporal aspects into account, since it utilizes statistics over the entire data. Hence, it is difficult with this analysis to ascertain the dynamic properties of the neuronal relation with behavior.

In this chapter, we propose a model-independent approach to determine the spatio-temporal neuronal firing patterns by using non-negative matrix factorization (NMF) [Lee99, Lee01]. In its original applications, NMF was mainly used to provide an alternative method for determining sparse representations of images to improve recognition performance [Gui01, Lee99]. D'Avella and Tresch have also proposed an extension of NMF to extract time-varying muscle synergies for the analysis of the behavior patterns of a frog [dAv02]. The non-negativity constraints in NMF result in the unsupervised selection of sparse bases that can be linearly combined (encoded) to


reconstruct the original data. Our hypothesis is that NMF can similarly yield sparse bases for analyzing neural firing activity, because of the intrinsic non-negativity of the bin counts and the sparseness of spike trains. We apply NMF to extract local features of neural bin counts in the same way sparse bases were obtained to describe the local features of face images.

The basis vectors provided by NMF and their temporal encoding patterns are examined to determine how the activities of specific neurons localize to each segment of the reaching trajectory. We will show that the results from this model-independent analysis of the neuronal activity are consistent with previous observations from the model-based analysis. In addition, a mixture of linear experts based on the NMF bases and encodings will be designed to demonstrate how we can utilize NMF to improve models in BMIs.

Nonnegative Matrix Factorization

NMF is a procedure to decompose a non-negative data matrix into the product of two non-negative matrices: bases and encoding coefficients. The non-negativity constraint leads to a "parts-based" representation, since only additive, not subtractive, combinations of the bases are allowed. An M×N non-negative data matrix X, where each column is a sample vector, can be approximated by NMF as

X = WH + E    (8-1)

where E is the error, and W and H have dimensions M×r and r×N, respectively. W consists of a set of r basis vectors, while each column of H contains the encoding coefficients of every basis for the corresponding sample. The number of bases is selected to satisfy r(M+N) < MN so that the number of equations exceeds that of the unknowns. This factorization can be described in terms of columns as

    x_j = W h_j,   for j = 1, ..., N                                  (8-2)

where x_j is the jth column of X and h_j is the jth column of H. Thus, each sample vector is a linear combination of the basis vectors in W weighted by h_j. The non-negativity constraints on W and H allow only additive combinations of basis vectors to approximate x_j. This constraint makes the basis vectors visualizable in the same manner as the original samples. This is contrary to factorization by PCA, where negative elements in the basis vectors are allowed.

The decomposition of X into W and H can be determined by optimizing an error function between the original data matrix and the decomposition. Two possible cost functions used in the literature are the Frobenius norm of the error matrix, ||X - WH||_F^2, and the Kullback-Leibler divergence, D_KL(X||WH). The non-negativity constraint can be satisfied by using the multiplicative update rules discussed in Lee and Seung [Lee01] to minimize these cost functions. In this paper, we will employ the Frobenius norm measure, for which the multiplicative update rules that converge to a local minimum are given by

    H_{aj}^{(k+1)} = H_{aj}^{(k)} (W^T X)_{aj} / (W^T W H^{(k)})_{aj}
    W_{ia}^{(k+1)} = W_{ia}^{(k)} (X H^T)_{ia} / (W^{(k)} H H^T)_{ia}     (8-3)

where A_{ab} denotes the element of a matrix A at the ath row and bth column. Notice that the updates are based on the product of the current factor and a measure of quality of the current approximation [Wil03]. It has been proven in Lee and Seung [Lee01] that the Frobenius norm cost function is non-increasing under this update rule.
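The multiplicative updates in (8-3) are simple to implement. The following is a minimal NumPy sketch (our own illustration, with toy data; `nmf_frobenius` is a hypothetical name, and the small `eps` added to the factors and denominators is a common numerical safeguard, not part of the original rule):

```python
import numpy as np

def nmf_frobenius(X, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - WH||_F^2.
    X is (M, N) and non-negative; returns non-negative W (M, r), H (r, N)."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    W = rng.random((M, r)) + eps
    H = rng.random((r, N)) + eps
    for _ in range(n_iter):
        # each factor is multiplied by a "quality ratio" of the current fit
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy non-negative data; the final Frobenius error should fall below ||X||_F
X = np.abs(np.random.default_rng(1).standard_normal((20, 50)))
W, H = nmf_frobenius(X, r=5)
err = np.linalg.norm(X - W @ H)
```

Because both updates are multiplicative, entries of W and H stay non-negative throughout, which is what enforces the parts-based representation.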

Factorization of Neuronal Bin Count Matrix

We will now apply the multiplicative update rule in (8-3) to the neuronal bin count matrix. The goal is to determine non-negative sparse bases for the neural activity, from which we wish to deduce the local spatial structure of the neuronal firings. These bases also point out common population firing patterns corresponding to specific behaviors. In addition, the resulting factorization yields a temporal encoding matrix that indicates how the instantaneous neural activity is optimally constructed from these localized representations. Since we are interested in the relationship between the neural activity and behavior, we would like to study the coupling between this temporal encoding pattern and the reaching movement of the primate, as well as the significance of the specific basis vectors, which represent neural populations.

Data Preparation

NMF is applied to two datasets: 3D food reaching data collected from an owl monkey (Belle), and 2D target reaching data collected from a Rhesus monkey (Aurora). For each dataset, the neuronal bin count matrix is formed as described below.

3D food reaching data

In this recording session of approximately 20 min (12,000 bins), 104 neurons can be discriminated, and there are 71 reaching movements for Belle. These reaching movements consist of three natural segments, shown in Fig. 8-1. Based on the analysis of Wessberg et al. [Wes00], the instantaneous movement is correlated with the current and past neural data up to 1 second (10 bins). Therefore, for each time instant, we form a bin count vector by concatenating 10 bins of firing counts (which correspond to a 10-tap delay line in a linear filter) from every neuron.
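The 10-tap concatenation, together with the row dropping and 2-norm normalization described in the text, might be sketched as follows (toy spike counts; `embed_bins` is our own illustrative name):

```python
import numpy as np

def embed_bins(counts, n_taps=10):
    """Build the embedded bin-count matrix described in the text.

    counts[j, n] is the firing count of neuron j in 100 ms bin n.
    Column c of the result stacks [x_1(n), x_1(n-1), ..., x_1(n-9),
    x_2(n), ..., x_M(n-9)]^T for time n = n_taps - 1 + c.
    """
    M, N = counts.shape
    X = np.empty((n_taps * M, N - n_taps + 1))
    for j in range(M):
        for d in range(n_taps):                      # delay d for neuron j
            X[j * n_taps + d] = counts[j, n_taps - 1 - d : N - d]
    return X

# toy counts for 5 hypothetical neurons over 100 bins
counts = np.random.default_rng(0).poisson(0.5, size=(5, 100)).astype(float)
X = embed_bins(counts)
X = X[X.any(axis=1)]                                 # drop never-firing rows
X /= np.linalg.norm(X, axis=1, keepdims=True)        # unit 2-norm per row
```

Only columns corresponding to movement periods would then be retained, as the text describes.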

Figure 8-1. Segmentation of the reaching trajectories: reach from rest to food, reach from food to mouth, and reach from mouth to rest position (taken from Sanchez et al. [San03a]).

Hence, if x_j(n) represents the nth bin of neuron j, where n ∈ {1, ..., 12000}, a bin count vector at time instance n is represented by x(n) = [x_1(n), x_1(n-1), ..., x_1(n-9), x_2(n), ..., x_M(n-9)]^T, where M is the number of neurons. Since we are interested in determining repeated spatio-temporal firing patterns during the reaching movements, only the bin counts from time instances where the primate's arm is moving are considered. It is possible that in the selected training set some neurons never fire (this reduces M from 104 to 99). The rows corresponding to these neurons must be removed from the bin count matrix, since they tend to cause instability in the NMF algorithm. In addition, to prevent the error criterion from focusing too much on neurons that simply fire frequently (although the temporal structure of their activity might not be significant for the task), the bin counts in each row (i.e., for each neuron) of the data matrix are normalized to unit length in the 2-norm. In general, if M neurons are considered for a total of N time instances, the data matrix X has dimension (10M) x N. Since the entries of the data

matrix are bin counts, they are guaranteed to be non-negative. Accounting for 71 movements, there are N = 2143 time instances for this data.

2D target reaching data

In this BMI, the primate continuously moves the arm to track a target on screen. Since there is no pause between movements, as is the case in food reaching, continuous neuronal bin count data collected from a certain segment of a recording session are used for NMF. The length of the data is chosen to be 500 seconds (5,000 samples). To account for temporal firing patterns, we could embed the bin count data for each neuronal channel. However, preliminary experimental studies have revealed that such an embedded data matrix cannot be factorized to yield sparse and local representations of neuronal firing activities. Hence, we utilize the multiresolution representation of neuronal firings [Kim05c], as discussed in the previous chapter, instead of time-delay embedding. The bin widths used here are 160 ms, 320 ms, and 640 ms, since including shorter bin widths only increases the complexity of factorization without providing useful NMF bases. With three bin widths and 185 neurons, the bin count matrix set up for NMF has dimension 555 x 5000. The hand position trajectory is composed of x- and y-coordinates in 2D space. Each row in the bin count matrix is normalized as above.

Analysis of Factorization Process

In the application of NMF to a given neural firing matrix, there are a few important issues that must be addressed: the selection of the number of bases, the local minima of the NMF algorithm, and understanding how NMF can find repeating patterns.
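Before turning to those issues, the multiresolution bin-count construction above can be sketched. Since the exact accumulation used in [Kim05c] is not restated here, the following is only an assumption-laden stand-in: it stacks trailing windows measured in whole bins of a fine grid (e.g., 2, 4, and 8 bins) as the three scales:

```python
import numpy as np

def multiscale_counts(counts, widths=(2, 4, 8)):
    """Stack trailing-window spike counts at several temporal scales.

    counts : (M, N) spike counts on a fine time grid.
    widths : trailing window lengths in bins, one per scale (hypothetical
             stand-ins for the coarser bin widths used in the text).
    Returns a (len(widths) * M, N) matrix: one block of M rows per scale.
    """
    M, N = counts.shape
    csum = np.concatenate([np.zeros((M, 1)), np.cumsum(counts, axis=1)], axis=1)
    idx = np.arange(N)
    blocks = [csum[:, idx + 1] - csum[:, np.maximum(idx + 1 - w, 0)]
              for w in widths]
    return np.vstack(blocks)

# 185 hypothetical neurons at three scales -> 555 rows, as in the text
counts = np.random.default_rng(0).poisson(1.0, size=(185, 50)).astype(float)
X = multiscale_counts(counts)
```

Each row block is then normalized to unit 2-norm per row, as in the food reaching case.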

Choice of the number of bases

The problem of choosing the number of bases can be addressed in the framework of model selection. A number of model selection techniques (e.g., cross-validation) can be utilized for finding the optimal number of bases. In this dissertation, we choose to adopt a selection criterion that has recently been developed for clustering. The criterion, called the index I, has been used to indicate cluster validity [Mal02]. This index has shown consistent performance in selecting the true number of clusters in various experimental settings. The index I is composed of three factors as

    I(r) = ( (1/r) · (E_1 / E_r) · D_r )^p                            (8-4)

where E_r is the approximation error (Frobenius norm) for r bases, and D_r is the maximum Euclidean distance between bases such that

    D_r = max_{1 ≤ i, j ≤ r} ||w_i - w_j||.                           (8-5)

The optimal r is the one that maximizes I(r). We will utilize this index to determine the optimal r for NMF, with p = 1.

How does NMF find repeated patterns?

Although a rigorous proof that the NMF bases will discover repetitive patterns in the neuronal data is not provided here, by intuition we can argue that NMF will prefer to minimize the cost function by selecting repetitive firing patterns in the bases. The reason is that, if a repeated firing sequence is not in the space spanned by the linear combination of the selected bases, the error contribution of this pattern will scale up with the number of occurrences of the pattern. On the other hand, if a firing sequence

occurs only a few times, then the cost of not representing this sequence in the bases will be relatively much smaller.

Donoho and Stodden have shown that a unique solution of NMF is possible under certain conditions [Don04]. They have shown through a geometrical interpretation of NMF that if the data are not strictly positive, there can be only one set of non-negative bases that spans the data in the positive orthant. With an articulated set of images obeying three rules (a generative model, linear independence of generators, and factorial sampling), they showed that NMF identifies the generators, or "parts," of images. If we consider our neuronal bin-count matrix, each row contains many zero entries (zero bin counts) even after removing non-firing neurons, since most neurons do not fire continuously once in every 100-ms window during the entire training set. Therefore, our neuronal data are not strictly positive. This implies that the existence of a unique set of non-negative bases for the neuronal bin-count matrix is warranted.

The question still remains whether the NMF basis vectors can find the generative firing patterns for the neural population by meeting the three conditions mentioned above. Here, we discuss the neuronal bin-count data with respect to these conditions. As stated previously, we have demonstrated through sensitivity analysis that specific neuronal subsets from the PP, PMd, and M1 regions were sequentially involved in deriving the output of the predictive models during reaching movements [San03a]. Hence, the bin-count data for the reaching movement will contain increasing firing activity of the specific neuronal subset on local partitions of the trajectory. Due to binning, it is possible that more than one firing pattern is associated with a single data sample. This analysis leads to a generative model for the binned data in which data

samples are generated by a linear combination of the specific firing patterns with non-negative coefficients. Also, these firing patterns will be linearly independent, since the neuronal subset in each firing pattern tends to modulate firing rates only for the local part of the trajectory. The third condition of factorial sampling can be approximately satisfied by the repetition of movements, in which the variability of a particular firing pattern is observed during the entire data set. However, a more rigorous analysis is necessary to support the argument that the set of firing patterns is complete in factorial terms. Therefore, we expect that the NMF solutions may be slightly variable, reflecting the ambiguity in the completeness of factorial sampling. This might be overcome by collecting more data for reaching movements, and will be pursued in future studies.

A simple insight into the NMF algorithm may also help us understand how it captures the repetitive patterns. The update equation (8-3) is restated here for convenience as

    H_{aj}^{(k+1)} = H_{aj}^{(k)} (W^T X)_{aj} / (W^T W H^{(k)})_{aj}
    W_{ia}^{(k+1)} = W_{ia}^{(k)} (X H^T)_{ia} / (W^{(k)} H H^T)_{ia}     (8-6)

We have mentioned that the new update is the product of the current factor and the current quality of the approximation. We can see that this quality of the approximation is estimated by the inner product. Let us first look at the update of each column of W: (XH^T)_{ia} and (WHH^T)_{ia} can be rewritten in vector form as

    (XH^T)_{ia} = x_{i:} h_a^T,    (WHH^T)_{ia} = x̂_{i:} h_a^T          (8-7)

where x_{i:} denotes the ith row vector of X, h_a denotes the ath row vector of H, and x̂_{i:} is the ith row vector of X̂, where X̂ = WH. Since we normalize each row of X, x_{i:} h_a^T depends on the angle between the two vectors. So, it reflects the correlation between h_a and the temporal pattern of each neuron (or a delayed version of it). Then, the entries of the ath basis corresponding to channels with temporal patterns similar to h_a are updated by relatively larger (positive) amounts. This is analogous to spherical k-means clustering [Dhi01], in which clusters are organized based on the cosine similarity metric, in the sense that the channels with similar temporal patterns are clustered with h_a as center, being encoded in the ath basis. This analogy is also supported by the fact that seeding NMF with spherical k-means clustering improves convergence speed [Wil03].

After updating the ath basis vector, h_a is updated in a similar manner. The update equation can be rewritten as

    (W^T X)_{ai} = w_a^T x_{:i},    (W^T W H)_{ai} = w_a^T x̂_{:i}        (8-8)

where w_a denotes the ath column of W, or the ath basis vector, and x_{:i} is the ith column of X, or the ith sample vector. The ith element of h_a is then a correlation measure of w_a with the ith sample vector. Note that the numerator of the update for h_a, i.e., w_a^T X, can be regarded as a weighted sum of the rows of X, where more weight is imposed on the rows with patterns similar to h_a. This update is very similar to the numerator in the update rule of spherical k-means clustering, which is given by

    c_j(k+1) = Σ_{i ∈ π_j} x_i / || Σ_{i ∈ π_j} x_i ||,
    π_j = { i : x_i^T c_j(k) ≥ x_i^T c_m(k), ∀ m ≠ j }                   (8-9)

for the jth cluster center [Dhi01]. With this update, h_a will encode the information of the times when a spatial pattern of the data similar to w_a occurs.

In short, if there is a repeating pattern in a given sparse neural activity dataset, which implies that a subset of neurons shares a common temporal pattern, this subset and its common temporal pattern will be encoded in the NMF bases and encodings, respectively.

Local minima problem

We can utilize these insights to reduce the effect of local minima. For instance, we can initialize H with positive random numbers and W with a fixed positive constant. Then the quantity x̂_{i:} h_a^T = (WHH^T)_{ia} becomes the same for every row, since every row of W has the same entries and it is multiplied by the ath column of the matrix HH^T. Therefore, the update of each entry in the bases is solely dependent upon the angle between x_{i:} and h_a. The final solution will then be mainly dependent upon the initialization of H, and have a similar sensitivity to initial conditions as the spherical k-means algorithm does.

Case Study A: 3D Food Reaching

The NMF algorithm is applied to the described neuronal data matrix prepared using ten taps and M = 91 neurons. The NMF algorithm with 100 independent runs results in r = 5 bases, for which the index I is maximized. The mean and standard deviation of the normalized cost (the Frobenius norm of the error between the approximation and the given data matrix, divided by the Frobenius norm of the data) over 100 runs are 0.8399 ± 0.001. This implies that the algorithm approximately converges to the same solution with different initial conditions. Note that we initialize H as stated in the previous section.
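This initialization strategy is easy to verify numerically. The sketch below (our own illustration, with arbitrary toy dimensions) checks that, with a constant W, the denominator term (WHH^T) of the basis update is identical across rows, so the first update depends only on the angles between data rows and encoding rows:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, r = 30, 200, 5
H = rng.random((r, N)) + 0.1      # positive random encodings
W = np.full((M, r), 0.5)          # every basis entry set to one constant

# With identical rows in W, every row of W @ H @ H.T is identical, so the
# multiplicative factor (XH^T)/(WHH^T) varies across rows only through X.
D = W @ H @ H.T
row_spread = np.abs(D - D[0]).max()
```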

In Fig. 8-2, we show the resulting basis vectors (columns of W) for the bin counts (presented in matrix form, where columns are different neurons and rows are different delays), as well as their corresponding time-varying encoding coefficients (rows of H) superimposed on the reaching trajectory coordinates of three consecutive movements. Using these time-synchronized neural activity and hand trajectory recordings, it is also possible to discover relationships between firing patterns and certain aspects of the movement. Since NMF looks for an optimal linear approximation of the data with few bases (which can be realized by discovering latent structures in the data [Lee01]), an efficient representation of the complete firing activity can be achieved by selecting the bases such that each basis represents a repeated spatio-temporal firing sequence.

Figure 8-2. The NMF results for food reaching; Left) the five bases. Right) their corresponding encoding signals (thick solid line) overlaid on the 3D coordinates of the hand trajectory (dotted lines) for three consecutive representative reaching tasks (separated by the dashed lines).

For

example, from the basis vectors in the left panel of Fig. 8-2, we observe that firings of the neurons in group b are followed by firings of the neurons in group a (the bright activity denoted by b occurs earlier in time than the activity denoted by a, since increasing values on the vertical axis of each basis indicate going further back in time). Thus, NMF effectively determines and summarizes this sparse firing pattern, which involves a group of neurons firing sequentially. Their relative average activity is also indicated by the relative magnitudes of the entries of this particular basis.

We can assess the repeatability of a certain firing pattern summarized by a basis vector by observing the time-varying activity of the corresponding encoding signal (the corresponding row of H) in time. An increase in this coefficient corresponds to a larger emphasis on that basis in reconstructing the original neural activity data. In the right panel of Fig. 8-2, we observe that all bases are activated regularly in time by their corresponding encoding signals (at different time instances and at different amplitudes). For example, the first basis is periodically activated to the same amplitude, whereas the activation amplitude of the third basis varies in every movement, which might indicate a change in the role of the corresponding neuronal firing pattern in executing that particular movement. The periodic activation of the encodings also indicates the bursting nature of the spatio-temporal repetitive patterns. Hence, the NMF bases tend to encode synchronous and bursting spatio-temporal patterns of neural firing activity.

From the NMF decomposition, we observe certain associations between the activities of neurons from different cortical regions and different segments of the reaching trajectory. In particular, an analysis based on Fig. 8-2 indicates that neurons in

PP and M1 repeat similar firing patterns during the reach from rest to food. This assessment is based on the observation that bases three, four, and five, which involve firing activities from neurons in these regions, are repeatedly activated by the increased amplitude of their respective encoding coefficients. Similarly, neurons in M1 are repeatedly activated during the reach to and from the mouth (bases one and two). These observations are consistent with the sensitivity analysis that was conducted through trained input-output models (such as the Wiener filter and RMLP) [San03a]. Table 8-1 compares the neurons that were observed to have the highest sensitivity in the trained models and the neurons that have the largest magnitudes in each NMF basis. We can see that the neurons from NMF are a subset of the neurons obtained from the sensitivity analysis. It is also worth stating that an NMF basis provides more information than the model-based sensitivity analysis, since it determines the synchronous spatio-temporal patterns while the sensitivity analysis only determines individual important neurons. Finally, we would like to reiterate that the analysis presented here is solely based on the data.

Table 8-1. Comparison of important neurons; food reaching.

  Region                                    PP               M1      PMd/M1-ipsi.
  Highest-sensitivity neurons (RMLP)        4,5,7,22,26,29   38,45   93,94
  Largest-magnitude neurons in NMF bases    7,23,29          45      93,94

Case Study B: 2D Target Reaching

The NMF algorithm is applied to the neuronal bin count matrix generated using multiresolution analysis with three scales and M = 185 neurons. With 100 independent runs, the index I is again maximized for r = 5 bases. The mean and standard deviation of the normalized cost over 100 runs turn out to be 0.425 ± 0.001, which again reveals the robustness of the NMF algorithm to initial conditions.
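The index I used in both case studies to select r, as defined in (8-4) and (8-5), is cheap to compute once the approximation errors E_r and basis sets are available. A minimal sketch (hypothetical errors and distances, our own function names):

```python
import numpy as np

def max_pairwise_dist(W):
    """D_r: the largest Euclidean distance between columns (bases) of W."""
    diffs = W[:, :, None] - W[:, None, :]
    return np.sqrt((diffs ** 2).sum(axis=0)).max()

def index_I(errors, dists, p=1):
    """I(r) = ((1/r) * (E_1 / E_r) * D_r)^p, as in equation (8-4).

    errors : dict r -> E_r (Frobenius approximation error with r bases)
    dists  : dict r -> D_r; the chosen r is the one maximizing I(r).
    """
    E1 = errors[1]
    return {r: ((1.0 / r) * (E1 / errors[r]) * dists[r]) ** p for r in errors}

# hypothetical errors/distances for r = 1..3; I(r) peaks at r = 2 here
I = index_I({1: 10.0, 2: 5.0, 3: 4.0}, {1: 0.0, 2: 1.0, 3: 1.0})
best_r = max(I, key=I.get)
```

The 1/r factor penalizes large models, E_1/E_r rewards the drop in approximation error, and D_r rewards well-separated bases.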

The resulting basis vectors and the corresponding encoding coefficients (rows of H), along with the hand position trajectories, are presented in Fig. 8-3. Unlike the previous observations in food reaching, the repetition of a firing pattern is not clearly shown in the time-varying encodings, due to the irregular nature of the movement. Also, the basis vectors are not as sparse as the ones for food reaching.

Figure 8-3. The NMF results for target reaching; Left) the five bases. Right) their corresponding encoding signals (thick black line) overlaid on the 2D coordinates of the target reaching trajectory (x: blue, y: red).

Despite these rather complicated basis and encoding results, we can extract some information about the firing patterns encoded in each basis vector by looking into its counterpart encoding time series. Let us first segment the hand trajectory into three sample regions, divided by dotted lines in the right panel of Fig. 8-3. Although the trajectories in each region are different, they have a similar pattern: increase in both x and

y directions, decrease in both directions, increase again, and decrease at the end. For each region, the encodings of individual basis vectors exhibit characteristic patterns. The first encoding tends to increase its magnitude around the points where the hand trajectory starts to decrease from the positive peaks. The second encoding tends to increase when the hand trajectory starts to move in the positive direction. The third encoding increases when the hand moves in the negative direction. The fourth encoding exhibits peaks in the middle of movement in the positive direction, and the fifth encoding exhibits peaks in the negative direction.

Also, the temporal sequence of contributions of neurons from each cortical area can be approximated from this figure. In each sample region, there seems to be a sequence of dominance of each basis vector, by observation of the encoding patterns; the individual encodings tend to increase following a particular sequence, such as 3-2-5-4-1 (indices of the encodings from top to bottom in Fig. 8-3). Since the movement is continuous, this sequence can be made circular (e.g., 5-4-1-3-2). This observation of the sequence may be linked with the neurophysiologic functions of each cortical area, which will be an interesting future research topic.

Based on these observations, we statistically analyze each encoding associated with movement: the hand position samples at which each encoding series moves around its peaks are collected, and the average position among these is estimated. This average position indicates the relative location of the hand when the neuronal firing pattern determined in each basis vector appears. Figure 8-4 shows the average hand positions for the five basis vectors in 2D space. Note that the estimated standard deviation for each set of collected samples is so large that there are wide overlaps between the distributions of

the collections. But this distribution approximately explains which part of the trajectory each neuronal firing pattern is related to.

It is noteworthy that the distributions of hand positions for some NMF bases are similar, typically located around the 45° or 200° angles. This is because the probability that the hand position is located around those angles is relatively higher. This fact is empirically illustrated in Fig. 8-5, which shows the probability that the hand is positioned in each of 16 angle bins. These bins are obtained by partitioning the 2D polar axis into 16 equally spaced angle bins. From this figure, we can see that the hand is located around 45° or 200° with higher probability.

Figure 8-4. The hand position samples collected along with the peaks in each NMF encoding (left), and the mean and variance of each set (right). The number marked on each dot denotes the corresponding basis in Fig. 8-3 (numbered in order from top to bottom).

Figure 8-5. The probability of occurrence of the hand position in each of sixteen angle bins.

Another analysis of the encodings can be conducted based on cellular tuning [Geo83]. In the cellular tuning analysis, the modulation of each neuron with respect to the angle of the hand position is investigated. Similarly, the tuning property of the neuronal pattern in each NMF basis is investigated through its counterpart encodings. We first partition the angle in 2D space into 16 angle bins from −π to π radians, as described above. Then, a 16 x N Boolean matrix A is created, of which each column vector a_n consists of zero elements except a single unit value: a_n = [0, ..., 0, 1, 0, ...]^T. The location of the 1 in each vector points out the angle bin to which the current angle of the hand position at time instance n belongs. Since the encoding matrix H has all non-negative elements and a large encoding value indicates activation of the corresponding neuronal pattern, each row of HA^T indicates the tuning property of the corresponding basis, estimated over all data samples. Figure 8-6 presents each row of HA^T, which shows the tuning curve of the individual neuronal pattern encoded in each basis.
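This HA^T construction might be sketched as follows (our own illustration; the name `tuning_curves` and the exact bin-edge convention are assumptions, since the text only specifies 16 equal bins over [−π, π)):

```python
import numpy as np

def tuning_curves(H, angles, n_bins=16):
    """Tuning curve of each NMF encoding over hand-position angle.

    H      : (r, N) non-negative encoding matrix.
    angles : (N,) hand-position angles in radians, in [-pi, pi).
    Builds the one-hot matrix A (n_bins x N) described in the text and
    returns H @ A.T, whose rows are the per-basis tuning curves.
    """
    N = len(angles)
    bins = np.floor((angles + np.pi) / (2 * np.pi / n_bins)).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)          # fold pi into the last bin
    A = np.zeros((n_bins, N))
    A[bins, np.arange(N)] = 1.0                  # one unit value per column
    return H @ A.T
```

A basis whose encoding is active mainly when the hand sits near a particular angle will show a peak in the corresponding bin.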

It is interesting to compare Fig. 8-6 with Fig. 8-4. We can see that the angle of each average hand position in Fig. 8-4 matches the tuning curves in Fig. 8-6. For instance, the first average hand position has an angle around π/3, at which the first tuning curve exhibits a peak. The other hand positions and the corresponding tuning curves also match each other. Hence, we can conclude that the NMF basis vectors and encodings represent the tuning properties of the neuronal firing patterns appearing in the bases. It also demonstrates that the neurons clustered by each basis have common characteristics associated with behavior. We compare the neurons with large weights in the basis vectors of NMF with the important neurons selected by the sensitivity analysis, summarized in Table 8-2. The neurons are selected empirically by looking into the elements in the NMF basis vectors corresponding to each neuron. The selection by NMF basis is significantly compatible with that from the sensitivity analysis.

Figure 8-6. Tuning curves of the neuronal firing patterns encoded in each NMF basis for 16 angle bins.
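The "largest-magnitude neurons" selection described above could be automated rather than done by eye. A sketch (our own; it assumes the rows of W are grouped neuron-major, i.e., all taps or scales of neuron 0 first — for a scale-major layout the reshape would differ):

```python
import numpy as np

def top_neurons_per_basis(W, rows_per_neuron, k=3):
    """Rank neurons by total basis-vector magnitude, per NMF basis.

    W : (rows_per_neuron * M, r) non-negative basis matrix whose rows are
        assumed grouped neuron-major (all taps/scales of neuron 0 first).
    Returns an (r, k) array of 0-based neuron indices, largest first.
    """
    M = W.shape[0] // rows_per_neuron
    per_neuron = W.reshape(M, rows_per_neuron, -1).sum(axis=1)   # (M, r)
    return np.argsort(-per_neuron, axis=0)[:k].T

# toy basis: one basis, 4 neurons x 2 taps; neuron 2 dominates
W = np.array([[0.1], [0.1], [0.0], [0.0], [1.0], [1.0], [0.2], [0.0]])
top = top_neurons_per_basis(W, rows_per_neuron=2, k=1)
```

Since W is non-negative, summing entries over a neuron's rows is the same as summing magnitudes.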

Table 8-2. Comparison of important neurons; target reaching.

  Region        Highest-sensitivity neurons (RMLP)                          Largest-magnitude neurons in NMF bases
  PMd           15,54                                                       7,41,45,48,54
  M1            68,69,73,76,78,80,81,84,92,96,99,101,104,107,108,110        66,67,73,76,78,80,81,84,89,92,99,104,108,110,114,121
  S1            149                                                         145,149
  SMA           167                                                         167,169,177

Model Improvement Using NMF

We will demonstrate a case study of performance improvement in predicting hand positions in a 3D food reaching BMI by utilizing NMF. We will compare the performance of two systems: the Wiener filter applied directly to the original binned data, and a mixture of multiple linear filters based on the NMF bases and encodings.

The straight Wiener filter is applied directly to the neural firing data to estimate the three coordinates of the primate's hand position. The properties and modeling of the Wiener filter are discussed in chapter 3.

The mixture of multiple models employs the NMF encodings as mixing coefficients. An NMF basis is used as a window function for the corresponding local model. Therefore, each model sees a given input vector through a different window and uses the windowed input vector to produce its output. The NMF encodings are then used to combine each model's output into the final estimate of the desired hand position vector. This can be described by the following equation,

    d̂_c(n) = Σ_{k=1}^{K} h_k(n) [ g_{k,c}^T z_k(n) + b_{k,c} ]        (8-10)

where h_k(n) is the NMF encoding coefficient for the kth basis at the nth column (i.e., time index), g_{k,c} is the weight vector of the kth model for the cth coordinate (c ∈ {x, y, z}), and

b_{k,c} is the y-intercept of the kth model for the cth coordinate. z_k(n) is the input vector windowed by the kth NMF basis. Its ith element is given by

    z_{k,i}(n) = x_i(n) w_{k,i}                                       (8-11)

Here, x_i(n) is the normalized firing count of neuron i at time instance n, and w_{k,i} is the ith element of the kth NMF basis. g_{k,c} and b_{k,c} can be estimated under the MSE criterion using a stochastic gradient algorithm such as the normalized least mean square (NLMS). The weight update rule of the NLMS for each model is then given by

    g_{k,c}(n+1) = g_{k,c}(n) + [η / (γ + ||z_k(n)||²)] h_k(n) e_c(n) z_k(n)
    b_{k,c}(n+1) = b_{k,c}(n) + [η / (γ + ||z_k(n)||²)] h_k(n) e_c(n)     (8-12)

where η is the learning rate and γ is the normalization factor. e_c(n) is the error between the cth coordinate of the desired response and the model output.

In the experiment, we divided the samples of the 3D food reaching dataset into 1771 training samples and 372 test samples. The parameters are set as {η, γ, K} = {0.01, 1, 5}. The entire training data set is presented 60 times for the weights to converge. The performance of the model is evaluated on the test set by two measures: the correlation coefficient (CC) between the desired hand trajectory and the model output trajectory, and the mean squared error (MSE) normalized by the variance of the desired response. Table 8-3 presents the performance evaluation of the two systems. It shows a significant improvement in performance with the mixture of models based on the NMF factorization.

Table 8-3. Performance evaluation of the Wiener filter and the mixture of multiple models based on NMF.

                   CC(x)   CC(y)   CC(z)   MSE(x)  MSE(y)  MSE(z)
  Wiener filter    0.5772  0.6712  0.7574  0.4855  0.3468  0.2460
  NMF mixture      0.7147  0.7078  0.8076  0.2711  0.2786  0.1627
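Equations (8-10) through (8-12) might be sketched for a single coordinate as follows (our own illustration with hypothetical synthetic data; the names `predict` and `train_nmf_mixture` are ours, not the dissertation's):

```python
import numpy as np

def predict(X, W, H, G, b):
    """Mixture output d_hat(n) = sum_k h_k(n) (g_k . z_k(n) + b_k), eq (8-10)."""
    Y = np.empty(X.shape[1])
    for n in range(X.shape[1]):
        z = X[:, n] * W.T                 # z_k(n): x(n) windowed by basis k
        Y[n] = H[:, n] @ ((z * G).sum(axis=1) + b)
    return Y

def train_nmf_mixture(X, W, H, d, eta=0.01, gamma=1.0, epochs=60):
    """NLMS training of the NMF-gated mixture of linear experts, eq (8-12).

    X : (M, N) normalized bin counts; W : (M, K) NMF bases;
    H : (K, N) NMF encodings; d : (N,) one coordinate of hand position.
    """
    M, N = X.shape
    K = W.shape[1]
    G = np.zeros((K, M))                  # g_k, one weight vector per expert
    b = np.zeros(K)                       # b_k, one bias per expert
    for _ in range(epochs):
        for n in range(N):
            z = X[:, n] * W.T             # (K, M) windowed inputs, eq (8-11)
            y = (z * G).sum(axis=1) + b   # each expert's output
            e = d[n] - H[:, n] @ y        # error of the mixed output
            step = eta * e * H[:, n] / (gamma + (z ** 2).sum(axis=1))
            G += step[:, None] * z        # eq (8-12), weight update
            b += step                     # eq (8-12), bias update
    return G, b
```

Because the prediction is linear in (G, b) once H and W are fixed, the NLMS recursion behaves like a normalized gradient descent on the squared error.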

To quantify the performance difference between the Wiener filter and the mixture of multiple models, we can apply a statistical test based on the mean squared error (MSE) performance metric, as proposed in Kim et al. [Kim05a]. The details of this statistical test procedure will be revisited later in chapter 8. We perform the t-test on both modeling outputs at a significance level of 0.01 or 0.05. The null hypothesis is rejected at both significance levels, with a p-value of 0.0023. Therefore, the statistical test of the performance difference demonstrates that the mixture of multiple models based on NMF improves the performance significantly compared to the standard Wiener filter.

In this case study, we have shown that the bases and encodings of NMF can be effectively used for modeling the transfer function from neuronal firing patterns to hand movements. Although we demonstrate a simple mixture of linear experts here, there remain many possibilities for enhancing performance by further investigating NMF results for BMI modeling. Future work will pursue this interesting subject.

Discussions

The results presented in the previous case study are a representative example of a broader set of NMF experiments performed on this recording. Selection of the number of taps and the number of bases (r) is dependent on the particular stimulus or behavior associated with the neural data. Although we have used a model selection method originally developed for clustering, and did not provide full justification that this index is suitable for NMF, the main motivation was to demonstrate that the problem of selecting the number of bases can be addressed in the context of model selection. This will be pursued in future research.

As we discussed above, NMF attempts to detect repeating firing patterns and assigns a basis vector to each pattern, simply because it is the optimal strategy to

minimize the error cost function. Consequently, the number of patterns that can be distinctly represented by NMF is limited by the number of bases. A very small number of bases will lead to the combination of multiple patterns into a single non-sparse basis vector. At the other extreme, a very large number of bases will result in the splitting of a pattern into two or more bases, which have similar encoding coefficient signals in time. In such situations, the bases under consideration can be combined into one basis.

It is noteworthy that the performance measures of the Wiener filter designed in this chapter are not identical to those of the Wiener filter in table 6-1, since the training and test sets are different. For the Wiener filter in table 6-1, 10,000 training samples and 3,000 test samples are used, whereas for the one in this chapter, only 1,771 samples for training and 372 samples for testing are used. Hence, the poorer generalization performance of the Wiener filter in this chapter is due to the smaller number of training samples.

It is intriguing that the mixture of models based on NMF generalizes better than the Wiener filter despite the fact that the mixture contains many more model parameters. However, each model in the mixture receives inputs processed by a sparse basis vector. Therefore, each model learns the mapping between only a particular subset of neurons and the hand trajectories, and the effective number of parameters for each model is much smaller than the total number of input variables. Moreover, further overfitting is avoided by combining the outputs of the local models with the sparse encodings of NMF.
CHAPTER 9
REAL TIME NEURONAL SUBSET SELECTION

The analytical methods presented so far, including the sensitivity analysis, the cellular tuning analysis, and NMF, extract neuronal population properties using the entire data under an assumption of stationarity. Hence, the information given by these methods may not be sufficient for the analysis of neuronal population functions that may be nonstationary. These facts lead us to consider a new analytical tool for finding the nonstationary properties, in both time and the space of the electrodes, of the neuronal population related with behavior. This is motivated by the observation that only a certain subset of neurons is involved in a particular movement, and that the composition of the subset varies over time [Kim05b].

In order to develop an analytical solution, finding the nonstationary neuronal relationships with behavior is cast as the problem of tracking a time-variant MIMO system in which only a subset of input channels contributes to the desired response at a given moment. Thus, we begin our development on the basis of current tracking methods for nonstationary systems. Similar to the sensitivity analysis, the basic idea is to extract real time information about the neuronal relationships to the model outputs through time-varying model parameters. There have been a variety of adaptive methods to adjust parameters for tracking time-variant systems, including least mean squares (LMS) and recursive least squares (RLS) [Hay96a]. Those adaptive algorithms, however, may not be suitable to exploit the spatial structure in multi-channel data (as in multiple neuronal channels) since their tracking capabilities are basically guaranteed only for single input time series. Also, the constant control parameters in LMS (e.g. a step size) or in RLS (e.g. a
forgetting factor) prevent the algorithms from tracking the nonstationary system more accurately (although we can vary such parameters in time, this is a nontrivial problem).

In order to overcome the limitations of current tracking methods, we propose a new adaptive system modeling approach which can exploit the spatial structure of the neuronal population in real time by augmenting a spatial filter. In this structure, the outputs from filters in individual channels are filtered again using an on-line variable selection algorithm [Kim04]. This selection algorithm enables us to find a subset of neurons relevant to a particular movement at every time instance. Hence, we can extract the information of which neuronal subset is correlated with behavior at a particular moment. We believe that this method provides an analytical tool for extracting the nonstationary properties of the relationship between the neuronal population and the associated behavior.

In the design of the real time neuronal subset selection algorithm, there are several issues to be addressed. Firstly, the movement is correlated with the temporal pattern of neuronal activity. Secondly, the neuronal firing data are nonstationary in time and in space (over neuronal channels). Finally, only a subset of neurons may be involved in a particular trajectory of movement.

In our proposed algorithm, the correlation of each neuron with behavior is measured at the filtered output of each input channel, which can incorporate the temporal firing patterns of individual neurons. The filter parameters in every neuronal channel are adjusted in real time by LMS to track the nonstationarity of neuronal contributions to movement. In order to enhance the LMS algorithm with a selective mechanism sensitive to the spatial multi-channel structure, we add a second stage spatial filter on the filtered channel outputs. This spatial filter imposes time-varying weighting on each
channel to track in space the time-varying spatial structure. The spatial filter parameters are adapted in real time by using an on-line variable selection algorithm. By virtue of the selection scheme in this algorithm, the spatial filter can be sparse and select a subset of neuronal channels at every time instance.

An on-line variable selection algorithm based on LAR has been developed to select a subset of input variables relevant to the desired response in a sparse linear time-variant system. If we constrain the L1-norm of the coefficients such that the LAR procedure stops at a certain number of steps less than the total number of variables, then the regression model has nonzero coefficients in only a subset of the input variables. However, as introduced in chapter 4, LAR processes the entire data to adjust coefficients based on a stationarity assumption. In order to devise an on-line version of the LAR procedure, we utilize RLS type recursions for the input covariance matrix and the cross-correlation vector between input and desired response, and modify the LAR algorithm accordingly. With this modified LAR algorithm implemented in real time, a subset of channels can be selected based on correlation with the desired response at every time instance.

We will demonstrate real time neuronal subset selection in BMIs for two datasets: the 3D food reaching data of Belle, and the 2D target reaching data of Aurora. The experimental results will show the nonstationary characteristics of the contributions of individual neurons to movements.

The chapter starts by introducing on-line variable selection by modifying the LAR algorithm. Next, the architecture and procedure of the real time neuronal subset selection method will be presented, followed by a discussion about the determination of the
selection criterion. Finally, the experimental results for the BMI datasets will be demonstrated.

On-Line Variable Selection

Estimation of the correlation between the inputs and the desired response can be accomplished by recursively updating the correlation vector. The input covariance matrix can also be estimated recursively. If one decouples the variable selection part from the model update part in LAR, we can select the input variables locally with recursive estimates of the correlations. The modified version of LARS for on-line variable selection is described as follows. For convenience, we rewrite table 4-1 here as table 9-1.

Table 9-1. Procedure of the LAR algorithm: revisited.
Given an $N \times M$ input matrix $X$ (each row being an $M$-dimensional sample vector) and an $N \times 1$ desired response vector $Y$, initialize the model coefficients $\beta_i = 0$ for $i = 1, \ldots, M$, and let $\beta = [\beta_1, \ldots, \beta_M]^T$. Then the initial LAR estimate becomes $\hat{Y} = X\beta = 0$. Transform $X$ and $Y$ such that $\frac{1}{N}\sum_{i=1}^{N} x_{ij} = 0$, $\frac{1}{N}\sum_{i=1}^{N} x_{ij}^2 = 1$, and $\frac{1}{N}\sum_{i=1}^{N} y_i = 0$ for $j = 1, \ldots, M$.
(a) Compute the current correlation $c = X^T (Y - \hat{Y})$.
(b) Find $C_{\max} = \max_j |c_j|$ and the active set $A = \{ j : |c_j| = C_{\max} \}$.
(c) Let $X_A = [\ldots, \mathrm{sign}(c_j)\, x_j, \ldots]$ for $j \in A$.
(d) Let $\Gamma = X_A^T X_A$ and $\delta = (1_A^T \Gamma^{-1} 1_A)^{-1/2}$, where $1_A$ is a vector of ones with length equal to the size of $A$.
(e) Compute the equiangular vector $u = X_A (\delta\, \Gamma^{-1} 1_A)$, which has unit length. Note that $X_A^T u = \delta\, 1_A$ (the angles between all inputs in $A$ and $u$ are equal).
(f) Compute the step size $\gamma = \min^{+}_{j \in A^c} \left\{ \dfrac{C_{\max} - c_j}{\delta - a_j}, \dfrac{C_{\max} + c_j}{\delta + a_j} \right\}$, where $\min^{+}$ indicates considering only positive minimum values over possible $j$.
(g) Compute $a_j$, defined as the inner product between all inputs and $u$, i.e., $a = X^T u$.
(h) Update $\hat{Y} \leftarrow \hat{Y} + \gamma\, u$.
Repeat (a)-(h) until all inputs join the active set $A$ or $\sum_j |\beta_j|$ exceeds the given threshold.
Let us first analyze the LAR procedure illustrated in table 9-1. As stated in Kim et al. [Kim04], the correlation (table 9-1(a)) at the $k$th step of variable selection can be simply updated without computing residuals,

$c_j(k+1) = c_j(k) - \gamma\, a_j$  (9-1)

Hence, the update procedure of table 9-1(h) can be removed. The initial correlation $c_j(0)$, which represents the correlation between the inputs and the desired response, can be estimated outside of the LAR routine. Instead of computing the correlation with the entire data, we can recursively estimate the correlation using a forgetting factor, given by

$p(n) = \beta\, p(n-1) + d(n)\, x(n)$  (9-2)

where $\beta$ is the parameter controlling memory depth and $x(n)$ is a $1 \times M$ input vector at time instance $n$. This estimate of the correlation vector, $p(n)$, is utilized by the LAR routine such that $c_j(0) = p_j(n)$. For the computation of the covariance matrix $\Gamma$ in table 9-1(d), we also estimate the input covariance matrix using a leaky integrator in the same way as (9-2),

$R(n) = \eta\, R(n-1) + x(n)^T x(n)$  (9-3)

where $\eta$ is another forgetting factor for the covariance estimation. This matrix is not directly used in table 9-1(d) since $\Gamma$ is the covariance of only a subset of the inputs. Also, the input vectors are multiplied by the signs of the correlations before computing $\Gamma$. Therefore, we introduce a diagonal matrix $S$ whose elements are the signs of $c_j(k)$ for $j \in A$. Then, $\Gamma$ can be computed using $R(n)$ and $S$ as

$\Gamma = S R_a S$  (9-4)

where $R_a$ is an $L_A \times L_A$ ($L_A$ being the size of $A$ in table 9-1) matrix representing the covariance among the selected input variables. $R_a$ is given by the elements of $R(n)$, i.e., $r_{ij}$ for $i, j \in$
$A$. To remove the computation of the equiangular vector, which requires a batch computation, we incorporate table 9-1(e) into table 9-1(g) such that

$a = X^T u = X^T X_A (\delta\, \Gamma^{-1} 1_A) = X^T X_A\, \delta\, \Gamma^{-1} 1_A$  (9-5)

However, $X^T X_A$ is nothing but the $j$th columns of $R(n)$ for $j \in A$, followed by multiplication with $S$. So, if we define $R_{a,\mathrm{col}}$ to be the submatrix of $R(n)$ consisting of the $j$th columns for $j \in A$, then

$a = R_{a,\mathrm{col}}\, S\, \delta\, \Gamma^{-1} 1_A$  (9-6)

Hence, using $\Gamma$ obtained from $R(n)$ and $S$, we can compute $\delta$ and subsequently $a_j$ for all $j$. This modification removes the computation of the equiangular vector in table 9-1(e), which is not directly required for computing $a_j$ and $\gamma$. Table 9-2 summarizes this modified version of the LARS algorithm.

Table 9-2. The modified LAR algorithm for on-line variable selection.
Given an $N \times M$ input matrix $X$ (each row being an $M$-dimensional sample vector) and an $N \times 1$ desired response vector $Y$, initialize $p(0) = 0$ and $R(0) = 0$. Transform $X$ and $Y$ such that $\frac{1}{N}\sum_{i=1}^{N} x_{ij} = 0$, $\frac{1}{N}\sum_{i=1}^{N} x_{ij}^2 = 1$, and $\frac{1}{N}\sum_{i=1}^{N} y_i = 0$ for $j = 1, \ldots, M$.
Update the correlation: $p(n) = \beta\, p(n-1) + d(n)\, x(n)$
Update the input covariance: $R(n) = \eta\, R(n-1) + x(n)^T x(n)$
(a) $c(0) = p(n)$.
(b) $C_{\max} = \max_j |c_j(k)|$, and $A = \{ j : |c_j(k)| = C_{\max} \}$.
(c) Compute the diagonal matrix $S$ with elements $\mathrm{sign}(c_j(k))$ for $j \in A$.
(d) $\Gamma = S R_a S$, where $R_a$ is the submatrix of $R(n)$ with the $j$th rows and columns for $j \in A$.
(e) $\delta = (1_A^T \Gamma^{-1} 1_A)^{-1/2}$.
(f) $a = R_{a,\mathrm{col}}\, S\, \delta\, \Gamma^{-1} 1_A$, where $R_{a,\mathrm{col}}$ is the matrix consisting of the $j$th columns of $R(n)$ for $j \in A$.
(g) Compute the step size $\gamma = \min^{+}_{j \in A^c} \left\{ \dfrac{C_{\max} - c_j}{\delta - a_j}, \dfrac{C_{\max} + c_j}{\delta + a_j} \right\}$.
(h) Update the correlation: $c_j(k+1) = c_j(k) - \gamma\, a_j$.
Repeat (b)-(h) until all inputs join the active set $A$ or $\sum_j |\beta_j|$ exceeds the given threshold.
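The recursions (9-2)-(9-3) and the selection loop of table 9-2 can be sketched in code. This is an illustrative implementation, not the authors' software: the stopping rule used here is a threshold on the maximum absolute correlation (the criterion discussed later in this chapter rather than the L1-norm constraint), and `update_estimates` and `online_lar_select` are hypothetical names.

```python
import numpy as np

def update_estimates(p, R, x, d, beta=0.8, eta=0.8):
    """Leaky-integrator updates of the cross-correlation vector (9-2) and
    the input covariance matrix (9-3); beta and eta are forgetting factors."""
    p = beta * p + d * x
    R = eta * R + np.outer(x, x)
    return p, R

def online_lar_select(R, p, c_thresh):
    """Variable selection from the recursive estimates alone (table 9-2).
    Stops when the maximum absolute correlation falls below c_thresh."""
    M = len(p)
    c = np.array(p, dtype=float)
    active = []
    while True:
        Cmax = np.max(np.abs(c))
        if Cmax < c_thresh:
            break
        tol = 1e-8 * (1.0 + Cmax)
        active = [j for j in range(M) if abs(c[j]) >= Cmax - tol]
        if len(active) == M:
            break
        s = np.sign(c[active])                                    # diagonal of S
        G = s[:, None] * R[np.ix_(active, active)] * s[None, :]   # Gamma = S Ra S
        Ginv1 = np.linalg.solve(G, np.ones(len(active)))
        delta = 1.0 / np.sqrt(Ginv1.sum())        # (1' Gamma^-1 1)^(-1/2)
        a = R[:, active] @ (s * (delta * Ginv1))  # inner products a_j, as in (9-6)
        # smallest positive step over the inactive variables, table 9-2(g)
        gammas = [g
                  for j in range(M) if j not in active
                  for g in ((Cmax - c[j]) / (delta - a[j]),
                            (Cmax + c[j]) / (delta + a[j]))
                  if g > 1e-12]
        if not gammas:
            break
        c = c - min(gammas) * a                   # correlation update (9-1)
    return active

# Toy check: only the first of three standardized inputs drives d.
rng = np.random.default_rng(2)
X = rng.standard_normal((400, 3))
X = (X - X.mean(0)) / X.std(0)
d = X[:, 0] + 0.1 * rng.standard_normal(400)
p, R = np.zeros(3), np.zeros((3, 3))
for t in range(400):
    p, R = update_estimates(p, R, X[t], d[t])
sel = online_lar_select(R, p, c_thresh=1.0)
print(sel)  # channel 0 should be in the selected subset
```

With the forgetting factors at 0.8, the estimates emphasize roughly the last few samples, which is what gives the procedure its local, time-varying character.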
On-Line Channel Selection Method

The on-line variable selection algorithm has been demonstrated to track a linear time-variant system in combination with a linear adaptive system [Kim04]. It outperforms traditional algorithms, including LMS, when only a subset of the input variables is related with the target signals at each time period. However, this algorithm lacks the capability to track time-variant MIMO systems. Direct application of this algorithm to the embedded input space of a MIMO system may not be feasible due to correlation over time lags, which prevents the LAR algorithm from finding appropriate subsets. Therefore, we seek a novel approach to select channels using this variable selection scheme.

Assuming linear independence between input channels in MIMO systems, we may be able to apply on-line variable selection to channels instead of to each tap output. In order to do that, we need to identify a variable in each channel which can represent the temporal patterns of the input time series. If we consider learning a linear MIMO system, the estimate of the target signal is typically the sum of the outputs from the filters at every channel. These filtered outputs can indicate the relationship between the target signals and the input temporal patterns at each channel. Since on-line variable selection operates based on correlation, the filter output is hypothesized to be a sufficient variable to provide the correlation information between the target signals and its input temporal pattern. Hence, we choose the filter outputs as inputs to the on-line variable selection procedure.

Then, the remaining question is how to model each filter. In modeling the filters, the most important aspect to be considered is the capability of tracking the nonstationary characteristics of the MIMO system, since we are more interested in the selection of relevant channels than in the estimation of channel parameters. This means that the learning parameters for the filters are not so important as long as the filters can sufficiently track the
nonstationary MIMO transformation. Therefore, we choose a finite impulse response (FIR) filter, which has the simplest structure and yet the maximum resolution among generalized feedforward filters [Pri93]. This maximum resolution property is important because real time channel selection will be much influenced by short-time input temporal patterns. Many adaptation methods for FIR filters are possible, but we choose to utilize LMS due to its simplicity and reasonable tracking performance in nonstationary environments [Hay96a]. This linear MIMO system, consisting of FIR filters at each channel adapted by LMS, is further supported by the fact that the Wiener filter, featuring the same topology with a simple analytical solution, can estimate hand trajectory with a reasonable performance level in preliminary BMI studies (see chapter 3 for reference).¹ Hence, FIR filters adapted by LMS in real time will yield the filter outputs for on-line channel selection.

Figure 9-1 depicts the overall architecture of the real time neuronal subset selection approach. The input $x_i(n)$ at the $i$th neuronal channel for $i = 1, \ldots, M$ is filtered by an FIR filter of order $L$, yielding the filter output vector $y(n) = [y_1(n), \ldots, y_M(n)]^T$. The autocovariance matrix $R(n)$ of $y(n)$ and the cross-correlation vector $p(n)$ between $y(n)$ and the desired hand position signal $d(n)$ are recursively estimated by (9-2) and (9-3), respectively. Then, the on-line variable selection algorithm receives $R(n)$ and $p(n)$ to yield a LAR coefficient vector $c(n) = [c_1(n), \ldots, c_M(n)]^T$. Note that some of the elements in $c(n)$ can be equal to zero due to the constraint implied in LAR.

The filter coefficients $w_{ji}(n)$ are updated by the normalized version of LMS using the error, which results as

$e(n) = d(n) - \hat{d}(n) = d(n) - c^T(n)\, y(n)$  (9-7)

¹ Basically, LMS converges to the solution provided by the Wiener filter in stationary environments under an additive white noise assumption.
The update of $w_{ji}(n)$ is then given by

$w_{ji}(n+1) = w_{ji}(n) + \dfrac{\mu}{\varepsilon + \|x(n)\|^2}\, e(n)\, c_j(n)\, x_j(n-i)$  (9-8)

for the $j$th channel with the $i$th time lag, where $\varepsilon$ is a small positive constant and $\mu$ is a step size.

Figure 9-1. The diagram of the architecture of the real time neuronal subset selection method.

It is obvious in this architecture that the constraint on the LAR procedure plays a key role in channel selection. Hence, the results will be greatly affected by the selection criterion. The following sections present our approaches to address this issue.

Determination of Selection Criterion

For the on-line channel selection algorithm, which is based on least angle regression (LAR), we need to impose a constraint for LAR to stop adding variables at some stage. Determination of this constraint impacts subset selection: with a relatively loose constraint, the LAR procedure will yield a large subset, increasing a
chance to select irrelevant channels, and with a more strict constraint it will yield a smaller subset, possibly missing relevant channels. Hence, a careful approach to constraint determination is necessary in the selection algorithm.

We propose an approach to determine the selection criterion based on the correlation between neuronal channels and desired movements. In our approach, the constraint in the LAR procedure is based on surrogate data in which the cause-effect relationship between neuronal inputs and desired response is destroyed. The constraint can be adjusted such that LAR rarely selects subsets in the surrogate data while yielding reasonable selection in the original data. The surrogate data were generated by two different procedures: 1) de-synchronizing input and output by delaying the desired response (hand position), or 2) randomizing the phase of the hand trajectory signal while preserving its power spectral density (PSD).

In the experiments for the food reaching data, however, the de-synchronized data do not provide useful information about the decision of the constraint due to the problem of delay over a continuously firing neuronal ensemble. In order to deal with this problem, a further development of conditional selection is proposed. This selection criterion assesses the goodness-of-fit of the FIR filters locally in time before proceeding to selection, in order to avoid conducting selection on irrelevant filter outputs. We believe that this conditional selection will result in more reliable neuronal subsets in real time. Details of the approaches and the demonstration of experimental results are presented in the remainder of this chapter.

Determination of Threshold in LAR Using Surrogate Data

In the LAR procedure, variables are selected one by one at each stage along with the adjustment of the coefficients for all the selected variables. The LAR procedure can be designed to stop adding variables to the subset when a given constraint is met.
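Before turning to the threshold determination, the two-stage adaptation of Fig. 9-1 can be summarized in code. The sketch below shows one time step, the per-channel FIR outputs, the combined error (9-7), and the gated normalized LMS update (9-8); `nlms_gated_step` and the toy dimensions are illustrative assumptions, not the experimental implementation.

```python
import numpy as np

def nlms_gated_step(W, xbuf, c, d, mu=0.1, eps=1e-6):
    """One joint update of the Fig. 9-1 architecture. W and xbuf are (M, L):
    M channels, L taps per delay line; c holds the LAR coefficients (zero for
    unselected channels); d is the desired hand position sample."""
    y = np.sum(W * xbuf, axis=1)                 # filter outputs y_j(n)
    e = d - c @ y                                # error (9-7)
    W = W + (mu / (eps + np.sum(xbuf**2))) * e * c[:, None] * xbuf  # (9-8)
    return W, y, e

# With a fixed input pattern and target, repeated updates shrink the error
# on the selected channel while the unselected channel's weights stay put.
rng = np.random.default_rng(3)
xbuf = rng.standard_normal((2, 3))
W = np.zeros((2, 3))
c = np.array([1.0, 0.0])                         # only channel 0 selected
errs = []
for _ in range(100):
    W, y, e = nlms_gated_step(W, xbuf, c, 1.0, mu=0.5)
    errs.append(abs(e))
print(errs[0], errs[-1])
```

Note how the zero entry of `c` both removes channel 1 from the output and freezes its weights, which is why the chapter later argues for keeping all filters adapting (with $c_j(n) = 1$) even when their outputs are not selected.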
Usually, a threshold is imposed on the L1-norm of the coefficient vector in LAR. However, it is very difficult to decide the proper L1-norm since we do not know a priori the optimal amount of the sum of weight magnitudes at each time instance. Therefore, we need to seek other possible ways of imposing constraints from which we can extract a practical meaning of the threshold in LAR.

One possibility is to impose a constraint based on correlation, due to the fact that LAR exploits the correlation between the inputs and the regression residual. Recall that the maximum absolute correlation between the inputs and the current residual decreases over stages in the LAR procedure (see chapter 4). Also, the first maximum correlation is nothing but the cross-correlation between the first selected variable and the desired response, since the procedure starts with all zero coefficients. At any stage in the LAR procedure, the absolute values of the correlation between all the selected variables and the current residual are intrinsically equal to each other, and equal to the maximum over all variables. The curve of the maximum correlation over stages has the following property: if two successively selected variables have similar correlation with the desired response, the difference between the successive values of the maximum correlation will be small; but if the difference between the correlations of the two variables with the desired response is large, the maximum correlation will decrease drastically.

This property is illustrated in Fig. 9-2, where the LAR procedure in a 2D input space is shown. Assuming that the input vectors $x_1$, $x_2$ and the desired vector $d$ are standardized (namely, zero mean and unit variance), and $x_1$ has more correlation with $d$, the algorithm starts to move in the direction of $x_1$. It finds the coefficient $\gamma_1$ for $x_1$ such that $x_2$ has the same correlation with the current residual $r_1$ as $x_1$, where $r_1 = d - y_1 = d - \gamma_1 x_1$.
Therefore, the maximum correlation reduces from $C_{\max}(0) = |x_1^T d|$ to $C_{\max}(1) = |x_1^T r_1| = |x_2^T r_1|$. By virtue of the standardization, we can relate the inner product to the angle between vectors such that $C_{\max}(0) \propto \cos\theta_0$ and $C_{\max}(1) \propto \cos\theta_1$, where $\theta_j$ represents the angle between the selected variables and the residual at the $j$th stage. In Fig. 9-2, there are two examples of the LAR procedure. The left graph shows the case when $x_1$ and $x_2$ have similar correlations with $d$. In this case, the difference between $\theta_0$ and $\theta_1$ is small, so that $C_{\max}$ does not decrease much from stage 0 to 1. In the case shown in the right graph, where $x_2$ is less correlated with $d$ than $x_1$, a large difference between $\theta_0$ and $\theta_1$ makes $C_{\max}$ decrease considerably. Hence, we can observe the correlation between the selected input variables and the desired response through the curve of $C_{\max}$.

Figure 9-2. An illustration of the successive maximum correlation over stages in the case of two variables (channels); $x_j$: input variable, $d$: desired response projected in the input space, $y_1$: regression by $x_1$, $\theta_j$: the angle between the selected variables and the residual at the $j$th stage, and $r_1 = d - y_1$.

Figure 9-3 demonstrates two examples of the maximum correlation curves. In Fig. 9-3a, there is a huge drop of the correlation between the first channel and the second channel. In this case, only the first channel will be selected with some threshold (e.g. 80).
In Fig. 9-3b, the curve does not decrease drastically until the third channel is selected. Thus, the first three channels will be selected with the same threshold.

Figure 9-3. Examples of the maximum absolute correlation curve in LAR.

Hence, we can impose a threshold on the curve of the maximum correlation such that the procedure stops when the maximum correlation becomes less than the threshold. Then, the input channels with nonzero coefficients are selected into a subset. If the correlation between the first channel and the desired response is less than the threshold, no channels will be selected.

However, it is still an open problem to determine a threshold, which might be dependent upon the data. In our approach, we utilize the surrogate data to determine the threshold. We generate two types of surrogate data. The first data are composed of neuronal inputs and a delayed version of the hand trajectory. The amount of delay is chosen to be 5 seconds since successive reaching movements have an interval of approximately 10 seconds. In this case, the synchronization between neuronal firing and hand movement will be substantially destroyed. The second data consist of neuronal inputs and a perturbed hand trajectory signal. The perturbed hand trajectory signal is generated by randomizing the phase of the original signal while the PSD of the signal is
preserved to keep the energy unchanged after the perturbation [Pal98]. A threshold is tuned on the surrogate data such that the probability of selecting at least one channel at each time instance becomes very low. Then, this threshold is used for the original synchronized data of neuronal inputs and hand trajectory.

We implement this approach on the 3D food reaching data (see chapter 2 for details of the data). The real time neuronal subset selection algorithm is run over 3,000 samples (300 seconds). In the linear filters, there are 104 neuronal channels, each delayed by a 10-tap delay line. The selection algorithm starts after 100 seconds in order to allow LMS to adapt the filter weights in the beginning. The learning rate of LMS is set such that the sum of the linear filter outputs (i.e., without subset selection) can track the hand trajectory reasonably in the synchronized data. Note that it must not be set too large, so that the filter outputs do not change too fast over time (if they change too fast, their correlation with the desired response may not indicate the relevance of channels). In the experiment, the learning rate is set to 0.1, and the feedback parameters in the recursive estimates (9-2) and (9-3) are set to 0.8. These parameter settings are kept for all the synchronized and surrogate data.

The experimental results with the original and surrogate data are shown in Fig. 9-4. The threshold is empirically determined based on the surrogate data (numerically, it is 80). Fig. 9-4a shows the examples of neuronal subset selection for the surrogate data, while Fig. 9-4b shows the examples for the original data. The neuronal subsets are presented along with the corresponding hand trajectory (z-coordinate) for seven different reaching movements. Notice that significantly many channels are selected in the de-synchronized data. This is due to the fact that the linear filters may utilize the inputs of some
Figure 9-4. Neuronal subset selection examples; (a) the de-synchronized (by 5-second delay) data, (b) the original (synchronized) data, and (c) the surrogate data with the perturbation of the hand trajectory.
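The second type of surrogate, used in Fig. 9-4c, can be generated as follows. This is a standard phase-randomization sketch in the spirit of [Pal98], with an assumed function name; it is not necessarily the exact procedure used in the experiments.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate trajectory: randomize the Fourier phases of x while keeping
    its amplitude spectrum, so the PSD (and hence the energy) is preserved."""
    x = np.asarray(x, float)
    N = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0                      # keep the DC bin real
    if N % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=N)

# Toy trajectory: a noisy sinusoid and its phase-randomized surrogate.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
s = phase_randomized_surrogate(x, rng)
print(np.sum(x**2), np.sum(s**2))  # energies match, waveforms do not
```

Because only the phases change, the surrogate has the same second-order statistics as the original trajectory while its temporal alignment with the neuronal inputs is destroyed.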
neurons (e.g. a neuron indexed as 94), which are known to modulate after reaching is finished, for the prediction of reaching movement, since these inputs are made to be aligned with reaching in time by the 5-second delay (a reaching movement approximately spans 4 seconds). If we attempt to select fewer channels for the de-synchronized data by decreasing the threshold, it also reduces the selection for the synchronized data.

The results with the second surrogate data, with the perturbed hand trajectory, are shown in Fig. 9-4c. The same threshold as above is used for the LAR procedure, with which very few channels are selected over the entire data.

If we define a selection rate as the average number of selected channels at each time instance, the selection rate for the second surrogate data results in 9.1×10⁻⁴ ± 0.0044. On the other hand, the selection rate for the synchronized data with the same threshold is 0.012 ± 0.007 (for the de-synchronized data, it is 0.011 ± 0.007). Therefore, for the determination of the threshold on the maximum correlation curve, it might be more practical to utilize the second surrogate method, created with the perturbed hand trajectory, since we can determine a threshold yielding a very small selection rate.

Conditional Selection Criterion

We have observed that there exist channels which are consistently selected during reaching movements in the de-synchronized data. In order to avoid selecting those channels in the de-synchronized data while preserving the selection rate in the synchronized data, we design an alternative approach that utilizes the filter outputs prior to the selection procedure. It is based on the observation that the LMS-adapted filters do not track the hand trajectory well if we de-synchronize the data. This implies that the filter outputs used for subset selection may not contain adequate information about individual neuronal contributions to the model output. Hence, at every time instance we can proceed in two steps:
in the first step we assess how much the filter outputs are correlated with the desired response, and then proceed to the next step of channel selection only if the correlation measure exceeds a certain threshold. Note that this threshold is different from the one that we have used for the maximum correlation in the LAR procedure. Adjustment of this threshold may reduce the selection rate in the de-synchronized data with little deterioration of the selection in the synchronized data. Figure 9-5 demonstrates examples of the sum of the filter outputs in the synchronized and the de-synchronized data.

Figure 9-5. Demonstration of the filter outputs before subset selection; (top) synchronized data, (bottom) de-synchronized data.

The correlation between the filter outputs and the desired response is estimated as follows. The correlation between the sum of the filter outputs and the desired response is measured at
every time instance. We estimate the correlation by an RLS type recursion, as used for the on-line selection algorithm. The correlation estimate at time instance $n$ between the sum of the filter outputs $y(n)$ and the desired response $d(n)$ is given by

$C_{yd}(n) = \lambda\, C_{yd}(n-1) + \dfrac{y(n)\, d(n)}{\sqrt{yp(n)\, dp(n)}}$  (9-9)

where $\lambda$ is another forgetting factor for this recursion of the correlation (set to 0.95 in the experiment). Note that the product of $y(n)$ and $d(n)$ is normalized by the square root of the local signal power estimates in order to avoid biasing the correlation measure toward large magnitudes of $d(n)$. The local (in time) signal power estimates, denoted by $yp(n)$ for the filter output and $dp(n)$ for the desired response, are computed by the same recursion,

$yp(n) = \lambda\, yp(n-1) + y^2(n)$, $dp(n) = \lambda\, dp(n-1) + d^2(n)$  (9-10)

If $C_{yd}(n) \geq \theta$ for a certain threshold $\theta$, the on-line selection procedure is run on the filter outputs. If $C_{yd}(n) < \theta$, we make all the LAR coefficients ($c_j(n)$ in Fig. 9-1) equal to zero, yielding an empty subset. The threshold is determined empirically such that few channels are selected in the de-synchronized data over the entire data ($\theta = 0.7$ in the experiment). The other threshold, for the maximum correlation in the LAR procedure, is kept the same as above. Note that in this case the filter coefficients may be kept adapting by the standard LMS algorithm even when their outputs are not selected. The reason is that some filter coefficients could otherwise be kept unchanged for a long time, losing tracking ability due to the conditioning on selection. Hence, it is preferred to adapt every filter coefficient at every time instance in order to track nonstationarity. This means that every coefficient is updated by (9-8) with $c_j(n) = 1$ for all $j$.
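The gating recursion (9-9)-(9-10) can be sketched as a single update function. The initialization of the power estimates with a small constant is an added assumption here, solely to avoid division by zero on the first step; `corr_gate_step` is a hypothetical name.

```python
def corr_gate_step(C, yp, dp, y, d, lam=0.95):
    """One step of (9-9)-(9-10): leaky power estimates yp and dp, and the
    normalized correlation C between the summed filter output y(n) and the
    desired response d(n); lam is the forgetting factor (0.95 here)."""
    yp = lam * yp + y * y
    dp = lam * dp + d * d
    C = lam * C + (y * d) / (yp * dp) ** 0.5
    return C, yp, dp

# Perfectly synchronized signals drive C toward a positive plateau close to 1.
C, yp, dp = 0.0, 1e-8, 1e-8
for n in range(200):
    C, yp, dp = corr_gate_step(C, yp, dp, 1.0, 1.0)
print(C)
```

In the experiment described above, the selection stage would then run only when $C_{yd}(n) \geq \theta$ with $\theta = 0.7$, and the LAR coefficients would be zeroed otherwise.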
The neuronal subset selection results for the same seven movements as above are shown in Fig. 9-6 for the de-synchronized data and the synchronized data. The selection rates are 0.006 ± 0.006 for the synchronized data, and 0.001 ± 0.003 for the de-synchronized data. Compared with Fig. 9-4, the subset selection in the de-synchronized data becomes very sparse, whereas similar neuronal subsets are selected in the synchronized data. For the surrogate data with the perturbed hand trajectory, no subset is selected in this case due to the additional constraint on the filter outputs. These results demonstrate that we can determine thresholds with the surrogate data, combined with the condition on the correlation between the filter outputs and the desired response, such that neuronal subsets are selected only in the synchronized data.

Experiments of Neuronal Subset Selection

With the conditional selection criterion as described above, we now implement the neuronal subset selection method in BMIs. In order to ensure the robustness of the selection results to initial conditions, we run the simulation 50 times to obtain multiple realizations of the selection. Then, we define a selection vector as

$s(n) = [s_1(n), s_2(n), \ldots, s_M(n)]^T$  (9-11)

where $s_j(n) = 1$ if the $j$th channel is selected, and $s_j(n) = 0$ otherwise. The average of $s(n)$ over the 50 realizations is computed for every $n$. Figure 9-7 depicts these average vectors for the same seven movements as above. The results show that if the $j$th channel is selected, the average of $s_j(n)$ becomes very close to 1. This reveals that a subset of neurons is consistently selected with different initial conditions for the same movement.
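The averaging of the selection vectors (9-11) over realizations can be illustrated with synthetic selections. The data below are fabricated purely to show the computation, with hypothetical dimensions (50 runs, 4 time instances, 6 channels).

```python
import numpy as np

# s_runs[r, n, j] = 1 if channel j is selected at time n in realization r.
rng = np.random.default_rng(4)
N_RUNS, N_TIME, M = 50, 4, 6
s_runs = np.zeros((N_RUNS, N_TIME, M))
s_runs[:, :, 2] = 1.0                              # channel 2: always selected
s_runs[:, :, 5] = rng.random((N_RUNS, N_TIME)) < 0.1  # channel 5: sporadic
s_bar = s_runs.mean(axis=0)                        # average selection per n
print(s_bar[0])
```

Entries of `s_bar` near 1 mark channels selected consistently across initial conditions, which is the criterion Fig. 9-7 visualizes.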
Figure 9-6. Neuronal subset selection conditioned by the correlation between the filter outputs and the desired response; (a) de-synchronized data, and (b) original synchronized data.
Figure 9-7. Demonstration of the robustness of the algorithm to initial conditions.

In Fig. 9-8, the tracking performance of our approach is compared with that of the straight linear MIMO system trained with LMS for the best tracking. In other words, it demonstrates the effect of the on-line channel selection algorithm on tracking performance. The sample outputs of both our MIMO system with on-line channel selection and the straight linear MIMO system are displayed on top of the actual hand trajectory in the z-coordinate. Although the statistical measurements of performance in terms of the mean squared deviation and the misadjustment [Hay96a] must still be conducted, which will be pursued in future studies, we can clearly see from this figure that our tracking system identifies the peaks of the hand trajectory much better than the straight linear system. Note that the parameter settings in the LMS algorithm for both systems are identical for the purpose of fair comparison. This superior tracking performance of our system may result from the fact that the additional spatial filtering by the on-line channel selection algorithm optimally combines the filter outputs to reduce the instantaneous error


between the desired response and the final output. Also, the sparse set of coefficients estimated by the on-line channel selection algorithm may play a role in adjusting the update of the weights for the individual linear filters once they are adapted by LMS with a constant rate.

Figure 9-8. An example of the outputs of two tracking systems with (solid line) and without (dashed line) on-line channel selection.

It is often necessary to find a subset of neurons that are relevant to the current movement (of any dimension) without separation into each coordinate. For this purpose, since the on-line variable selection algorithm is currently developed only for a single-dimensional output, we need to perform selection for the individual coordinates and combine the selection results into one. Therefore, after obtaining selection vectors s(n) from every coordinate, we simply perform a Boolean OR operation on those vectors to yield a combined selection vector, treating s(n) as a Boolean vector (by virtue of the fact that s(n) consists of 1s and 0s). This means that if a channel is selected for at least one coordinate of hand position, it joins the selected subset. Figure 9-9 demonstrates the


combination result of neuronal subsets for all three coordinates of hand position. It is obvious that more channels are selected with combination compared to Fig. 9-6b. With this combination, we can obtain a single representative of neuronal selection for a given movement instead of examining individual selections for each coordinate of the hand trajectory.

Next, we investigate whether the relationship between the neuronal population and behavior varies over time by the analysis of neuronal subsets. In order to account for this variation in time, we execute neuronal subset selection over a long period of data (2,000 seconds). Then, we analyze the neuronal subsets in the early part of the data and those in the late part of the data. The resulting subsets and the corresponding hand trajectory at the z-coordinate are depicted in Fig. 9-10.

Figure 9-9. Neuronal subset selection for all three coordinates of the food reaching movement.

There are interesting observations in these subsets; there are
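The OR combination of per-coordinate selection vectors described above reduces to a few lines (a sketch; `combine_selections` is an illustrative name, not from the dissertation):

```python
import numpy as np

def combine_selections(*coord_selections):
    """OR-combine the Boolean selection vectors s(n) obtained for each
    coordinate (x, y, z) of hand position.

    Each argument is an (N, M) 0/1 array for one coordinate. A channel
    joins the combined subset if it is selected for at least one
    coordinate at that time instance."""
    combined = np.zeros_like(np.asarray(coord_selections[0], dtype=bool))
    for s in coord_selections:
        combined |= np.asarray(s, dtype=bool)
    return combined.astype(int)

# One time instance, three channels: channel 1 selected for x only,
# channel 2 for y only, channel 3 for neither.
sx = np.array([[1, 0, 0]])
sy = np.array([[0, 1, 0]])
sz = np.array([[0, 0, 0]])
print(combine_selections(sx, sy, sz))  # [[1 1 0]]
```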


neurons which consistently contribute to movement over time, such as 5, 7 and 93. But some neurons which are selected in the early part of the session do not seem to be involved in later movements (e.g., 70). Also, other neurons, including 23 and 71, are not selected in the early part of the session, yet join the selection for the late part of the movements. It is also interesting to see the transition of contribution from neuron 70 to neuron 71 over time, since the activities of those neurons are collected from adjacent

Figure 9-10. Neuronal subset selection over 2,000-second data; (a) subsets in the early part, and (b) subsets in the late part of the data.
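A minimal way to quantify this early-versus-late comparison is simple set arithmetic on the selected channel indices (a sketch; the helper name is ours, and the toy indices below reuse the neurons discussed in the text):

```python
def subset_overlap(early, late):
    """Compare the neuronal subsets selected in the early and late parts
    of a long session. Inputs are iterables of selected channel indices;
    returns (persistent, dropped, recruited) sets."""
    early, late = set(early), set(late)
    return early & late, early - late, late - early

# Toy example mirroring the observations in the text: 5, 7, 93 persist,
# 70 drops out of later movements, 23 and 71 are recruited late.
persistent, dropped, recruited = subset_overlap({5, 7, 70, 93},
                                                {5, 7, 23, 71, 93})
print(sorted(persistent))  # [5, 7, 93]
print(sorted(dropped))     # [70]
print(sorted(recruited))   # [23, 71]
```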


electrodes in PMd. These observations demonstrate that the real-time neuronal subset selection method can provide a very useful tool for understanding the nonstationary properties of a neuronal population associated with behavior.

Now, we apply the neuronal subset selection method to another BMI data set: 2D target reaching by Aurora. The thresholds used for the conditional selection criterion are empirically determined such that the selection rate with the de-synchronized data is much smaller than that with the synchronized data. The de-synchronized data is generated by time-delaying the hand trajectories by 10 seconds. With a certain threshold (numerically, 0.3), the selection rates turn out to be 0.002 ± 0.006 for the de-synchronized data, and 0.015 ± 0.009 for the synchronized data, respectively. The other threshold, on the maximum correlation curve, is determined empirically such that the selection rate with the other surrogate data, generated by randomizing the phase of the hand trajectory, is much smaller than that with the original BMI data.

Real-time neuronal subset selection with these threshold settings is performed on 1,600-second-long data (16,000 samples) consisting of 185-channel neuronal bin counts and 2D desired hand positions. The FIR filter at each channel has an order of 10. The LMS algorithm is applied to adapt the FIR filter coefficients in real time. The Boolean selection vectors s(n) for the x- and y-coordinates are combined at every time instance by an OR operation. The results of the combined subset selection are shown in the bottom panel of Fig. 9-11. The subsets correspond to five sample segments of the entire hand trajectory, each of which exhibits a similar trajectory, as illustrated in Fig. 9-12. Inspecting neuronal subsets over different segments, we can identify neurons that are consistently selected, such as 69,


80, 84, 92, 99, 108, and 110 (these neurons are selected in at least three segments). However, there are a number of neurons that are selected only in particular segments, such as 45, 54, 67, 149, and so on. Most of the selected neurons are recorded from the M1 area. Yet, neurons from the PMd area become part of the subsets in the last two segments (the late part of the dataset).

Figure 9-11. Neuronal subset selection for a 2D target reaching BMI.

Figure 9-12. 2D hand trajectories in the five sample data segments selected in Fig. 9-11.
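The surrogate construction used above for threshold tuning (delaying the hand trajectory, here by 100 bins of 100 ms = 10 s) and the selection rate compared against it can be sketched as follows; both helper names are illustrative, not from the dissertation:

```python
import numpy as np

def desynchronized_surrogate(hand, delay_bins=100):
    """Destroy the temporal alignment between the neuronal bin counts and
    the desired signal by delaying the hand trajectory. A circular shift
    preserves the trajectory's own statistics while breaking its
    synchrony with the neuronal input."""
    return np.roll(hand, delay_bins, axis=0)

def selection_rate(s):
    """Fraction of (time, channel) entries selected. Thresholds are tuned
    so that this rate is much smaller on surrogate data than on the
    original synchronized data."""
    return float(np.asarray(s).mean())

# Toy check: a 10-sample, 1D trajectory shifted by 3 bins.
hand = np.arange(10.0).reshape(-1, 1)
surrogate = desynchronized_surrogate(hand, delay_bins=3)
# surrogate[:, 0] -> [7, 8, 9, 0, 1, 2, 3, 4, 5, 6]
```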


Discussions

With the proposed conditional selection criterion, the on-line channel selection algorithm is activated only when the correlation between the LMS-adapted filter outputs and the desired response is larger than a certain threshold. However, there must be moments at which some neuronal channels are significantly relevant to hand movements even when the sum of the outputs is not correlated with the desired response due to noise. Also, the constraint imposed on the maximum correlation curve in the LAR procedure may not be an optimal choice. Hence, there are still many options for the selection criterion to enhance neuronal subset analysis in real time.

It is noteworthy that the neurons selected by subset selection match those found with NMF and the sensitivity analysis. For instance, neurons indexed by 5, 7, 23, 71, and 93 are observed in the NMF basis vectors, or in the top-ranked group of neurons sorted by the sensitivity analysis, for food reaching. Also, neurons indexed by 54, 69, 80, 84, 92, 99, 108, 110, 149, and 167 are observed in the NMF basis vectors, or in the top-ranked sensitivity group, for target reaching. These comparisons show us that the subsets selected in real time are not different from the neuronal subsets determined by stationary methods. However, the advantage of real-time subset selection is the capability of detecting time-varying changes in the composition of subsets in a nonstationary environment, and under these conditions, the fitting will be greatly improved.

One important question is how to extend this data analysis tool, which requires the desired signal (behavior), to real-time subset selection in BMIs during the testing phase when no desired response is available. This is a non-trivial problem since the desired response is usually not available after the model is trained. Hence, the subset cannot be found from the correlation between the neuronal channels and the desired response in the evaluation mode.
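The conditional criterion itself is a simple gate; a sketch (the function name is ours, and the 0.3 default echoes the empirically chosen threshold mentioned earlier for the 2D data):

```python
import numpy as np

def selection_active(filter_sum, desired, threshold=0.3):
    """Gate for the conditional selection criterion: run the on-line
    channel selection step only when the correlation between the summed
    LMS-adapted filter outputs and the desired response exceeds the
    threshold."""
    r = np.corrcoef(filter_sum, desired)[0, 1]
    return bool(r > threshold)

t = np.linspace(0.0, 1.0, 100)
d = np.sin(2 * np.pi * t)
print(selection_active(d + 0.1, d))  # True (r = 1: an offset does not change correlation)
print(selection_active(-d, d))       # False (r = -1)
```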


However, if we can somehow modify the selection process such that selection can be executed without target information in real time, it will be enormously useful for decoding models in BMIs. One plausible idea is to find the relationship between each filter output and selection; that is, for which characteristics of the output the channel is selected. It may require detection of output patterns or classification in the output space. In any case, this will be an exciting topic to pursue in future studies.

Although we have demonstrated subset selection in this chapter, more rigorous analyses must be conducted to quantify the results of subset selection. These analyses may range from fundamental statistical analysis of the subsets to advanced probabilistic approaches that investigate the synchronous activity of the neural ensemble. Among such analytical approaches, a data mining technique developed for determining the synchronous co-activation subset of multichannel input [Bou04] is of special interest, since it provides a methodology to quantify the reproducibility of co-active patterns in multi-channel data. It also forms a series of Boolean vectors from particular activity events in each channel in order to automatically extract the synchronous co-activity patterns. Since the neuronal subset selection data can be treated as a series of Boolean vectors, as described in the previous section, this methodology may provide an appropriate way to extract very useful information about the synchronous activity of the neural ensemble combined with subset selection. This topic must be covered in future investigations.

Although there is a wide range of ways to perform the analysis of the neuronal subsets, we can glimpse the characteristics of the selected neuronal subsets through relatively straightforward quantitative evaluations. Here we demonstrate a few examples of such


quantification using the food reaching data. Since we are only interested in subsets selected during movements, we divide the entire subset data into individual segments for each reaching movement. This procedure results in 149 segments corresponding to the reaching movements in a training dataset.

The first example describes the selection of individual neurons for each movement, as shown in Fig. 9-13. If a neuron is selected during a given movement for at least two consecutive time instances, the neuron is determined to be in the set of selection for that movement. In this graph, we can observe which neurons are consistently selected over many reaching movements. For instance, neurons indexed as 5, 7, 23, 29 and 93 are shown to be selected across almost all movements. These neurons also exhibited large relations with the hand movement in the sensitivity analysis or the NMF bases (see Table 8-1). On the other hand, neurons indexed as 19, 45, 71, and 84 are partially selected for some movements. In particular, neuron 71 is mostly selected in the movements that

Figure 9-13. Selection of individual neurons over a series of reaching movements.
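The two-consecutive-instances rule above can be sketched as follows (illustrative helper name):

```python
import numpy as np

def selected_in_movement(s_segment):
    """Apply the rule used for Fig. 9-13: a neuron belongs to a
    movement's selection set if it is selected at two or more
    consecutive time instances within that movement's segment.

    s_segment: (T, M) 0/1 selection matrix for one reaching movement.
    Returns a length-M 0/1 vector."""
    s = np.asarray(s_segment, dtype=bool)
    consecutive = s[:-1] & s[1:]   # True where two adjacent bins are both selected
    return consecutive.any(axis=0).astype(int)

# Channel 0: selected in bins 0-1 (consecutive) -> in the set.
# Channel 1: selected only in bin 0 -> not in the set.
# Channel 2: selected in bins 1-2 (consecutive) -> in the set.
seg = np.array([[1, 1, 0],
                [1, 0, 1],
                [0, 0, 1]])
print(selected_in_movement(seg))  # [1 0 1]
```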


occurred in the late part of the data. This neuron was discussed in previous sections to reveal the nonstationarity of the neural activity, which is now clarified by this quantification of subset selection. Notice that these neurons were also identified by the sensitivity analysis or NMF. However, with those methods we could not discern the temporal characteristics of the relation for individual neurons, which is now feasible through this real-time analysis.

In the second example, we evaluate the distribution of the size of the subset in each movement, as depicted in Fig. 9-14. For each movement, we count the number of bins for which a subset contains k neurons, for k = 1,…, 8. The bins for which the number of neurons exceeds 8 belong to the group of bins with 8 neurons selected. Then, we display the counting results in a color map per movement. In this figure, we can see the tendency of increasing co-activity over time, since the number of bins with more than one neuron selected tends to increase in the late movements. This observation may give us a clue to very important aspects of the behavior of the neural ensemble in motor cortex with respect to reaching movement (for instance, the increasing co-activity of neurons during training for a particular task). However, the investigation of this subject must be conducted in a more thorough way, such as via the statistical procedure in [Bou04]. It must also be accompanied by neurophysiologic investigations. Nevertheless, this will be a very attractive research topic in BMIs.

Finally, in order to ensure the validity of the neuronal subset selection, the misadjustment of our MIMO system using on-line channel selection is compared with that of the straight linear system. Figure 9-15 shows the average misadjustment computed for each movement. In this figure, the MIMO system with on-line selection exhibits


Figure 9-14. The distribution of the subset size over a series of reaching movements.

superior performance to the normal MIMO system for most movements. This result is consistent with the demonstration of tracking performance in Fig. 9-8.

Figure 9-15. Comparison of the average misadjustment per movement between the standard MIMO system learned by LMS and the MIMO system with on-line channel selection.
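The per-movement counting behind Fig. 9-14 (k = 1,…, 8, with larger subsets pooled into the k = 8 group) can be sketched as:

```python
import numpy as np

def subset_size_histogram(s_segment, k_max=8):
    """For one movement, count the number of bins whose subset contains
    exactly k neurons, k = 1..k_max; bins with more than k_max neurons
    are pooled into the k_max group, as in Fig. 9-14.

    s_segment: (T, M) 0/1 selection matrix. Returns a length-k_max
    count vector (bins with zero neurons are not counted)."""
    sizes = np.asarray(s_segment).sum(axis=1)
    sizes = np.clip(sizes, 0, k_max)
    return np.array([(sizes == k).sum() for k in range(1, k_max + 1)])

# Toy segment: bin sizes are 1, 2, 4 (clipped to 3), and 0.
seg = np.array([[1, 0, 0, 0],
                [1, 1, 0, 0],
                [1, 1, 1, 1],
                [0, 0, 0, 0]])
print(subset_size_histogram(seg, k_max=3))  # [1 1 1]
```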


CHAPTER 10
CONCLUSIONS AND FUTURE WORKS

Conclusions

Inspired by the excellent performance of the Wiener filter algorithms in the estimation of movement parameters from the activity of large (100~200 cells) neuronal ensembles, we conducted an extensive comparative study of MIMO filters in BMI design. As test data, we used two datasets, each collected in a different experimental BMI: one from a monkey reaching for food in 3D space, and the other from a monkey reaching for a visual target in 2D space. Although in certain comparisons different models had very similar performance quality, we anticipate that with the development of the BMI field, and especially with the increase in the number of simultaneously recorded neurons, some of these modeling ideas will find important applications. For the present datasets, all the MIMO filters, including the standard Wiener filter, performed very well in spite of the large number of degrees of freedom (over 3,000 parameters) and the absence of regularization. The major reason for such high performance quality may be the excellent quality of the neuronal recordings. Multiple microelectrode arrays were strategically implanted in cortical areas known to be associated with arm and hand movements. In addition, special care was taken to keep the experimental conditions controlled and restricted to specific task requirements. It still remains to be studied how the linear models perform as the range of motor performances and experimental conditions becomes more complex.

Notwithstanding the good performance of non-optimized Wiener filters for these datasets of 100-200 spatially tuned neurons, we showed that the amount of input data


could be reduced. The number of parameters of the linear model was decreased using two different approaches: pruning in time and pruning in the space of the electrodes. In the time dimension, we used gamma delay operators instead of ideal delays to decrease the number of coefficients while spanning the same memory depth (although with a coarser resolution). The gamma model produces statistically better models when compared to the Wiener filter.

Pruning in electrode space is achieved using two different strategies: selecting important channels, and using regularization methods to control complexity. The selection of channels with PCA (input neuron information) does not perform well; however, a combination of PCA and PLS that chooses subsets of neurons based on their importance in the joint (input and desired signals) space is able to statistically outperform the conventional Wiener filter in both tasks. Likewise, weight decay regularization also statistically outperforms the Wiener filter. However, the regularization parameter must be appropriately selected by cross-validation; otherwise the performance is very brittle. Therefore, we conclude that the tools of regularization theory are an asset for optimal modeling in BMIs, but the improvements are smaller than expected, in spite of being statistically significant.

Comparison of the performance of nonlinear versus linear models showed better performance of the nonlinear model for one dataset (food reaching), but not for the other (target reaching). Nonlinear models significantly outperformed their linear counterparts for the food reaching task, mostly due to their ability to better follow the non-movement (hand at rest) portions of the desired response. This is due to their ability to “shut off” parts of the network by virtue of nonlinearity. However, in the target reaching task, where


the hand is almost always moving, the performance was very similar, being statistically indistinguishable from the Wiener filter. Given the complexity of brain networks and no a priori reason for them to have linear properties, this was unexpected, and may reflect the fact that it is harder to train nonlinear models to the same specification as the linear ones. Or it may simply be that, due to the large input space of BMIs, finding a linear projection space of reduced dimension (2D or 3D) is sufficient when performance is the only metric. In addition, one would expect better performance from a nonlinear model when it matches in some ways the workings of the real brain network; otherwise it would falter. Linear models, on the other hand, already incorporate well-known properties of cortical neurons, such as directional tuning (typically described by a cosine function) and sensitivity to position, velocity and force.

The challenge arising from the performance saturation of both linear and nonlinear models led us to view BMI signal processing from a different angle. With the redundant representation of neuronal firing activity through the multiresolution analysis of spike trains, the performance level of a simple linear model (with regularization) was increased. Although the extent of the performance improvement was marginal with a simple reconstruction of the neuronal input space, these experiments showed us the importance of the encoding of neural information for BMI models. This preliminary result will lead us to seek a congruent set of encoding bases for neural information from which a decoding model can easily find a mapping to behavior.

We postulate that the nonlinear topologies may have practical advantages when BMIs are implemented in real-time digital signal processors. Work reported in [San03b] shows that when memory constraints and clock cycles are taken into consideration,


the RMLP requires a smaller computational bandwidth and fewer resources than the FIR filter trained with NLMS. However, training of the RMLP is still more complex than the NLMS algorithm, so further work to find nonlinear topologies that train faster should be sought. The successful implementation of echo state networks as a decoding model for BMIs gives one possible direction for addressing this issue. In terms of the regularization techniques, the gamma model and weight decay can easily be implemented in DSPs, but the subspace Wiener filters require a substantial increase in computation. Therefore, further work to simplify these algorithms should also be pursued. In terms of deployment, a BMI with 100 channels predicting 2D or 3D hand trajectories based on the regularized NLMS filter can be implemented in real time on a small Texas Instruments C33 WiFi board recently developed by our group.

A comment regarding the prediction performance of these algorithms in terms of the correlation coefficient (CC) is in order. The CC of all these algorithms is capped at 0.8 for the food reaching tasks and 0.7 for the target hitting tasks. It is important to investigate whether this limit is related to missing data (only a tiny percentage of the motor cortex neurons is probed) or whether it is the intrinsic spatio-temporal nonstationarity of the data that is not properly captured by this class of models, which learn based on stationarity assumptions. Another important issue that should not be forgotten in the design of better BMIs is how to effectively include neurophysiology knowledge both in the filter topologies and in the cost functions.

Besides well-earned clinical applications, what experimental BMIs newly bring to researchers is the opportunity to investigate the functional organization of neural ensembles associated with behavior in real time. And this investigation is often coupled


with the fitted models, such that we can gain a wealth of information from the model parameters. In this respect, we stepped into the development of engineering solutions for the analysis of neural systems and their relation with motor functions. In our work, two methodologies were proposed to provide valuable analytical tools: pattern determination using non-negative matrix factorization (NMF), and a real-time neuronal subset selection algorithm. Processed by NMF, the spatio-temporal patterns of the neuronal ensemble could be effectively represented in the NMF basis vectors, and the contribution of each pattern to the motor parameters could be estimated. One of the intriguing aspects of this analysis is that only by simple factorization of a neuronal firing count matrix could we obtain information about the spatio-temporal characteristics of neuronal populations. This arouses special interest since no one has demonstrated a way of ascertaining synchronization of a group of cells in such a simple fashion, without analyzing each cell's properties. Although we are not yet at a stage of fully understanding how NMF can find synchronization in very complex neuronal firing data, further understanding of NMF and BMIs will lead us to consolidate this tool for many neuroengineering applications.

However, like other current solutions, NMF is limited to the case of a stationary environment since it factorizes a block of data. Therefore, a second attempt was made to utilize the adaptive filter coefficients for probing time-varying changes of neuronal contributions to movements. In order to overcome the difficulty of tracking a huge MIMO system with a standard on-line adaptive algorithm such as LMS or RLS that is governed by constant parameters, we proposed to apply an on-line variable selection scheme to the linear filters. With a proper setting of the selection criterion, the on-line selection algorithm could spot a subset of neurons that was correlated with the present part of


movement. The profile of the selected subsets also matched preliminary methods including the sensitivity analysis and the cellular tuning analysis. Moreover, due to its real-time operation, this algorithm could detect the change of neuronal contributions over time. With further calibration of the procedure, we believe that this analytical tool will be a useful probe for the investigation of neural system analysis based on BMI setups.

Future Works

We will pursue research in BMIs along two main thrusts: the design of decoding models, and the analysis of neuronal ensemble coding with behavior. In browsing a wide range of linear and nonlinear models from adaptive and statistical learning theories to obtain better fitted models for BMIs, we have experienced that the existing real-time modeling frameworks might reach some limitations. Although we must admit that this limitation may come from the extremely sparse sampling of neuronal activity among millions of motor cortical cells, there still seems to be a good deal of potential to design a better model. This will also become plainer as the experimental paradigms and the goals of BMIs grow more complex and diverse. Hence, we will continue to seek opportunities to build more suitable BMI models. What we now see as feasible approaches is based on our preliminary studies: further development of mixtures of experts, and an adaptive system accounting for nonstationarity. The nonlinear mixture of competitive linear models (NMCLM) has demonstrated that with a proper mixing function, we could improve performance. The defective performance of this model for a continuous 2D target reaching BMI is probably due to the lack of an adequate mixing criterion. A different approach based on the switching Kalman filter model has also shown that a mixture of local models could boost performance [Wu04].
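One plausible form of such a mixing function, shown only as a sketch (a softmax gate over local linear experts, in the spirit of the mixture-of-experts literature [Jac91]; the parameterization is ours, not the NMCLM's exact one):

```python
import numpy as np

def mixture_predict(x, experts, gates):
    """Softmax-gated mixture of local linear models: each expert is a
    weight vector w_i, each row of `gates` scores an expert from the
    input, and the prediction is the gate-weighted sum of expert outputs."""
    scores = gates @ x
    g = np.exp(scores - scores.max())
    g /= g.sum()                               # softmax gate activations
    return sum(gi * (wi @ x) for gi, wi in zip(g, experts))

# Two experts; the gate scores strongly favor expert 0 for this input,
# so the prediction is essentially expert 0's output, w0 @ x = 1.0.
experts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
gates = np.array([[10.0, 0.0], [-10.0, 0.0]])
pred = mixture_predict(np.array([1.0, 0.5]), experts, gates)
```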
Hence, we will step forward in this direction, aiming at finding an appropriate localization and establishing a proper mixing function, thus boosting


the generalization and accuracy of the mixture. Although we understand that brain activity and movement generate completely nonstationary signals, we have not been able to create models which can track time-variant systems. The only close approaches have been based on state estimation using Kalman filters [Wu03, Wu04], or on a recursive estimation of tuning properties for population vector coding [Tay02]. A major difficulty in this type of modeling is that we have to track changes occurring in the joint space of input/output, which is not feasible after training without information about the desired signals. Hence, a possible alternative may be to utilize a database built during the training stage in suitable ways. With a proper extraction methodology for nonstationary characteristics in the joint space, and a precise construction of a pavement for the model parameters, we might be able to continuously update a parameter vector after training is finished. This research topic is only now beginning in the adaptive learning theory field, and further developments will lead us to a model for nonstationary environments in BMIs.

We have started to engineer signal processing tools for probing neural systems in experimental BMI setups, including pattern determination using NMF, and real-time subset selection. Although the applications of these tools to brain research seem promising, there are still remaining issues to be solved. As for NMF, we need to fully understand how the NMF learning algorithm captures repeating patterns in the input matrix. Also, it has been shown that for a 2D target reaching BMI we had to employ the multiresolution representation (with relatively larger scales) of the firing rate in order to extract repeating patterns with NMF. This leads us to enhance NMF to be effective for data with a complex structure. In the real-time neuronal subset selection method, we first have to establish a way to verify that the selected subset makes biological sense.
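For reference, the Lee-Seung multiplicative updates [Lee99] that underlie the NMF analysis can be written compactly (a sketch; the epsilon guards, iteration count, and random initialization are our choices):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factorize a nonnegative matrix V (e.g. neurons x time bin counts)
    as V ~ W H with W, H >= 0, via Lee-Seung multiplicative updates.
    Columns of W play the role of the basis vectors in which repeated
    spatio-temporal firing patterns appear."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis vectors
    return W, H

# Sanity check on an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 8))
W, H = nmf(V, 2, iters=500)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```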
We


have investigated the overall distribution of neuronal subsets by comparing it with the results of other methods, and demonstrated the compatibility between the subset distribution and the neurons sorted by the sensitivity analysis and NMF. However, temporally local selection resulted in rather varied subsets for the repeated reaching movements. These observations pose a question: is the time-varying subset selection caused by the nonstationary aspects of the neuronal ensemble, or by the stochastic properties of the adaptive algorithm? To address this question, we need to form a solid methodology to minimize the chance that the subsets are generated by the stochastic nature of the adaptive mechanism. Another issue in our approach is that selection is based on the assumption of a linear relationship. However, there is little chance that the true relationship between neuronal firing activity and motor parameters is linear. Hence, if we can design a model to track a time-variant nonlinear system, it will find more convincing subsets that are not restricted by linearity.


LIST OF REFERENCES

[And04] Andersen, R.A., Budrick, J.W., Musallam, S., Pesaran, B., & Cham, J.G. (2004) Cognitive neural prosthetics. Trends in Cog. Sci., 8 (11), pp. 486-493.

[Aus98] Aussem, A., Campbell, J., & Murtagh, F. (1998) Wavelet-based feature extraction and decomposition strategies for financial forecasting. J. Comp. Intelli. in Finan., 6, pp. 5-12.

[Bis95] Bishop, C.M. (1995) Neural networks for pattern recognition. Oxford, UK: Oxford University Press.

[Bou04] Bourien, J., Bellanger, J.J., Bartolomei, F., Chauvel, P., & Wendling, F. (2004) Mining reproducible activation patterns in epileptic intracerebral EEG signals: application to interictal activity. IEEE Trans. Biomed. Eng., 51 (2), pp. 304-315.

[Buc95] Buckheit, J. & Donoho, D.L. (1995) Improved linear discrimination using time-frequency dictionaries. Proc. of SPIE, 2569, pp. 540-551.

[Cao03] Cao, S. (2003) Spike train characterization and decoding for neural prosthetic devices. Ph.D. Dissertation, California Institute of Technology, Pasadena, CA, U.S.A.

[Car03] Carmena, J.M., Lebedev, M.A., Crist, R.E., O'Doherty, J.E., Santucci, D.M., Dimitrov, D.F., Patil, G., Henriquez, C.S., & Nicolelis, M.A. (2003) Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology, 1, pp. 192-208.

[Cha99] Chapin, J.K., Moxon, R.S., Markowitz, M.A., & Nicolelis, M.A. (1999) Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nature Neurosci., 2 (7), pp. 664-670.

[Che98] Chen, D. & Harris, J.G. (1998) An analog VLSI circuit implementing an orthogonal continuous wavelet transform. Proc. IEEE Intl. Conf. Electronics, Circuits and Sys., 2, pp. 139-142.

[Cov91] Cover, T., & Thomas, J. (1991) Elements of information theory. New York, NY: Wiley.

[Dau92] Daubechies, I. (1992) Ten lectures on wavelets. Philadelphia, PA: Society for Industrial and Applied Mathematics.


[dAv02] d'Avella, A. & Tresch, M.C. (2002) Modularity in the motor system: decomposition of muscle patterns as combinations of time-varying synergies. Adv. in Neural Info. Proc. Sys., 14, pp. 629-632.

[Dhi01] Dhillon, I.S., & Modha, D.S. (2001) Concept decompositions for large sparse text data using clustering. Machine Learning, 42 (1), pp. 143-175.

[Don02] Donoghue, J.P. (2002) Connecting cortex to machines: recent advances in brain interfaces. Nature Neurosci. Suppl., 5, pp. 1085-1088.

[Don04] Donoho, D. & Stodden, V. (2004) When does non-negative matrix factorization give a correct decomposition into parts? Adv. in Neural Info. Proc. Sys., 16.

[Dou94] Douglas, S. (1994) A family of normalized LMS algorithms. IEEE Sig. Proc. Letters, SPL-1 (3), pp. 49-51.

[Efr04] Efron, B., Johnstone, I., Hastie, T., & Tibshirani, R. (2004) Least angle regression. Annals of Stat., in press.

[Erd02] Erdogmus, D. (2002) Information theoretic learning: Renyi's entropy and its applications to adaptive system training. Ph.D. Dissertation, Department of Elec. and Comp. Eng., University of Florida, Gainesville, FL, U.S.A.

[Fah91] Fahlman, S., & Lebiere, C. (1991) The cascade-correlation learning architecture. Technical Report CMU-CS-90-100, School of Comp. Sci., Carnegie Mellon University, Pittsburgh, PA, U.S.A.

[Fan96] Fancourt, C.L., & Principe, J.C. (1996) Temporal self-organization through competitive prediction. Proc. Int. Conf. Acou., Speech, and Sig. Proc., 4, pp. 3325-3328.

[Far87] Farmer, J.D., & Sidorowich, J.J. (1987) Predicting chaotic time series. Phy. Rev. Letters, 50, pp. 845-848.

[Fra93] Frank, I. & Friedman, J. (1993) A statistical view of some chemometrics regression tools (with discussion). Technometrics, 35 (2), pp. 109-148.

[Fri04] Friehs, G.M., Zerris, V.A., Ojakangas, C.L., Fellows, M.R., & Donoghue, J.P. (2004) Brain-machine and brain-computer interfaces. Stroke, 35, pp. 2702-2705.

[Fur74] Furnival, G. & Wilson, R. (1974) Regression by leaps and bounds.
Technometrics, 16, pp. 499-511.

[Gei75] Geisser, S. (1975) The predictive sample reuse method with applications. J. American Stat. Assoc., 50, pp. 320-328.


[Gem92] Geman, S., Bienenstock, E., & Doursat, R. (1992) Neural networks and the bias/variance dilemma. Neural Comp., 4, pp. 1-58.

[Geo83] Georgopoulos, A.P., Caminiti, R., Kalaska, J.F., & Massey, J.T. (1983) Spatial coding of movement: a hypothesis concerning the coding of movement direction by motor cortical populations. Exp. Brain. Res. Suppl., 7, pp. 327-336.

[Gui01] Guillamet, D., Bressan, M., & Vitri, J. (2001) A weighted non-negative matrix factorization for local representations. Proc. IEEE Conf. on Comp. Vision and Patt. Rec., 1, pp. 942-947.

[Has01] Hastie, T., Tibshirani, R., & Friedman, J. (2001) Elements of statistical learning: data mining, inference and prediction. New York, NY: Springer-Verlag.

[Hay96a] Haykin, S. (1996) Adaptive filter theory. Upper Saddle River, NJ: Prentice Hall.

[Hay96b] Haykin, S. (1996) Neural networks: A comprehensive foundation. New York, NY: McMillan.

[Hoe70] Hoerl, A.E., & Kennard, R.W. (1970) Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12 (3), pp. 55-67.

[Isa00] Isaacs, R.E., Weber, D.J., & Schwartz, A.B. (2000) Work toward real-time control of a cortical neural prosthesis. IEEE Trans. Rehabil. Eng., 8, pp. 196-198.

[Jac91] Jacobs, R.A., Jordan, M.I., Nowlan, S.J., & Hinton, G.E. (1991) Adaptive mixtures of local experts. Neural Comp., 3, pp. 79-87.

[Jon93] de Jong, S. (1993) SIMPLS: An alternative approach to partial least squares regression. Chem. and Intelli. Lab. Sys., 18, pp. 251-263.

[Kal60] Kalman, R.E. (1960) A new approach to linear filtering and prediction problems. Trans. of ASME-J. of Basic Eng., 82 (Series D), pp. 35-45.

[Ken98] Kennedy, P.R., & Bakay, R.A. (1998) Restoration of neural output from a paralyzed patient by a direct brain connection. Neuroreport, 9, pp. 1707-1711.

[Kim03a] Kim, S.P., Rao, Y.N., Erdogmus, D., & Principe, J.C. (2003) A hybrid subspace projection method for system identification. Proc. Int. Conf. Acou., Speech, and Sig. Proc., VI, pp. 312-324.

[Kim03b] Kim, S.P., Sanchez, J.C., Erdogmus, D., Rao, Y.N., Wessberg, J., Principe, J.C., & Nicolelis, M.A. (2003) Divide-and-conquer approach for brain machine interfaces: nonlinear mixture of competitive linear models. Neural Networks, 16, pp. 865-871.
[Kim04] Kim, S.P., Rao, Y.N., Erdogmus, D., & Principe, J.C. (2004) Tracking of multivariate time-variant systems based on on-line variable selection. Presented at IEEE Int. Workshop Mach. Learn. Sig. Proc., Sao Luis, Brazil, Sept. 2004.
[Kim05a] Kim, S.P., Sanchez, J.C., Erdogmus, D., Rao, Y.N., Carmena, J.M., Lebedev, M.A., Nicolelis, M.A.L., & Principe, J.C. (2005) A comparison of optimal MIMO linear and nonlinear models for brain-machine interfaces. In preparation.
[Kim05b] Kim, S.P., Sanchez, J.C., Carmena, J.M., Nicolelis, M.A.L., & Principe, J.C. (2005) Real time neuronal subset selection in brain-machine interfaces. Presented at Conf. on Sys. Analysis, Data Mining, and Optimization in Biomed., Gainesville, FL, U.S.A., Feb. 2005.
[Kim05c] Kim, S.P., Carmena, J.M., Nicolelis, M.A.L., & Principe, J.C. (2005) Multiresolution representation and data mining of neural spikes for brain-machine interfaces. Presented at IEEE Neuroeng. Conf., Arlington, VA, U.S.A., Mar. 2005.
[Kim05d] Kim, S.P., Rao, Y.N., Erdogmus, D., Sanchez, J.C., Nicolelis, M.A.L., & Principe, J.C. (2005) Determining patterns in neural activity for reaching movements using non-negative matrix factorization. EURASIP J. Applied Sig. Proc., in press.
[Kro92] Krogh, A., & Hertz, J.A. (1992) A simple weight decay can improve generalization. Adv. in Neural Info. Proc. Sys., 4, pp. 950-957.
[Lar96] Larsen, J., Svarer, C., Andersen, L.N., & Hansen, L.K. (1996) Adaptive regularization in neural network modeling. In Orr, G.B. & Muller, K. (Eds.), Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science 1524, Germany: Springer, pp. 113-132.
[Lau04] Laubach, M. (2004) Wavelet-based processing of neuronal spike trains prior to discriminant analysis. J. Neurosci. Methods, 134, pp. 159-168.
[Lee02] Lee, D. (2002) Analysis of phase-locked oscillations in multi-channel single-unit spike activity with wavelet cross-spectrum. J. Neurosci. Methods, 115, pp. 67-75.
[Lee99] Lee, D.D., & Seung, H.S. (1999) Learning the parts of objects by non-negative matrix factorization. Nature, 401, pp. 788-791.

[Lee01] Lee, D.D., & Seung, H.S. (2001) Algorithms for non-negative matrix factorization. Adv. in Neural Info. Proc. Sys., 13, pp. 556-562.
[Lin97] Lin, S., Si, J., & Schwartz, A.B. (1997) Self-organization of firing activities in monkey's motor cortex: trajectory computation from spike signals. Neural Comp., 9, pp. 607-621.
[Mal02] Maulik, U., & Bandyopadhyay, S. (2002) Performance evaluation of some clustering algorithms and validity indices. IEEE Trans. on Pat. Anal. and Mach. Intelli., 24 (12), pp. 1650-1654.
[Mor99] Moran, D.W., & Schwartz, A.B. (1999) Motor cortical activity during drawing movements: population representation during spiral tracing. J. Neurophysiology, 82 (5), pp. 2693-2704.
[Mur04] Murtagh, F., Starck, J.L., & Renaud, O. (2004) On neuro-wavelet modeling. Decis. Sup. Sys. J., 37, pp. 475-484.
[Mus04] Musallam, S., Corneil, B.D., Greger, B., Scherberger, H., & Andersen, R.A. (2004) Cognitive control signals for neural prosthetics. Science, 305 (5681), pp. 258-262.
[Nea96] Neal, R. (1996) Bayesian Learning for Neural Networks. Cambridge, UK: Cambridge University Press.
[Nic01] Nicolelis, M.A.L. (2001) Actions from thoughts. Nature, 409, pp. 403-407.
[Nic03a] Nicolelis, M.A.L. (2003) Brain-machine interfaces to restore motor function and probe neural circuits. Nature Rev. Neurosci., 4, pp. 417-422.
[Nic03b] Nicolelis, M.A.L., Dimitrov, D., Carmena, J.M., Crist, R., Lehew, G., Kralik, J.D., & Wise, S.P. (2003) PNAS, 100, pp. 11041-11046.
[Nic97] Nicolelis, M.A.L., Ghazanfar, A.A., Faggin, B., Votaw, S., & Oliveira, L.M.O. (1997) Reconstructing the engram: simultaneous, multiple site, many single neuron recordings. Neuron, 18, pp. 529-537.
[Nob99] Nobunga, A.I., Go, B.K., & Karunas, R.B. (1999) Recent demographic and injury trends in people served by the model spinal cord injury care systems. Arch. Phys. Med. Rehabil., 80, pp. 1372-1382.
[Pal98] Palus, M., & Hoyer, D. (1998) Detecting nonlinearity and phase synchronization with surrogate data. IEEE Engr. Med. & Bio. Mag., 17 (6), pp. 40-45.
[Pri93] Principe, J.C., de Vries, B., & de Oliveira, P.G. (1993) The gamma filter: a new class of adaptive IIR filters with restricted feedback. IEEE Trans. Sig. Proc., 41 (2), pp. 649-656.

[Rao04] Rao, Y.N., Kim, S.P., Sanchez, J.C., Erdogmus, D., Principe, J.C., Carmena, J., Lebedev, M.A., & Nicolelis, M.A. (2004) Learning mappings in brain-machine interfaces with echo state networks. Accepted for 2005 Int. Conf. Acou., Speech, and Sig. Proc.
[Ree93] Reed, R. (1993) Pruning algorithms: a survey. IEEE Trans. on Neural Networks, 4 (5), pp. 740-747.
[San02a] Sanchez, J.C., Kim, S.P., Erdogmus, D., Rao, Y.N., Principe, J.C., Wessberg, J., & Nicolelis, M.A. (2002) Input-output mapping performance of linear and nonlinear models for estimating hand trajectories from cortical neuronal firing patterns. Proc. of Neural Net. Sig. Proc., pp. 139-148.
[San02b] Sanchez, J.C., Erdogmus, D., Principe, J.C., Wessberg, J., & Nicolelis, M.A.L. (2002) A comparison between nonlinear mappings and linear state estimation to model the relation from motor cortical neuronal firing to hand movements. Proc. of SAB'02 Workshop on Motor Control of Humans and Robots: On the Interplay of Real Brains and Artificial Devices, pp. 59-65.
[San03a] Sanchez, J.C., Erdogmus, D., Rao, Y.N., Principe, J.C., Nicolelis, M.A., & Wessberg, J. (2003) Learning the contributions of the motor, premotor, and posterior parietal cortices for hand trajectory reconstruction in a brain machine interface. Presented at IEEE EMBS Neural Eng. Conf., Cancun, Mexico, Sept. 2003.
[San03b] Sanchez, J.C., Carmena, J.M., Lebedev, M.A., Nicolelis, M.A., Harris, J.G., & Principe, J.C. (2003) Ascertaining the importance of neurons to develop better brain-machine interfaces. IEEE Trans. Biomed. Eng., 61, pp. 943-953.
[Sch01] Schwartz, A.B., Taylor, D.M., & Helms Tillery, S.I. (2001) Extraction algorithms for cortical control of arm prosthetics. Curr. Opin. Neurobiology, 11, pp. 701-707.
[Sch04] Schwartz, A.B. (2004) Cortical neural prosthetics. Ann. Rev. of Neurosci., in press.
[Sch80] Schmidt, E.M. (1980) Single neuron recording from motor cortex as a possible source of signals for control of external devices. Ann. Biomed. Eng., 8, pp. 339-349.
[Ser02] Serruya, M.S., Hatsopoulos, N.G., Paninski, L., Fellows, M.R., & Donoghue, J.P. (2002) Brain-machine interface: instant neural control of a movement signal. Nature, 416, pp. 141-142.
[She03] Shenoy, K.V., Meeker, D., Cao, S., Kureshi, S.A., Pesaran, B., Buneo, C.A., Batista, A.P., Mitra, P.P., Burdick, J.W., & Andersen, R.A. (2003) Neural prosthetic control signals from plan activity. NeuroReport, 14, pp. 591-597.

[She92] Shensa, M.J. (1992) Discrete wavelet transforms: wedding the à trous and Mallat algorithms. IEEE Trans. on Sig. Proc., 40, pp. 2464-2482.
[Slo93] Slock, D.T.M. (1993) On the convergence behavior of the LMS and the normalized LMS algorithms. IEEE Trans. Sig. Proc., 41 (9), pp. 2811-2825.
[Sto90] Stone, M., & Brooks, R.J. (1990) Continuum regression: cross-validated sequentially constructed prediction embracing ordinary least squares, partial least squares and principal components regression (with discussion). J. Royal Statist. Soc. Ser. B, 52, pp. 237-269.
[Tay02] Taylor, D.M., Helms Tillery, S.I., & Schwartz, A.B. (2002) Direct cortical control of 3D neuroprosthetic devices. Science, 296, pp. 1829-1832.
[Tib96] Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 58 (1), pp. 267-288.
[Wes00] Wessberg, J., Stambaugh, R., Kralik, J.F., Beck, P.D., Laubach, M., Chapin, J.K., Kim, J., Biggs, J., Srinivasan, M.A., & Nicolelis, M.A. (2000) Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408 (6810), pp. 361-365.
[Wil03] Wilds, S. Seeding non-negative matrix factorizations with the spherical k-means clustering. M.S. thesis, Dept. of Applied Math., University of Colorado, Boulder, CO, U.S.A.
[Wol02] Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., & Vaughan, T.M. (2002) Brain computer interfaces for communication and control. Clin. Neurophysiol., 113, pp. 767-791.
[Wol75] Wold, H. (1975) Soft modeling by latent variables: the nonlinear iterative partial least squares (NIPALS) approach. Pers. in Prob. and Stat., In Honor of M.S. Bartlett, pp. 117-144.
[Wu03] Wu, W., Black, M.J., Gao, Y., Bienenstock, E., Serruya, M., Shaikhouni, A., & Donoghue, J.P. (2003) Neural decoding of cursor motion using a Kalman filter. Adv. in Neural Info. Proc. Sys., 15, pp. 1-8.
[Wu04] Wu, W., Black, M.J., Mumford, D., Gao, Y., Bienenstock, E., & Donoghue, J.P. (2004) Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Trans. Biomed. Eng., 51 (6), pp. 933-942.
[Zhe99] Zheng, G., Starck, J.L., Campbell, J.G., & Murtagh, F. (1999) Multiscale transforms for filtering financial data streams. J. Comp. Intelli. in Finan., 7, pp. 18-35.

BIOGRAPHICAL SKETCH

Sung-Phil Kim was born in Seoul, South Korea. He received a B.S. from the Department of Nuclear Engineering at Seoul National University, Seoul, South Korea, in 1994. From 1994 to 1997, he worked for the Network Solution and Sales Supports team at Comtec Systems, Inc., Seoul, South Korea. In 1998, he entered the Department of Electrical and Computer Engineering at the University of Florida in pursuit of a Master of Science. He joined the Computational NeuroEngineering Laboratory as a research assistant in 2000, and received an M.S. in December 2000 from the Department of Electrical and Computer Engineering at the University of Florida. From 2001, he continued to pursue a Ph.D. in the Department of Electrical and Computer Engineering at the University of Florida under the supervision of Dr. Jose C. Principe. In the Computational NeuroEngineering Laboratory, he has investigated decoding models and analytical methods for brain-machine interfaces. His research is funded by the Defense Advanced Research Projects Agency, and is part of a joint research project with Duke University, the State University of New York, the Massachusetts Institute of Technology, Plexon, Inc., and the University of Florida.


Permanent Link: http://ufdc.ufl.edu/UFE0010077/00001

Material Information

Title: Design and Analysis of Optimal Decoding Models for Brain-Machine Interfaces
Physical Description: Mixed Material
Copyright Date: 2008

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0010077:00001



DESIGN AND ANALYSIS OF OPTIMAL DECODING MODELS FOR BRAIN-
MACHINE INTERFACES


By

SUNG-PHIL KIM


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2005


Copyright 2005

by

Sung-Phil Kim


This document is dedicated to my mother and my wife.


ACKNOWLEDGMENTS

I would like to thank God for His endless love, giving me the best at all times in my life. I would also like to thank my mother, my brothers, and my wife for their love, support, and solid belief in me.

I would like to thank Dr. Jose C. Principe for his untiring support, advice, and guidance. I will never forget how he inspired me to think as a researcher.

I am very grateful to Dr. John G. Harris, Dr. Michael C. Nechyba, Dr. Karl Gugel, and Dr. Mark C.K. Yang for their support and advice on the brain-machine interface research.

I am also exceptionally grateful to Dr. Justin C. Sanchez, Dr. Yadunandana N. Rao, Dr. Deniz Erdogmus, and Shalom Darmanjian for their sincere support and collaboration.

I must acknowledge Dr. Miguel A.L. Nicolelis and Dr. Jose M. Carmena for the opportunity to conduct this research with their support.

Final thanks go to Yuseok Ko, Jeongho Cho, and Dongho Han, who have always helped and encouraged me.


TABLE OF CONTENTS


page

ACKNOWLEDGMENTS .......... iv

LIST OF TABLES .......... viii

LIST OF FIGURES .......... ix

ABSTRACT .......... xi

CHAPTER

1 INTRODUCTION .......... 1

    Review of BMI Signal Processing .......... 3
    Approaches .......... 6
    Outline .......... 8

2 EXPERIMENTAL SETUPS FOR BRAIN-MACHINE INTERFACES .......... 10

    Recording of Electrical Activity of Neuronal Ensembles ..........
    Behavioral Tasks .......... 11
    Properties of Data .......... 12
    Neuronal Firing Patterns .......... 13
    Hand Movements .......... 16

3 LINEAR MODELING .......... 19

    Linear Modeling for BMIs .......... 19
    The Wiener Filter .......... 23
    Stochastic Gradient Learning .......... 27
    Other Linear Modeling .......... 28

4 REGULARIZED LINEAR MODELING .......... 31

    Dimension Reduction Using Subspace Projection .......... 31
    A Hybrid Subspace Projection ..........
    Design of a Decoding Model Using the Subspace Wiener Filter .......... 35
    Parsimonious Modeling in Time Using the Gamma Filter .......... 37
    Regularization by Parameter Constraints .......... 41
    Review of Shrinkage Methods .......... 43
        Shrinkage methods ..........
        The relationship between subspace projection and ridge regression .......... 45
        Comparison of shrinkage methods .......... 45
    Regularization Based on the L2-Norm Penalty .......... 47
    Regularization Based on the L1-Norm Penalty .......... 50

5 NONLINEAR MIXTURE OF MULTIPLE LINEAR MODELS .......... 54

    Nonlinear Mixture of Linear Models Approach .......... 55
    Nonlinear Mixture of Competitive Linear Models .......... 55
    Time Delay Neural Networks .......... 59
    BMIs Design Using NMCLM .......... 59
    Analysis ..........
    Evaluation of Training Performance for NMCLM ..........
    Analysis of Linear Filters .......... 62

6 COMPARISON OF MODELS .......... 64

    Comparison of Model Parameters .......... 67
    Performance Evaluation ..........
    Statistical Performance Comparison .......... 70

7 MULTIRESOLUTION ANALYSIS FOR BMI ..........

    Multiresolution Analysis of Neuronal Spike Trains .......... 76
    Multiresolution Analysis ..........
    Multiresolution Analysis for the BMI Data .......... 80
    The Analysis of the Linear Model Based on the Multiresolution Representation ..........
    Comparison of Models with the Multiresolution Representation .......... 85
    Combination of Linear and Nonlinear Models .......... 89
    Nonlinear Modeling .......... 91
    Simulations .......... 93
    Discussions .......... 95

8 DETERMINATION OF NEURONAL FIRING PATTERNS USING NON-NEGATIVE MATRIX FACTORIZATION .......... 99

    Nonnegative Matrix Factorization ..........
    Factorization of Neuronal Bin Count Matrix .......... 103
        Data Preparation .......... 103
        3D food reaching data .......... 103
        2D target reaching data .......... 105
    Analysis of Factorization Process .......... 105
        Choice of the number of bases ..........
        How does NMF find repeated patterns? .......... 106
        Local minima problem ..........
    Case Study A: 3D Food Reaching ..........
    Case Study B: 2D Target Reaching ..........
    Model Improvement Using NMF .......... 119
    Discussions .......... 121

9 REAL TIME NEURONAL SUBSET SELECTION .......... 123

    On-Line Variable Selection .......... 126
    On-Line Channel Selection Method .......... 129
    Determination of Selection Criterion ..........
        Determination of Threshold in LAR Using Surrogate Data .......... 132
        Conditional Selection Criterion ..........
    Experiments of Neuronal Subset Selection .......... 141
    Discussions .......... 149

10 CONCLUSIONS AND FUTURE WORKS .......... 154

LIST OF REFERENCES .......... 162

BIOGRAPHICAL SKETCH .......... 169


LIST OF TABLES


Table                                                                                                page

2-1 The distributions of the sorted neuronal activity for each monkey in motor cortical areas .......... 11

4-1 Procedure of the LAR algorithm ..........

6-1 The generalization performances of linear models and nonlinear models for the 3D food reaching task .......... 69

6-2 The generalization performances of linear models and nonlinear models for the 2D target reaching task .......... 69

6-3 The t-test results for the difference of the magnitude of error vectors from the test dataset between the Wiener filter and other models ..........

7-1 The number of the selected neurons in each cortical area .......... 85

7-2 The number of the nonzero weights .......... 87

7-3 The number of neurons selected by LAR for each model .......... 88

7-4 Performance comparison between the multiresolution and the single resolution models .......... 88

7-5 Performance comparison between the combinatory model and the single linear model .......... 94

8-1 Comparison of important neurons: food reaching .......... 113

8-2 Comparison of important neurons: target reaching .......... 119

8-3 Performance evaluation of the Wiener filter and the mixture of multiple models based on NMF .......... 120

9-1 Procedure of the LAR algorithm: revisited .......... 126

9-2 The modified LAR algorithm for on-line variable selection .......... 128


LIST OF FIGURES


Figure                                                                                               page

1-1 A system identification block diagram for BMIs .......... 2

2-1 An experimental setup of the 3D reaching task .......... 12

2-2 An experimental setup of the 2D target reaching task. The monkey moves a cursor (yellow circle) to a randomly placed target (green circle), and is rewarded if the cursor intersects the target .......... 13

2-3 An example of the binned data .......... 13

2-4 The plots of the average (dot) and the standard deviation (bar) for each neuron of three monkeys .......... 14

2-5 The trajectories of the estimated mean firing rates for movement (solid line) and rest (dotted line) over a sequence of subsets .......... 15

2-6 Illustrations of nonstationary properties of the input autocorrelation matrix .......... 16

2-7 Sample trajectories of (a) 3D food reaching, and (b) 2D target reaching movements .......... 17

2-8 The db6 continuous wavelet coefficients of trajectory signals of (a) 3D food reaching, and (b) 2D target reaching .......... 18

3-1 The topology of the linear filter designed for BMIs in the case of the 3D reaching task .......... 20

3-2 The Hinton diagram of the weights of the Wiener filter for food reaching .......... 26

3-3 The Hinton diagram of the weights of the Wiener filter for target reaching .......... 27

4-1 The overall diagram of the subspace Wiener filter .......... 34

4-2 The contour map of the validation MSE for (a) food reaching, and (b) target reaching .......... 35

4-3 The first three projection vectors in PCA for (a) food reaching, and (c) target reaching, and PLS for (b) food reaching, and (d) target reaching, respectively .......... 37

4-4 An overall diagram of a generalized feedforward filter .......... 39

4-5 The contour maps of the validation MSE computed at each grid point for (a) food reaching, and (b) target reaching .......... 41

4-6 Contours of the Lp-norm of the weight vector for various values of p in the 2D weight space .......... 46

4-7 Convergence of the regularization parameter over iterations; (a) food reaching, and (b) target reaching .......... 49

4-8 The histogram of the magnitudes of weights over all the coordinates of hand position, trained by weight decay (solid line) and NLMS (dotted line); (a) food reaching, and (b) target reaching .......... 50

4-9 An illustration of the LAR procedure ..........

5-1 An overall diagram of the nonlinear mixture of competitive linear models .......... 56

5-2 Demonstration of the localization of competitive linear models .......... 58

5-3 Frequency response of ten FIR filters; (left) pole-zero plots, (right) frequency responses .......... 63

6-1 The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x-, y-, and z-coordinates for the 3D food reaching task on a sample part of the test data .......... 65

6-2 The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x- and y-coordinates for the 2D target reaching task on a sample part of the test data .......... 66

6-3 The distributions of normalized weight magnitudes of four linear models over neuronal space for (a) food reaching, and (b) target reaching .......... 68

6-4 Comparison of the CEM of the nine models for (a) the food reaching task, and (b) the target reaching task .......... 70

7-1 An illustration of the scaled convolution output from the Haar à trous wavelet transform .......... 81

7-2 An example of the series of u(k) along with the corresponding hand trajectories .......... 82

7-3 The demonstration of the relation between the neuronal firing activity representation at each scale (solid lines) and the hand position trajectory at the x-coordinate (dotted lines) .......... 83

7-4 The distribution of the selected input variables for (a) x-coordinate and (b) y-coordinate of position, and (c) x-coordinate and (d) y-coordinate of velocity .......... 86

7-5 The CEM curves of the single resolution model (red dotted lines) and the multiresolution model (black solid lines) .......... 89

7-6 An example of the residual trajectory from a linear model (the x-coordinate) .......... 93

7-7 An example of the output trajectories of the combinatory network and single linear model .......... 95

7-8 Tap outputs from two generalized feedforward filters for a neuronal bin count input with different delays: the gamma, and Haar wavelet .......... 97

8-1 Segmentation of the reaching trajectories: reach from rest to food, reach from food to mouth, and reach from mouth to rest position ..........

8-2 The NMF results for food reaching .......... 111

8-3 The NMF results for target reaching .......... 114

8-4 The hand position samples collected along with peaks in each NMF encoding (left), and the mean and variance of each set (right) .......... 116

8-5 The probabilities of the occurrence for hand position to be in each of sixteen angle bins .......... 117

8-6 Tuning curves of neuronal firing patterns encoded in each NMF basis for 16 angle bins .......... 118

9-1 The diagram of the architecture of the real time neuronal subset selection method .......... 131

9-2 An illustration of the successive maximum correlation over stages in the case of two variables (channels) ..........

9-3 Examples of the maximum absolute correlation curve in LAR .......... 135

9-4 Neuronal subset selection examples ..........

9-5 Demonstration of filter outputs before subset selection; (top) synchronized data, (bottom) de-synchronized data ..........

9-6 Neuronal subset selection conditioned by the correlation between filter outputs and desired response .......... 142

9-7 Demonstration of the robustness of the algorithm to initial conditions .......... 143

9-8 An example of the outputs of two tracking systems with (solid line) and without (dashed line) on-line channel selection .......... 144

9-9 Neuronal subset selection for all three coordinates of food reaching movement .......... 145

9-10 Neuronal subset selection over 2,000-second data; (a) subsets in the early part, and (b) subsets in the late part of the data .......... 146

9-11 Neuronal subset selection for a 2D target reaching BMI .......... 148

9-12 2D hand trajectories in five sample data segments selected in Fig. 9-11 .......... 148

9-13 Selection of individual neurons over a series of reaching movements .......... 151

9-14 The distribution of the subset size over a series of reaching movements .......... 153

9-15 Comparison of the average misadjustment per movement between the standard MIMO system learned by LMS and the MIMO system with on-line channel selection .......... 153


Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

DESIGN AND ANALYSIS OF OPTIMAL DECODING MODELS FOR BRAIN-
MACHINE INTERFACES

By

Sung-Phil Kim

May 2005

Chair: Jose C. Principe
Major Department: Electrical and Computer Engineering

The role of decoding models in the design of brain-machine interfaces (BMIs) is to approximate the mapping from the firing activity of the cortical neuronal ensemble to the associated behavior. The linear model, which in a statistical signal processing setting is called the Wiener filter, has been the primary vehicle to estimate this mapping. One of the purposes of this dissertation is to conduct an extensive comparative study of multi-input, multi-output (MIMO) decoding models in two experimental BMI settings in which monkeys perform dissimilar behavioral tasks. The issues in decoding model estimation for BMIs include the large input dimensionality, the spatio-temporal neural firing patterns, nonstationarity, and the adequacy of the linearity assumption. These issues lead us to concentrate our studies on four research directions: the topology of the models (linear versus nonlinear), regularization both in space and time, preprocessing from discrete events to continuous input variables, and ways to cope with the nonstationarity present in the data. The comparison of the optimized linear and nonlinear MIMO models with the Wiener filter based on generalization performance shows that the improvement, although statistically significant, is minor with respect to the baseline.

A second line of investigation deals with the analysis of motor cortex activity based on experimental BMI setups. First, we propose an input-based strategy using non-negative matrix factorization (NMF) to uncover spatio-temporal patterns in neuronal ensembles correlated with behavior. The specific spatio-temporal patterns of neural activity can be determined from the NMF basis vectors using only the input data, and their temporal relationships with behavior can be extracted from the NMF encodings. Second, a real time neuronal subset selection method is developed to find the subset of neurons that is most relevant to the kinematic trajectories at every sampling time instance. The method, based on an on-line implementation of the LAR (least angle regression) algorithm, requires the availability of the desired response. The experimental analysis demonstrates the nonstationary characteristics of the relationship between the activity of the neuronal ensemble and behavior.


CHAPTER 1
INTRODUCTION

The direct control of machines by thought has been rather close to fiction until

recent developments in neuroscience which seek direct interfaces between brain and

machines. This emerging field has been called brain-machine interfaces (BMIs). One of

the clinical demands driving BMIs is restoring motor functions in 'locked-in' patients

who suffer from paralysis caused by traumatic or degenerative lesions. In fact, there are

more than 200,000 patients in the United States of America who live with partial or total

permanent paralysis, with 11,000 new cases each year [Nob99]. Eventually, BMIs may

also impact the very paradigm of human computer interfaces.

Several research groups have demonstrated that subjects can control robotic arms

or computer cursors on screen by using their brain activity [Car03, Cha99, Ken98,

Mor99, Mus04, Ser02, She03, Tay02 and Wes00]. These demonstrations in rodents,

primates, and human patients show promising ways to bypass spinal cord lesions. In

these experiments, up to a hundred electrodes are chronically implanted in motor areas in

the cortex to record the electrical activities of hundreds of neurons. The control signals

for external devices are extracted by a series of signal processing modules including

spike detection/sorting algorithms and decoding algorithms. This experimental BMI

paradigm, which is illustrated in Fig. 1-1, relies on three basic elements. Long-term and

stable recordings enable us to obtain a mass of neuronal activity through microelectrode

arrays. A mathematical model extracts the information of motor parameters from

neuronal activity recordings in real time. A prosthetic device such as a robotic arm









receives control signals from a mathematical model to coordinate the subject's intended

movement.


[Figure: spike sorting → spike binning → Wiener filter, trained against the desired response d(n)]

Figure 1-1. A system identification block diagram for BMIs.

This dissertation mainly focuses on building mathematical models in BMIs. These

models utilize spike trains provided by spike sorting algorithms as inputs, and, as desired

responses, movement parameters such as hand position, velocity, or gripping force,

which are synchronously recorded by optical sensors during the motor performance of the

subject. The design of these models can be viewed as a system identification problem

[Hay96a]. Recent investigations in BMI modeling have demonstrated successful

estimation of the transfer function from motor cortex neural firing patterns to hand

movement trajectory of primates, with a relatively simple Wiener filter [Cha99; Mor99;

Ser02 and Wes00]. If one thinks about the complexity of the motor system, starting from

the intricate firing modulation of millions of cells in the cortex, passing through the

added complexity of the spinal cord functionality up to the spatio-temporal firing of

motor neurons that control each muscle fiber, it is rather surprising that a simple linear

projection in the input space is able to capture the behavior of this complex system with

correlation coefficients around 0.8 between the desired and actual trajectories. This leads









us to look from an optimal signal processing framework at the challenges and

opportunities of this class of models for BMIs.

There are several challenges in this application arising from the BMI setup.

First, the spatio-temporal patterns in spike train data are not fully known and thus cannot

guide the proper design of the models. Second, this is a MIMO (multiple

inputs multiple outputs) mapping problem, with a large dimensionality (i.e., for 100

neuronal inputs, the Wiener filter with 10 taps has 1,000 free parameters for each

coordinate of outputs). Third, the statistics are not constant either in time or in space.

Fourth, some neuronal firings are not related to the task and constitute therefore noise in

the data. Fifth, there is no way of knowing if the true mapping is linear or nonlinear. In

spite of all these difficult questions, the linear model learns the trajectory with a mean

correlation coefficient of 0.6 to 0.8; it is therefore instructive to undertake a systematic

analysis of the issues to derive Wiener filters for BMIs.

Review of BMI Signal Processing

An approach to restore motor functions in paralyzed patients using direct interfaces

between cortical motor areas and artificial actuators was first proposed by Schmidt

[Sch80]. He proposed connecting the electrical activity of a cortical neuronal

ensemble to an actuator in order to bypass spinal cord injuries.

Recently, Chapin and co-workers demonstrated that rats could be trained to receive

water-drop rewards by pressing a lever that controlled the rotation of a robotic arm

[Cha99]. A linear model learned by least squares utilized the activities of 21-46 neurons

in primary motor cortex (M1) as inputs to predict the motion of the robot. The rats

learned to control the robotic arm using neuronal signals alone, without moving their arms.









Afterwards, other research groups joined this line of study of experimental

BMIs. Wessberg et al. [Wes00], in a joint research group including Duke University,

SUNY, and MIT, demonstrated real-time control of a robotic arm using up to 100

neuronal activities. The Wiener filter or time delay neural network (TDNN) was designed

to predict the 3D hand trajectories of food reaching movements using neuronal bin count

data with 100ms non-overlapping time windows embedded by a 10-tap delay line.

Carmena et al. at Duke University also showed that with a relatively large number of

cells (>100), monkeys could brain-control a robot arm to perform two distinct

motor tasks including reaching and grasping [Car03]. In these experiments, monkeys

could control a real robotic actuator through a closed-loop BMI. They also reported the

change of the contributions of neuronal populations during learning.

Taylor et al. at Arizona State University presented a 3D cursor tracking BMI in

their report [Tay02], where a monkey made arm movements in a 3D virtual environment

to reach a randomly placed target. Using 18 cells from the primary motor cortex (M1), they

investigated the effect of visual feedback on movements by comparing open-loop

trajectories of hand-controlled cursor movements and closed-loop trajectories of brain-

controlled cursor movements. A co-adaptive movement prediction algorithm based on a

population vector method, which was developed to track changes in cell tuning properties

during brain controlled movement, iteratively refines the estimate of cell tuning

properties as a subject attempts to make a series of brain-controlled movements. Other

works on decoding algorithms in BMIs were reviewed in Schwartz et al. [Sch01]. In this

review, parametric linear models including the population vector algorithm [Geo83] and

the Wiener filter, and non-parametric methods including the maximum likelihood









estimate, the principal component analysis (PCA) [Isa00], and self-organizing feature

maps (SOFM) [Lin97] were introduced as motor-related information extraction

algorithms from neural activity for BMIs.

Serruya et al. in Donoghue laboratory at Brown University also demonstrated that

monkeys tracked a continuously moving visual object on a video monitor by moving a

manipulandum [Ser02]. The Wiener filter with 50ms bins embedded by 20 tap delay lines

was used to predict hand position from 7-30 M1 cell activities. They also showed that the

time required to acquire targets using brain control was very similar to that under hand control. Wu

et al. in the same group proposed using a Kalman filter as a decoding model [Wu03] for

finding the probabilistic relationship between motion and mean firing rates (for 140ms time

windows). They extended this Kalman filtering framework to build a mixture of linear

models using a switching Kalman filter model in which the hidden state variables were

estimated by the expectation-maximization (EM) algorithm [Wu04].

Andersen and co-workers at Caltech implanted microelectrode arrays in the posterior

parietal cortex (PPC) which is assumed to be responsible for planning of movements

[And04, Mus04 and She03]. High-level signals related to a goal of movements were

decoded using the maximum likelihood estimate of cursor positions from ~ 40 neuronal

activities in PPC of monkeys. They demonstrated that neuronal activities in PPC could

provide information about movement plans; thus they can be used for various neural

prosthetic applications without moving limbs.

Kennedy et al. first demonstrated a human BMI by implanting a special electrode

in the human neocortex to extract signals to control a cursor on a computer monitor










[Ken98]. Using spike trains as input to a computer, severely disabled patients could learn

to move a cursor.

Our group at the University of Florida in collaboration with Duke University has

designed decoding models for 3D food reaching or 2D target reaching BMIs, including

the Wiener filter and recursive multilayer perceptrons (RMLP) [San02a]. Based on the

sensitivity analysis in the trained linear and nonlinear models, we improved the

performance of models using only relevant neuronal activities [San03b]. Further

development of switching multiple linear models combined by a nonlinear network was

proposed by Kim et al. to increase prediction performance in food reaching [Kim03b].

Recently, Rao et al. demonstrated that echo state networks could be used as an alternative

to nonlinear models such as RMLP or TDNN, with relatively uncomplicated training

[Rao04].

Overall reviews for BMIs can be found in the following studies: [And04, Don02,

Nic01, Nic03a and Sch04]. For overall reviews of brain-computer interfaces (BCIs), see

Wolpaw et al. [Wol02] and Friehs et al. [Fri04].

Approaches

In this dissertation, we will address the following issues: First, we will apply the

Wiener filter algorithm [Hay96a] to the BMI application and show its performance on

two types of training data: food reaching and target reaching experimental datasets.

This algorithm will serve as the gold standard for the other adaptive methods developed.

Then we will compare other adaptive algorithms that reach the same solution in the

statistical sense for stationary data, but may handle the nonstationary nature of the data

better. We are referring to the least mean square algorithm (LMS) that will be

implemented here in its normalized form (NLMS) [Hay96a].
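A minimal sketch of the NLMS update for a single output; the 3-tap toy system, step size mu, and small regularizer eps below are illustrative choices, not the settings used in this work:

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One normalized LMS update: w <- w + mu * e * x / (eps + ||x||^2)."""
    e = d - w @ x                          # instantaneous prediction error
    w = w + (mu / (eps + x @ x)) * e * x   # step size normalized by input power
    return w, e

# Toy identification of a known 3-tap system from noiseless samples.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.1])
w = np.zeros(3)
for _ in range(5000):
    x = rng.standard_normal(3)
    d = w_true @ x
    w, _ = nlms_step(w, x, d)
```

The normalization by the input power is what makes the step size insensitive to the scale of the firing counts, which is the property motivating its use on nonstationary data.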









The issue of the number of free parameters of the model will be handled by three

different techniques. The first is the subspace Wiener filter, which first projects the input

data using principal component analysis (PCA) [Hay96b], and then derives a Wiener

filter to the desired response. Although PCA has been used as a major subspace

projection method, it does not orient the projection to take advantage of the desired

response structure. As an alternative, we propose a new idea of seeking subspace

decomposition in the joint space through a hybrid subspace method, which combines the

criterion of PCA and partial least squares (PLS) [Jon93 and Kim03a]. We also implement

reduction in the number of degrees of freedom of the model by using a generalized

feedforward filter based on the gamma tap delay line [Pri93], which has the ability to

cover the same memory depth of the tap delay line with smaller filter order. The third

method implemented uses on-line regularization based on the L1-norm penalty [Has01],

which decreases the values of unimportant weights through training. The problem of

finding the optimal parameter for the penalty function will be addressed. The next issue

covered in this paper relates to the adequacy of the linear modeling. We design a

nonlinear mixture of switching, competitive linear models that implement a locally linear

but globally nonlinear model [Kim03b]. This structure can be thought of as a time delay

neural network (TDNN) [Hay96b] that is trained in a different way to conquer the

difficulty of training thousands of parameters with relatively small data sets.
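The gamma tap delay line mentioned above can be sketched as a cascade of leaky first-order stages; the recursion below follows the standard gamma memory form, with the stage count and mu chosen only for illustration:

```python
import numpy as np

def gamma_memory(x, K, mu):
    """Run input sequence x through a K-stage gamma delay line.
    Each stage k applies a leaky first-order recursion instead of a pure delay:
        s_k(n) = (1 - mu) * s_k(n-1) + mu * s_{k-1}(n-1),  with s_0(n) = x(n).
    With mu = 1 this reduces to an ordinary tap delay line; mu < 1 trades time
    resolution for a memory depth of roughly K / mu using only K taps.
    Returns an array of shape (len(x), K + 1), one column per stage."""
    S = np.zeros((len(x), K + 1))
    for n in range(len(x)):
        S[n, 0] = x[n]
        if n > 0:
            for k in range(1, K + 1):
                S[n, k] = (1 - mu) * S[n - 1, k] + mu * S[n - 1, k - 1]
    return S

impulse = np.zeros(20)
impulse[0] = 1.0
taps = gamma_memory(impulse, K=3, mu=1.0)  # mu = 1: stage k is a pure k-sample delay
```

Lowering mu below 1 spreads the impulse response of each stage over time, which is how the same memory depth is covered with a smaller filter order.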

An important contribution of BMIs to brain-related research fields is opening a new

avenue for experimental studies of the real-time operation of neural

systems in behaving animals [Nic03a]. For instance, using experimental BMIs, we may

be able to explore the real-time nonstationary operations of neuronal ensemble in









association with behavior. Also, the cellular contributions in a large neuronal population

to the motor parameter encoding can be analyzed through BMIs.

In this respect, we investigate the properties of neuronal ensembles

synchronized with behavior in BMIs using several approaches. First, we will seek a way

to represent neuronal activity more efficiently in the context of BMI modeling. Through

the multiresolution analysis [Mur04] of neural spike trains, we can construct a richer

input space to possibly extract more encoded information, thus enhancing prediction

models [Kim05c]. The issue of designing suitable models in this extended input feature

space will be addressed. Second, we will demonstrate an approach to determine neuronal

spatio-temporal patterns using nonnegative matrix factorization [Lee99]. This

mathematical procedure, which has been introduced for image processing, can be utilized

to extract spatio-temporal patterns of different neuronal populations without training of

models [Kim05d]. Third, a real time neuronal subset selection algorithm is developed to

find out which groups of neuronal activities exhibit relevance to a particular hand

trajectory, and to investigate nonstationary characteristics of the neuronal ensemble in time

[Kim05b]. This selection scheme is developed based on linear filters used for BMIs.

Outline

The dissertation is organized as follows: the experimental BMI paradigms and the

descriptions of the recorded datasets are presented in chapter 2. We revisit the

applications of linear adaptive filters, including the Wiener filter, to BMIs in chapter 3.

In chapter 4, several regularization methods are investigated to solve the problem of a

large number of free parameters. In chapter 5, the technique of nonlinear modeling

using competitive multiple linear models is introduced and discussed. The experimental

results and the comparisons of all the models for the two different behavioral tasks are









summarized in chapter 6. Further developments of BMI models based on the

multiresolution analysis are demonstrated in chapter 7. Several analytical methods

including NMF and on-line subset selection using experimental BMIs are introduced in

chapters 8 and 9. Conclusions and future research directions are discussed in chapter 10.















CHAPTER 2
EXPERIMENTAL SETUPS FOR BRAIN-MACHINE INTERFACES

The datasets used for the prediction models were collected in an experimental

BMI paradigm by the Nicolelis lab at Duke University. In this paradigm, the electrical

activity of cortical neuronal ensembles from awake, behaving primates was recorded and

used by statistical models for controlling a robotic arm that reproduced the arm movements

of the primates. In this chapter, we describe the recording of the activity of

neuronal ensembles and the experimental paradigm for behavioral tasks. The properties

of the datasets are also presented.

Recording of Electrical Activity of Neuronal Ensembles

Multiple microwire arrays were chronically implanted in multiple cortical areas of

one adult female owl monkey (Aotus trivirgatus) named Belle, and two adult female

Rhesus monkeys (Macaca mulatta) named Ivy and Aurora. In the owl monkey,

multiple low-density microelectrode arrays (NBLabs, Denison, TX), each including 16-

32 50-µm Teflon-coated stainless steel microwires, were implanted in the left dorsal premotor

cortex (PMd), left primary motor cortex (M1), left posterior parietal cortex (PP), right

PMd and M1, and right PP cortex [Wes00]. In the first Rhesus monkey (Aurora),

multiple high-density microelectrode arrays developed at Duke University were

implanted in the right PMd, right M1, right somatosensory cortex (S1), right supplementary

motor area (SMA), and the left M1 cortex. In the second Rhesus monkey (Ivy), multiple

high-density microelectrode arrays were implanted in the right PP, M1, and SMA cortex

[Car03 and Nic03b].









After surgical procedures, a multichannel acquisition processor (MAP, Plexon,

Dallas, TX) cluster was used in the experiments to record the neuronal action potentials

simultaneously. The spikes of single neurons from each microwire were discriminated

based on time-amplitude discriminators and a principal component (PC) algorithm

[Nic97 and Wes00]. Analog waveforms of the action potential and the firing time of each

spike were stored. The firing times are binned within a 100ms nonoverlapping window,

yielding a sequence of counts of the number of spikes in each bin. The distribution of the

activity from the sorted neurons over cortex is presented in table 2-1 for each monkey. In

this table, the indices of the sorted neuronal activity based on electrode arrays are used

for identification purposes. These indices will be used throughout the remainder of the

dissertation. Note that in table 2-1, contra indicates cortical areas in the hemisphere

opposite to the moving hand, and ipsi indicates areas in the same hemisphere.

Table 2-1. The distributions of the sorted neuronal activity for each monkey in motor
cortical areas.

        PP-contra  M1-contra    PMd-contra   S1-contra     SMA-contra    M1-ipsi      PMd/M1-ipsi
Belle   1-33 (33)  34-54 (21)   55-81 (27)   -             -             -            82-104 (23)
Ivy     1-49 (49)  50-139 (90)  -            -             140-192 (53)  -            -
Aurora  -          67-123 (57)  1-66 (66)    124-161 (38)  162-180 (19)  181-185 (5)  -
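The binning of firing times into 100ms non-overlapping windows described above can be sketched as follows; the spike times are hypothetical, not recorded data:

```python
import numpy as np

def bin_spike_counts(spike_times_s, duration_s, bin_width_s=0.1):
    """Count spikes of one sorted unit in non-overlapping bins (100 ms default)."""
    n_bins = int(np.ceil(duration_s / bin_width_s))
    edges = np.arange(n_bins + 1) * bin_width_s
    counts, _ = np.histogram(spike_times_s, bins=edges)
    return counts

# Hypothetical firing times (in seconds) for a single sorted unit.
spikes = np.array([0.01, 0.05, 0.33, 0.34, 0.35, 0.91])
counts = bin_spike_counts(spikes, duration_s=1.0)
# counts -> [2, 0, 0, 3, 0, 0, 0, 0, 0, 1]
```

Applying this per unit and stacking the results gives the neurons-by-time bin count matrix that all the models in this dissertation take as input.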

Behavioral Tasks

During a recording period, each primate was trained to perform particular motor

tasks. In the first experimental setup, an owl monkey (Belle) performed three-

dimensional movements to reach for food randomly placed at one of four positions on a

tray, as depicted in Fig. 2-1. In this task, the monkey placed its hand on a platform

attached to the chair. When a barrier was opened, the monkey reached for and grabbed the

food. (In table 2-1, the parenthesized value is the number of sorted neuronal activities in

the given cortical area.)

The location and orientation of the wrist of the monkey were continuously recorded using

a plastic strip with multiple fiber optic sensors (Shape Tape, Measurand, Inc.,

Fredericton, NB, Canada) [Wes00]. These signals were sampled at 200Hz.








Figure 2-1. An experimental setup of the 3D reaching task.

In the second experimental setup, the Rhesus monkeys (Aurora and Ivy) performed

a two-dimensional target reaching task (Fig. 2-2). In this task, the monkey was cued to

move the cursor on a computer screen by controlling a hand-held manipulandum in order

to reach the target. The monkey was rewarded when the cursor intersected the target. The

position of the manipulandum was continuously recorded at a 1000Hz sampling rate.

Properties of Data

BMI models are designed to receive the binned spike counts as input signals and to

predict hand position or velocity as desired signals. Before describing BMI models, it is

informative to get a picture of the characteristics of the input-output data. Therefore, we

here present several characteristics of the data which are used for all BMI models in the

remainder of this dissertation.























Figure 2-2. An experimental setup of the 2D target reaching task. The monkey moves a
cursor (yellow circle) to a randomly placed target (green circle), and is rewarded
if the cursor intersects the target.

Neuronal Firing Patterns

Firstly, examples of the binned data are illustrated in Fig. 2-3 for six sample

neurons collected from the M1 cortex of Belle. We can notice that some neurons fire more

frequently than others.


Figure 2-3. An example of the binned data. (Horizontal axis: time in ms.)

Secondly, we examine the descriptive statistics of the binned data over entire

neurons. The first statistic that we evaluate is the sparseness of the data measured by the

ratio of the number of null bins (containing no spike) to the total number of bins. As a









result, the sparseness is 85.6% for Belle's dataset, 65.2% for Ivy's, and 60.5% for Aurora's,

respectively. Then, the average and the standard deviation of the bin count for each

neuron are evaluated for the three datasets, as depicted in Fig. 2-4, which shows how these

statistics vary over the neuronal space.
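The sparseness statistic above, the ratio of null bins (bins containing no spike) to the total number of bins, can be sketched as follows on a toy bin-count matrix:

```python
import numpy as np

def sparseness(bin_counts):
    """Fraction of null bins in a neurons-by-time bin count matrix."""
    return np.mean(bin_counts == 0)

# Toy 3-neuron x 10-bin matrix; only 2 of the 30 bins are non-zero.
X = np.zeros((3, 10), dtype=int)
X[0, 2] = 4
X[2, 7] = 1
ratio = sparseness(X)  # 28/30, i.e. about 93.3% null bins
```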


Figure 2-4. The average (dot) and the standard deviation (bar) of the bin count for each
neuron of the three monkeys: (a) Belle, (b) Ivy, and (c) Aurora.

In addition, the difference of firing rates during movement and rest for a 3D

reaching task is evaluated. In order to quantify the difference, we estimate the mean firing

rate during movement and rest separately. We collect 1300-second long contiguous data

samples from Belle's dataset, and manually select 81 movement subsets from them.

The remaining parts are referred to as rest subsets. Then the mean firing rate of each subset










for movement and rest is estimated by averaging bin counts over all neurons and the time

period of the given subset. Figure 2-5 shows the resulting estimates of mean

firing rates for movement and rest. It shows that neurons tend to fire more frequently on

average during movement. However, due to the uncertainty of the segmentation between

movement and rest, these average statistics are variable and subject to change. It is also

noteworthy that the mean firing rate tends to reduce with time.

Figure 2-5. The trajectories of the estimated mean firing rates for movement (solid line)
and rest (dotted line) over the sequence of subsets.

Finally, the nonstationary characteristics of input are investigated through

observation of temporal change of the input autocorrelation matrix. The autocorrelation

matrix of the multi-dimensional input data is estimated based on the assumption of

ergodicity (see chapter 3 for details). In order to monitor the temporal change, the

autocorrelation matrix is estimated for a sliding time window (4000-sample length)

which slides by 1000 samples (100 second). For each estimated autocorrelation matrix,

the condition number and the maximum eigenvalue are computed as approximations of

the properties of the matrix. The experimental results of these quantities for three datasets

are presented in Fig. 2-6. It is observed that the properties of

the input autocorrelation matrix vary over time.
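A sketch of the sliding-window computation described above, with synthetic Poisson counts standing in for the recorded bin counts; the 4000-sample window and 1000-sample step follow the text:

```python
import numpy as np

def window_stats(X, win=4000, step=1000):
    """For each sliding window of the (time x input-dim) matrix X, estimate the
    input autocorrelation matrix R = X_w^T X_w / win and return its condition
    number and maximum eigenvalue."""
    conds, max_eigs = [], []
    for start in range(0, X.shape[0] - win + 1, step):
        Xw = X[start:start + win]
        R = Xw.T @ Xw / win
        eig = np.linalg.eigvalsh(R)      # ascending eigenvalues; R is symmetric
        conds.append(eig[-1] / eig[0])
        max_eigs.append(eig[-1])
    return np.array(conds), np.array(max_eigs)

rng = np.random.default_rng(1)
X = rng.poisson(1.0, size=(10000, 5)).astype(float)  # stand-in for bin counts
X -= X.mean(axis=0)                                  # zero-mean, as in the text
conds, max_eigs = window_stats(X)
```

On the stationary synthetic data these quantities stay nearly constant across windows; the drift seen in Fig. 2-6 for the recorded data is exactly what this plot of per-window statistics is meant to expose.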











Figure 2-6. Illustrations of the nonstationary properties of the input autocorrelation
matrix for Belle, Ivy, and Aurora: the condition number (top) and the maximum
eigenvalue (bottom) of each sliding-window estimate, plotted against window
index. The dotted lines in the bottom panels indicate the reference maximum
eigenvalue computed over the entire data samples.

Hand Movements

The hand movements of primates are mainly parameterized by the trajectories of

hand positions. We treat these trajectories as our desired signals to be predicted. Note that

the hand positions, which are sampled at 200Hz or 1000Hz, are downsampled to 10Hz to

be synchronized with the 100ms binned data. Before the investigation of the

characteristics of the desired signals, we first present sample trajectories from two

different tasks (food reaching of Belle and target reaching of Ivy) in Fig. 2-7. In the food

reaching movement (Fig. 2-7a), the trajectory approximately spans a hyper-plane in

which three specific parts of the movement, namely reach to food, food to mouth, and mouth

to rest, are placed. Figure 2-7a describes three reaching movements. In Fig. 2-7b, a 2D

trajectory in the target reaching task over a 4-second time duration is depicted. The

trajectory starts from the dot in the middle of the figure and ends at the arrow. It demonstrates that the

trajectory in this task spans the entire given 2D space and is more irregular than in 3D

food reaching.



















Figure 2-7. Sample trajectories of (a) 3D food reaching and (b) 2D target reaching
movements (axes in mm).

Now, we seek to observe the nonstationary characteristics of these trajectory

signals. The continuous wavelet transform based on a basic wavelet function, the

Daubechies wavelet (the db6 wavelet is used in this analysis) [Dau92], is performed to see

the frequency change over time. 10000-sample trajectory data from both 3D food

reaching and 2D target reaching are used for the wavelet analysis. The absolute values of the

wavelet coefficients are plotted in Fig. 2-8. From this wavelet transform, we can clearly

see the nonstationarity of the trajectory signals for both tasks.






























Figure 2-8. The db6 continuous wavelet coefficients (absolute values) of the trajectory
signals of (a) 3D food reaching, and (b) 2D target reaching, plotted against
time (ms). Darker pixels indicate larger coefficient values.














CHAPTER 3
LINEAR MODELING

In this chapter, we will present the design of adaptive linear filters for BMIs and

the standard methods to estimate the parameters.

Linear Modeling for BMIs

Consider a set of spike counts from M neurons, and a hand position vector d ∈ R^C

(C is the output dimension, C = 2 or 3). The spike count of each neuron is embedded by

an L-tap time-delay line. Then, the input vector for a linear model at a given time instance

n is composed as x(n) = [x_1(n), x_1(n-1), ..., x_1(n-L+1), x_2(n), ..., x_M(n-L+1)]^T, x ∈ R^{L·M},

where x_i(n-j) denotes the spike count of neuron i at time instance n-j. A linear model

estimating hand position at time instance n from the embedded spike counts can be

described as


y_c = \sum_{i=1}^{M} \sum_{j=0}^{L-1} x_i(n-j) w_{ij}^c + b_c        (3-1)

where y_c is the c-coordinate of the hand position estimated by the model, w_{ij}^c is a weight

on the connection from x_i(n-j) to y_c, and b_c is a bias for the c-coordinate. The bias can be

removed from the model when we normalize x and d such that E[x] = 0 ∈ R^{L·M} and

E[d] = 0 ∈ R^C, where E[·] denotes the mean operator. Note that this model can be

regarded as a combination of three separate linear models estimating each coordinate of

hand position from identical input. In a matrix form, we can rewrite (1) as

y = W^T x        (3-2)










where y is a C-dimensional output vector, and W is a weight matrix of dimension

(L·M)×C. Each column of W consists of [w_{10}^c, w_{11}^c, ..., w_{1,L-1}^c, w_{20}^c, w_{21}^c, ..., w_{M,L-1}^c]^T.

Fig. 3-1 shows the topology of the linear model for the BMI application, which will

be kept basically unchanged in the remainder of this dissertation. The most significant

differences will be in the number of parameters and in the way the parameters w_{ij} of the

model are computed from the data.

All the models are applied to estimate the 3D or 2D hand positions using L = 10

taps, M = 99 neurons (Belle, after eliminating the ones that do not fire during the training

parts of the recordings) for the food reaching task, and M = 192 (Ivy) or 185 (Aurora) for the

target reaching task. The length of the time delays (L) is determined based on the

preliminary BMI study of the correlation between time lags and hand movements in

Wessberg et al. [Wes00], where the neuronal firings up to 1 second before current hand


Figure 3-1. The topology of the linear filter designed for BMIs in the case of the 3D
reaching task. x_i(n) is the bin count input from the ith neuron (M neurons in
total) at time instance n, and z^{-1} denotes a discrete time delay operator. y_c(n) is
the hand position in the c-coordinate, w_{ij}^c is a weight on x_i(n-j) for y_c(n), and
L is the number of taps.









movement were found to be significantly correlated with the movement. The sizes of the training and the

testing sets are 10,000 samples (~16.7 minutes) and 3,000 samples (~5 minutes) for all

the models and the three datasets, respectively. The size of the training set is empirically

chosen as a compromise between nonstationarity and the quality of

estimation: a longer training set can improve the estimation of parameters, but increases the

chance of including more nonstationary characteristics of the data in the estimation. The weights

are fixed after adaptation, and the outputs of the model are produced for novel testing

samples. Performance of the model is evaluated based on these testing outputs with

respect to generalization.

The following quantitative performance measures are used to evaluate the accuracy

of the estimation:

1. The correlation coefficient (CC) quantifies the linear relationship between the
estimated and actual hand trajectories, defined as

CC = \frac{C_{dy}}{s_d s_y}        (3-3)

where C_{dy} denotes the covariance between two variables d and y, and s_d (or s_y)
denotes the standard deviation of d (or y). In our evaluation, C_{dy} is the covariance
between the actual hand trajectory (d) and its estimation by the model (y).

2. The signal to error ratio (SER) is the ratio of the power of the actual hand trajectory
signal to the power of the error of a model, defined as

SER = \frac{\sum_{k=1}^{K} d(k)^2}{\sum_{k=1}^{K} e(k)^2}        (3-4)

where d(k) and e(k) are the actual hand signal and the error at time instance k, and
K is the size of the window in which SER is computed.

3. The cumulative error metric (CEM) estimates the cumulative distribution function
of the error radius, defined as

CEM(r) = \Pr(\lVert \mathbf{e} \rVert \le r)        (3-5)

So, CEM(r) is the estimated probability that the radius of the error vector is less
than or equal to a certain value r.
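The three measures can be sketched as follows on toy one-dimensional trajectories (for CEM, two-dimensional error vectors are formed from the per-coordinate errors); the numbers are illustrative only:

```python
import numpy as np

def cc(d, y):
    """Correlation coefficient between desired and estimated trajectories (3-3)."""
    return np.cov(d, y)[0, 1] / (np.std(d, ddof=1) * np.std(y, ddof=1))

def ser(d, e):
    """Signal-to-error ratio: actual trajectory power over error power (3-4)."""
    return np.sum(d ** 2) / np.sum(e ** 2)

def cem(err_vectors, r):
    """Estimated probability that the error-vector radius is <= r (3-5)."""
    radii = np.linalg.norm(err_vectors, axis=1)
    return np.mean(radii <= r)

d = np.array([0.0, 1.0, 2.0, 3.0])               # toy desired trajectory
y = np.array([0.1, 0.9, 2.2, 2.9])               # toy model estimate
e = d - y
errs = np.stack([e, np.zeros_like(e)], axis=1)   # toy 2D error vectors
```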










We compute CC and SER for a short sliding time window in order to see if a given

model predicts better for a particular part of the trajectory. The size of the window is

determined empirically. For the food reaching data, the size is set to 4 seconds, which is

approximately the duration of a single reaching movement. However, the duration of movement

cannot be estimated for target reaching data since there is no apparent rest period between

consecutive reaching movements. Therefore, the size of the window for the target

reaching data is set long enough (1 minute) to make the computation of CC and SER

reliable.

For comparison between different models, the averages of CC and SER from every

window are computed respectively. These computations are conducted separately for

each coordinate of hand position. Furthermore, we divide the evaluation results of food

reaching into two modes: movement and rest. In each mode, the averages of CC and SER

over three coordinates are used for evaluation instead of individual CC and SER in each

coordinate. For target reaching, where the separation between movement and rest is not

apparent, evaluation is executed separately for each coordinate.

The three performance measures introduced here complement one another; CC

measures the linear covariance between the actual and estimated trajectories, thus providing an

evaluation of tracking ability, but it does not measure the bias of the estimation. This

shortcoming is supplemented by SER, which is based on error measurement. However, SER

has the drawback of being sensitive to the coordinate system, which is calibrated

arbitrarily. For instance, with similar error power, SER becomes relatively large

when the magnitude of the actual trajectory becomes large, thus increasing the signal power.

However, the magnitude of hand position does not possess any practical meaning. This










problem can be counterbalanced by CEM, in which only the radius of the error vector is

considered. CEM also provides a statistical tool for performance measurement that is especially

useful for comparison of models on average. Hence, we can state that the three

measures jointly allow more comprehensive performance evaluation than using

individual measures separately.

The Wiener Filter

The transfer function from the neural bin count to hand position can be estimated by linear adaptive filters, among which the Wiener filter plays a central role [Hay96]. The weight matrix of the Wiener filter for a MIMO system is estimated by the Wiener-Hopf solution as

    W_Wiener = R^{-1} P.    (3-6)

R is the correlation matrix of the neural spike inputs with dimension (L·M)×(L·M),

    R = [ r_11  r_12  ...  r_1M
          r_21  r_22  ...  r_2M
          ...   ...   ...  ...
          r_M1  r_M2  ...  r_MM ],    (3-7)

where r_ij is the L×L cross-correlation matrix between neurons i and j (i ≠ j), and r_ii is the L×L autocorrelation matrix of neuron i. P is the (L·M)×C cross-correlation matrix between the neuronal bin count and hand position,




    P = [ p_1  p_2  ...  p_C ],    (3-8)

where the c-th column p_c stacks the cross-correlation vectors p_ci between each neuron i and the c-coordinate of hand position. The estimated weights W_Wiener are optimal under the assumption that the









error is drawn from a white Gaussian distribution and the data are stationary. The predictor W_Wiener minimizes the mean square error (MSE) cost function,

    J = E[ ||e||^2 ],   e = d − y.    (3-9)

Each sub-block matrix r_ij can be decomposed as

    r_ij = [ r_ij(0)    r_ij(−1)   ...  r_ij(1−L)
             r_ij(1)    r_ij(0)    ...  r_ij(2−L)
             ...        ...        ...  ...
             r_ij(L−1)  r_ij(L−2)  ...  r_ij(0) ],    (3-10)

where r_ij(τ) represents the correlation between neurons i and j at time lag τ. These correlations, which are the second-order moments of the discrete-time random processes x_i(m) and x_j(k), are functions of the time difference (m−k) under the assumption of wide-sense stationarity (m and k denote discrete time instances for each process). Assuming that the random process x_i(k) is ergodic for all i, we can utilize time-average statistics to estimate the correlation. In this case, the estimate of the correlation between two neurons, r_ij(m−k), can be obtained by

    r_ij(m−k) = E[x_i(m)x_j(k)] ≈ (1/(N−1)) Σ_n x_i(n) x_j(n−(m−k)),   ∀ i, j = 1, ..., M.    (3-11)

The cross-correlation vector p_ci can be decomposed and estimated in the same way. r_ij(τ) is estimated using equation (3-11) from the neuronal bin count data, with x_i(n) and x_j(n) being the bin counts of neurons i and j, respectively. From equation (3-11), it can be seen that r_ij(τ) is equal to r_ji(−τ). Since these two correlation estimates are positioned on opposite sides of the diagonal entries of R, the equivalence between r_ij(τ) and r_ji(−τ) leads to the symmetry of R. The symmetric matrix R can then be inverted efficiently by

using the Cholesky factorization. This factorization reduces the computational complexity of inverting R from O(N^3) to O(N^2), where N is the number of parameters.
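The construction of R and P from the embedded input and the Cholesky-based solve can be sketched as follows (a sketch only: the data layout, the sample estimates of R and P, and the tiny diagonal jitter added for numerical safety are assumptions, not part of the text's estimation procedure):

```python
import numpy as np

def wiener_weights(X, d):
    """Wiener-Hopf solution W = R^{-1} P (3-6). X is an N x (L*M) matrix
    of embedded neuronal bin counts (N samples); d is an N x C matrix of
    hand positions. R is symmetric, so R W = P is solved through its
    Cholesky factor with two triangular solves."""
    R = X.T @ X                       # (L*M) x (L*M) correlation matrix
    P = X.T @ d                       # (L*M) x C cross-correlation matrix
    C = np.linalg.cholesky(R + 1e-9 * np.eye(R.shape[0]))  # jitter: an assumption
    y = np.linalg.solve(C, P)         # forward solve: C y = P
    return np.linalg.solve(C.T, y)    # back solve: C^T W = y
```

Here `np.linalg.solve` is used on the triangular factors for brevity; a dedicated triangular solver would exploit the structure more directly.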

Notice that R must be a nonsingular matrix to obtain the solution in (3-6). However, if the condition number of R is very large, so that R is close to a singular matrix, then W_Wiener may be inadequately determined. This usually happens when the number of samples is too small, or when the input variables are linearly dependent on each other. In such a case, we can reduce the condition number by adding an identity matrix multiplied by some constant to R before inversion. This procedure is called ridge regression in statistics [Hoe70], and the solution obtained by this procedure turns out to minimize a cost function that linearly combines the one in (3-9) with a regularization term. The details will be discussed in chapter 4. In our estimation of the Wiener solution, however, we do not employ this regularization scheme.

Figures 3-2 and 3-3 display the Hinton diagrams of the weights of the Wiener filter obtained by (3-6) for food reaching and target reaching, respectively. Each column of W_Wiener (i.e., the weight vector of the Wiener filter for each coordinate) is rearranged into a matrix form to show the spatio-temporal structure of the weight vectors. In this matrix form, the neuron indices are aligned along the x-axis and the time lags along the y-axis. Note that the first row of the matrix corresponds to the zero lag (the instantaneous neuronal bin counts), followed by successive rows corresponding to increasing lags (up to nine). In the Hinton diagram, white pixels denote positive signs, while black pixels denote negative signs. The size of a pixel indicates the magnitude of the corresponding weight.

From the Hinton diagram, we can probe the contribution of individual neurons to the output of the Wiener filter. For this purpose, the weights represented in the Hinton diagrams are obtained from inputs in which each neuronal bin count time series x_i(n) is normalized to have unit variance. The value of a weight can then represent the sensitivity of the filter output to the corresponding input [San03a]. Also, we can see the sign of the

correlation between a particular neuronal input and the output. For instance, the weights

for neurons indexed by 5, 7, 21, 23, and 71 exhibit relatively large positive values for

food reaching (see Fig. 3-2), indicating that those neuronal activities are positively

correlated with the output. On the other hand, the weights for neurons 26, 45, 74, and 85

exhibit large negative values indicating the negative correlation between neuronal inputs

and the output. There are also some neurons for which the weights have both positive and negative values (e.g., neurons 14 and 93). From these diagrams, it is possible to examine the significant time lags for each neuron in terms of their contribution to the filter output. For

instance, in the case of neuron 7 or 93, the recent bin counts seem to be more correlated

with the current output. However, for neuron 23 or 74, the delayed bin counts seem to be

more correlated with the current output. Similar observations can be made for target

reaching in Fig. 3-3.




Figure 3-2. The Hinton diagram of the weights of the Wiener filter for food reaching.





















Figure 3-3. The Hinton diagram of the weights of the Wiener filter for target reaching.

Stochastic Gradient Learning

The underlying assumption of the Wiener filter is that the statistics of the data are time-invariant. In a nonstationary environment where the statistics of the data vary in time, the Wiener filter only uses the average statistics to determine the weights. The normalized least mean squares (NLMS) algorithm, a modified version of the least mean squares (LMS) algorithm, can train the weights effectively for nonstationary inputs by varying the learning rate [Hay96]. It utilizes a stochastic estimate of the power of the input signal to adjust the learning rate at each time instance. The weights at a given time instance n are updated by NLMS as


    w_NLMS(n+1) = w_NLMS(n) + (η / (γ + ||x(n)||^2)) e_c(n) x(n),    (3-12)

where η satisfies 0 < η < 2, γ is a small positive constant, e_c(n) is the output error for the c-coordinate, and x(n) is the input vector. If we let η(n) = η/(γ + ||x(n)||^2), then the NLMS algorithm can be viewed as the LMS algorithm with a time-varying learning rate, such that

    w_NLMS(n+1) = w_NLMS(n) + η(n) e_c(n) x(n).    (3-13)









Although for stationary data the weights in NLMS converge in the statistical sense to the same solution as the Wiener filter, the solution will be different for nonstationary data.

The weights of the linear filter for BMIs are estimated by NLMS with the settings η = 0.01 and γ = 1. In the empirical analysis of the resulting outputs of this filter, we observed that for food reaching the accuracy of the estimation is improved compared to the Wiener filter, especially during rest (see the details of the results in chapter 6). It means

that the weights found a better compromise between the two very different characteristics

of movement and rest. This improvement is achieved by the update rule (3-12): the weights in NLMS are updated with a relatively high learning rate during rest, since the total firing count, and thus the input power, increases during movement (see Fig. 2-5). Thus, for the class

of motor behaviors in which movement periods are separated by rest, the NLMS

algorithm captures more information about rest positions than the Wiener filter.
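The update (3-12) amounts to a few lines of code; the sketch below uses a single output coordinate, and the larger step size in the example run is only for a quick synthetic check (η = 0.01, γ = 1 were the settings used here):

```python
import numpy as np

def nlms(X, d, eta=0.01, gamma=1.0):
    """Train a linear filter with NLMS: each update uses the step
    eta / (gamma + ||x(n)||^2), so the effective learning rate rises
    when the instantaneous input power (total firing) drops."""
    w = np.zeros(X.shape[1])
    for x, target in zip(X, d):
        e = target - w @ x                       # output error e_c(n)
        w = w + (eta / (gamma + x @ x)) * e * x  # update rule (3-12)
    return w
```

One such filter would be trained per hand-position coordinate.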

Other Linear Modeling

For comparison with other linear models being proposed for BMIs, a Kalman filter

is designed and its prediction performance is evaluated for the same data used in this

dissertation. The Kalman filter, which estimates the internal state for a linear dynamical

system [Kal60] and produces a generative model for the data, has been proposed to learn

the dynamical nature of the biological motor system in BMIs [Wu03, San02b]. In the

Kalman filtering framework, the system state includes the hand position, velocity and

acceleration, and the observation includes the neuronal bin count. Based on the

assumption of the linear relationship (with additive Gaussian noises) between the state

and the observation, as well as the states at current and previous time instances, the










Kalman filter recursively estimates the hand kinematics in real time from cortical neurons. Although the system parameters representing the linear relationship are fixed after training, the Kalman filter can adjust its gain to track the time-varying nature of motor systems.

We briefly review the method of the Kalman filter used for BMIs. The linear

dynamic equation for the state is given by

    z(n+1) = Az(n) + ω(n),    (3-14)

where z(n) is the state vector for the hand kinematics, z(n) = [p_x(n) p_y(n) v_x(n) v_y(n) a_x(n) a_y(n)]^T; p_c(n) denotes the hand position for the c-coordinate, v_c(n) the velocity, and a_c(n) the acceleration, at a time instance n. For food reaching, p_z(n), v_z(n), and a_z(n) are added to the state vector. ω(n) is a process noise vector following a Gaussian distribution with a zero-mean vector and a covariance matrix Ω. The state-output mapping equation is given by

    x(n) = Hz(n) + u(n),    (3-15)

where x(n) is the instantaneous neuronal bin count vector (binned by a 100 ms non-overlapping time window). Note that Wu et al. designed the same Kalman filter with a different window size (70 ms) [Wu03]. u(n) is a measurement noise term following a Gaussian distribution with a zero-mean vector and a covariance matrix Q. Given the training set, A and H are determined by least squares (LS), which solves the following optimization problems,

    A = argmin_A Σ_{n=1}^{N−1} ||z(n+1) − Az(n)||^2,    (3-16)

    H = argmin_H Σ_{n=1}^{N} ||x(n) − Hz(n)||^2.    (3-17)










Given A and H, the estimates of the covariance matrices Ω and Q can be obtained by

    Ω = (1/(N−1)) Σ_{n=1}^{N−1} (z(n+1) − Az(n))(z(n+1) − Az(n))^T,    (3-18)

    Q = (1/N) Σ_{n=1}^{N} (x(n) − Hz(n))(x(n) − Hz(n))^T.    (3-19)

With the model (A, H, Ω, Q) obtained, the Kalman filter estimates the state of the hand kinematics from the novel neuronal bin count vectors (the test data) in real time. The state estimate ẑ(n) and the Kalman gain matrix K(n) are updated at each time instance by the following recursion,

    P⁻(n) = AP(n−1)A^T + Ω,    (3-20)

    K(n) = P⁻(n)H^T (HP⁻(n)H^T + Q)^{−1},    (3-21)

    ẑ(n) = Aẑ(n−1) + K(n)(x(n) − HAẑ(n−1)),    (3-22)

    P(n) = (I − K(n)H)P⁻(n).    (3-23)

Note that the error covariance matrix P and the state vector estimate ẑ must be initialized before starting this recursion.
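The recursion (3-20)-(3-23) can be sketched directly (a sketch under the assumption that the model matrices have already been fit by (3-16)-(3-19); the scalar system in the usage check is hypothetical):

```python
import numpy as np

def kalman_decode(X, A, H, Omega, Q, z0, P0):
    """Run the Kalman recursion (3-20)-(3-23): predict the state and
    error covariance, compute the gain, then correct with the new
    neuronal observation x(n)."""
    z, P = z0, P0
    states = []
    for x in X:
        P_pred = A @ P @ A.T + Omega            # (3-20) predicted covariance
        S = H @ P_pred @ H.T + Q                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # (3-21) Kalman gain
        z = A @ z + K @ (x - H @ (A @ z))       # (3-22) corrected state
        P = (np.eye(len(z)) - K @ H) @ P_pred   # (3-23) corrected covariance
        states.append(z)
    return np.array(states)
```

For a constant noiseless observation the estimate settles onto the observed value within a few dozen steps, as the gain reaches its steady state.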















CHAPTER 4
REGULARIZED LINEAR MODELING

In chapter 3, we demonstrated the design of linear filters that can be adapted for BMI applications. Despite the intrinsic sophistication of the BMI system, the simple linear filter (which merely combines the weighted bin count inputs) could estimate the primate's hand position fairly well, especially showing the ability to track the low-frequency trajectory. Based on this fact, we seek to improve the performance of linear models by importing advanced learning techniques. Among those, a class of regularization methods is preferred since it yields smoother function approximation in order to improve the generalization performance of BMI models.

In this chapter, we propose to use three different regularization approaches. The first approach reduces the input space dimension using subspace projection and subsequently operates the linear filter in the subspace. The second approach reduces the filter order in each neuronal channel by employing the gamma delay line. The third approach places constraints on the model parameter space to reduce the effective number of parameters. We will discuss the methodology, implementation, and analysis of these regularization approaches in this chapter.

Dimension Reduction Using Subspace Projection

One of the challenges in the design of decoding models for BMIs is that some neurons' firings are not substantially modulated during task performance, and they only add noise to the multi-channel input data. In addition, some neurons' firings are correlated with each other; thus it may be advantageous to blend these inputs to improve model performance. Subspace projection, which can reduce the noise and blend correlated input signals together, may curtail unnecessary firing signals through a proper projection matrix. It also reduces the number of degrees of freedom in the multi-channel data, and consequently decreases the variance of the model. Here, we introduce a hybrid subspace projection method derived by combining the criteria of principal component analysis (PCA) and partial least squares (PLS). We then design the subspace Wiener filter based on this hybrid subspace projection for BMIs.

A Hybrid Subspace Projection

PCA, which preserves maximum variance in the data, has been widely adopted as a projection method [Hay96b]. The projection vector w_PCA is determined by maximizing the variance of the projection outputs as

    w_PCA = argmax_w J_PCA(w) = E[ |x^T w|^2 ] = w^T R_s w,    (4-1)

where R_s is the input covariance matrix computed over the neuronal space only (it is an M×M matrix, where M is the number of neurons), and x is an M×1 instantaneous neuronal bin count vector. It is well known that w_PCA turns out to be the eigenvector of R_s corresponding to the largest eigenvalue. An M×S projection matrix constructing an S-dimensional subspace then consists of the S eigenvectors corresponding to the S largest eigenvalues. However, PCA does not exploit information in the joint space of both the input and the desired response. This means that there may be directions with large variance that are not important for describing the correlation between input and desired response (e.g., some neuronal modulations related to the monkey's anticipation of reward might be substantial, but less useful for the direct estimation of movement parameters), yet such directions will be preserved by the PCA decomposition.









One subspace projection method that constructs the subspace in the joint space is PLS, which seeks the projection maximizing the cross-correlation between the projection outputs and the desired response [Jon93]. Given an input vector x and a desired response d, a projection vector of PLS, w_PLS, maximizes the following criterion,

    w_PLS = argmax_w J_PLS(w) = E[(x^T w)d] = w^T E[xd] = w^T p,    (4-2)

where p is defined as the M×1 cross-correlation vector between x and d. The consecutive orthogonal PLS projection vectors are computed using the deflation method [Hay96b].

There have been efforts to find a better projection that combines the properties of PCA and PLS. Continuum regression (CR), introduced by Stone and Brooks [Sto90], attempted to blend the criteria of ordinary least squares (OLS), PCA, and PLS. Recently, we have proposed a hybrid criterion function similar to CR, together with a stochastic learning algorithm to estimate the projection matrix [Kim03a]. The learned projection can be PCA, PLS, or a combination of the two. The hybrid criterion function combining PCA and PLS is given by

    J(w, λ) = (w^T p)^{2λ} (w^T R_s w)^{1−λ} / (w^T w),    (4-3)

where λ is a balancing factor between PCA and PLS. This criterion covers the continuous range between PLS (λ = 1) and PCA (λ = 0).1 Since the log function is monotonically increasing, the criterion can be rewritten as

    log(J(w, λ)) = λ log(w^T p)^2 + (1−λ) log(w^T R_s w) − log(w^T w).    (4-4)


1 The CR covers OLS, PLS and PCA. However, since we are only interested in the case when subspace
projection is necessary, OLS can be omitted in our criterion.









We seek to maximize this criterion for 0 ≤ λ ≤ 1. There are two learning algorithms derived in [Kim03a] to find w (one based on gradient descent, the other on a fixed-point algorithm), but we opt for the fixed-point learning algorithm here due to its fast convergence and independence of a learning rate. The estimate of w at the (k+1)th iteration of the fixed-point algorithm is given by

    w(k+1) = (1−τ)w(k) + τ [λp + (1−λ)R_s w(k)] / ||λp + (1−λ)R_s w(k)||,    (4-5)

with a random initial vector w(0). τ (0 < τ ≤ 1) is a smoothing factor introduced to avoid oscillating behavior near convergence. The convergence rate is affected by τ, which produces a tradeoff between convergence speed and accuracy; the fastest convergence is obtained with τ = 1. The consecutive projection vectors are also learned by the deflation method to form each column of a projection matrix W.

After projection onto the subspace by W, we embed the signal in each channel with an L-tap delay line and design the Wiener filter to estimate the hand position. Figure 4-1 illustrates the overall diagram of the subspace Wiener filter.
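A sketch of the fixed-point update (4-5) for a single projection vector follows (the per-step renormalization of the blended direction reflects our reading of the update above, so treat it as an assumption; with λ = 0 and τ = 1 it reduces to the standard power iteration for the top eigenvector of R_s):

```python
import numpy as np

def hybrid_projection(Rs, p, lam, tau=1.0, n_iter=200, seed=0):
    """Fixed-point estimate of one hybrid PCA/PLS projection vector:
    the update direction blends the cross-correlation vector p (PLS,
    lam = 1) with R_s w (PCA, lam = 0), renormalized at each step."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(len(p))
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        direction = lam * p + (1.0 - lam) * Rs @ w
        direction /= np.linalg.norm(direction)
        w = (1.0 - tau) * w + tau * direction   # smoothed fixed-point step
        w /= np.linalg.norm(w)
    return w
```

Consecutive vectors would then be obtained by deflating R_s and p, as in the text.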








Figure 4-1. The overall diagram of the subspace Wiener filter. y(n) denotes the estimated hand position vector. There are L−1 delay operators (z^{-1}) for each subspace channel.










Design of a Decoding Model Using the Subspace Wiener Filter

The hold-out cross-validation method [Bis95] is utilized to determine the optimal subspace dimension (S) and λ simultaneously. 10,000 consecutive data samples are divided into 9,000 training and 1,000 validation samples for both the food reaching and target reaching tasks. The MSE over the validation samples is computed after training for each pair (S_i, λ_j), where S_i ∈ {20, 21, ..., 60} and λ_j ∈ {0, 0.1, ..., 1}. In Fig. 4-2, the contour map of the computed MSE is depicted. The minimum MSE is found at (S, λ) = (37, 0.9) for food reaching and (S, λ) = (44, 0.6) for target reaching, respectively. The validation MSE also tends to be smaller for larger λ in the lower subspace dimensions, while the MSE levels are rather flat in the higher subspace dimensions. This indicates that PLS plays a more important role in building a better subspace Wiener filter in the lower subspace dimensions.




Figure 4-2. The contour map of the validation MSE for (a) food reaching, and (b) target
reaching. The darker lines indicate lower MSE levels.

To further investigate the difference between the subspaces obtained by PCA and PLS, the first three projection vectors are estimated by setting λ = 0 or 1 in (4-5), as presented in Fig. 4-3. Note that PLS yields separate vectors corresponding to each hand position coordinate since it utilizes the desired response, while PCA needs only one projection









regardless of coordinates. In food reaching, the projection vectors of PCA have large weights on the neurons that fire frequently. For instance, the neurons indexed 42, 57, and 93 are empirically found to have the largest firing counts. Since the neural firing data is sparse, PCA attempts to build a subspace with frequently firing neurons in order to preserve the variance. On the other hand, the PLS projection puts larger weights on different neurons that do not fire very frequently, such as the neurons indexed 7 and 23. From the Hinton diagram described in the previous chapter (see Fig. 3-2), these neurons were found to contribute significantly to the output of the Wiener filter designed for BMIs. Therefore, PLS is able to utilize the information from important neurons that do not fire very frequently by exploiting the information in the joint space. For target reaching, we can also observe that more neurons are involved in the projection vectors of PLS than of PCA. The neurons with larger weights in the PCA projection, again, are observed to fire more frequently. It is interesting to observe that for target reaching, the subspace dimension obtained from the cross-validation is of the same order as the number of neurons obtained in the neuron dropping analysis performed in Sanchez et al. [San03b]. In fact, the number of important neurons, for which the correlation coefficient between model outputs and desired hand trajectories is maximized, is 35, which is close to the subspace dimension of 44.

The empirical measurements of performance on the test data using the subspace Wiener filter with the above parameters demonstrate that the generalization performance of the subspace Wiener filter for both tasks reaches a slightly higher level than that of the Wiener filter or the linear filter trained by NLMS (see chapter 6). We expect, however, much greater improvements from the subspace projection methods for larger datasets













Figure 4-3. The first three projection vectors in PCA for (a) food reaching and (c) target reaching, and in PLS for (b) food reaching and (d) target reaching, respectively.


(more than 200 neurons; Carmena, J.M., Lebedev, M.A., & Nicolelis, M.A.L.,


unpublished observations) and anticipate that these techniques will be important in the

foreseeable future when the number of simultaneously recorded neurons surpasses 1,000.


Parsimonious Modeling in Time Using the Gamma Filter

The large number of parameters in decoding models is caused not only by the number of neurons but also by the number of time delays required to capture the history of the neuronal firings over time. Although we use a 10-tap delay line in this study, the size of the delay line can vary depending on the bin size (e.g., if we use a 50 ms time bin, then the number of time lags increases to 20). Hence, it is desirable to represent the temporal patterns of the neuronal data in a more efficient way to reduce the number of taps.

The linear filter described in the previous chapters can be decomposed into multiple finite impulse response (FIR) filters, one for each neuron. An FIR filter has the advantages of trivial stability and easy adaptation. However, the length of the impulse response and the filter order are equivalent in an FIR filter. Hence, when a problem requires a deep memory and a small number of parameters, an infinite impulse response (IIR) system is more appropriate. However, the stability issue in adaptation and the non-convex error surface of an IIR filter pose nontrivial challenges for practical use. A generalized feedforward filter provides a signal processing framework to incorporate both FIR and IIR characteristics into a single system by employing a local feedback structure [Pri93]. As shown in Fig. 4-4, the input signal is delayed at each tap by a delay operator defined by a specific transfer function G(z). Note that when G(z) = z^{-1}, it becomes an FIR filter. The transfer function of the overall system, H(z), is stable whenever G(z) is stable, since

    H(z) = Σ_{k=0}^{K} w_k (G(z))^k,    (4-6)

where K is the number of taps. It has been shown that a generalized feedforward filter can provide trivial stability and easy adaptation while decoupling the memory depth from the filter order.













Figure 4-4. An overall diagram of a generalized feedforward filter [Pri93]. x_0(n) is the instantaneous input, and y(n) is the filter output.

The gamma filter is a special case of the generalized feedforward filter with G(z) = μ/(z − (1−μ)), where μ is a feedback parameter. The impulse response of the transfer function from the input to the kth tap, denoted g_k(n), is given by

    g_k(n) = Z^{-1}{(G(z))^k} = Z^{-1}{(μ/(z − (1−μ)))^k} = C(n−1, k−1) μ^k (1−μ)^{n−k} u(n−k),    (4-7)

where Z^{-1}(·) indicates the inverse z-transform, C(·,·) the binomial coefficient, and u(n) the step function. When μ = 1, the gamma filter becomes an FIR filter. The stability of the gamma filter in adaptation is guaranteed for 0 < μ < 2 due to the local feedback structure.

The memory depth D of a Kth-order gamma filter with feedback parameter μ is given by

    D = K/μ for μ < 1, or D = K/(2−μ) for μ > 1.    (4-8)

If we define the resolution R = μ, the property of the gamma delay line can be described as

    K = D × R for μ < 1, or K = D × (2 − R) for μ > 1.    (4-9)

This property shows that the gamma filter decouples the memory depth from the filter order by adjusting the feedback parameter (μ). In the case of μ = 1 (i.e., the FIR filter), the








resolution is maximized whereas the memory depth is minimized for a given filter order. But this choice sometimes results in overfitting when the signal to be modeled requires more time delays than the number of descriptive parameters. Therefore, the gamma filter with a proper choice of the feedback parameter can avoid overfitting through its decoupled memory structure.
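The gamma delay line itself is a one-line recursion per tap. The sketch below propagates an input through K taps; the recursive form x_k(n) = (1−μ)x_k(n−1) + μ x_{k−1}(n−1) is the standard time-domain realization of G(z) = μ/(z − (1−μ)), stated here as an assumption since the text gives only the transfer function:

```python
import numpy as np

def gamma_delay_line(x, K, mu):
    """Propagate a signal through a K-tap gamma delay line: each tap is
    a first-order lowpass of the previous one,
        x_k(n) = (1 - mu) * x_k(n-1) + mu * x_{k-1}(n-1),
    so mu = 1 reduces to a plain FIR tap delay line."""
    n = len(x)
    taps = np.zeros((n, K + 1))
    taps[:, 0] = x                       # tap 0 is the raw input
    for t in range(1, n):
        for k in range(1, K + 1):
            taps[t, k] = (1 - mu) * taps[t - 1, k] + mu * taps[t - 1, k - 1]
    return taps
```

The tap outputs would then feed the adaptive weights in place of the ordinary delay-line regressors.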

The tap weights can be updated using NLMS, so the computational complexity is of the same order as for FIR filters. The feedback parameter μ can also be adapted from the data. However, instead of adaptively learning μ, we can search for the best combination of K and μ using cross-validation. In the same way as in the previous section, the MSE on a validation set is computed for each pair (K_i, μ_j), where K_i ∈ {2, 3, ..., 10} (note that we ignore the case K = 1, which implements a memoryless process) and μ_j ∈ {0.1, 0.2, ..., 1.9}. The number of samples is 9,000 for training and 1,000 for validation. The contour of the validation MSE is shown in Fig. 4-5. The minimum MSE is achieved at (K, μ) = (4, 0.3) for food reaching and (K, μ) = (10, 1.2) for target reaching, respectively.

The memory depth estimated by this empirical method becomes D ≈ 13 for the food reaching task and D = 12.5 for the target reaching task. The savings in the number of parameters are 60% (from 3,120 to 1,248) for the food reaching task. It appears that the temporal resolution of the filter (R) for target reaching is larger than that for food reaching: R = 0.3 for food reaching and 0.8 for target reaching, respectively. This might indicate that the relatively irregular target reaching movement requires a finer temporal resolution. The generalization performance of the gamma filter with the optimized K and












Figure 4-5. The contour maps of the validation MSE computed at each grid point {K_i, μ_j} for (a) food reaching, and (b) target reaching. The darker lines denote lower MSE levels.

μ is evaluated on the novel test data. The empirical results show that the gamma filter exhibits slightly better performance than both the Wiener filter and the FIR filter trained by NLMS (see chapter 6).

Regularization by Parameter Constraints

There have been numerous efforts in model selection to deal with the bias-variance dilemma [Gem92]. One of them is pruning, which seeks to eliminate unnecessary parameters by imposing constraints on the model parameter space (see [Ree93] for a review). Among many pruning techniques, weight decay has been widely used due to its simplicity and fair performance [Kro92]. Weight decay is based on an error cost function to which an additional penalty term on the parameters is added. This penalty restricts the L2-norm of the parameter vector and is balanced against the MSE cost by a regularization parameter. Although weight decay originated in the neural networks field, it shares the same cost function with a statistical method called ridge regression [Hoe70]. A difference is that ridge regression provides an analytical solution, whereas weight decay provides an iterative solution. Hence, understanding ridge regression may









give us a better appreciation of weight decay. One of the interesting features of ridge regression is its link to subspace projection, especially PCA. This feature lets us see in which directions of the input space ridge regression (or weight decay) prunes more. This property will be reviewed in more detail shortly.

Ridge regression belongs to a class of shrinkage methods in statistical learning. As stated earlier, it employs the L2-norm penalty. However, recent studies in statistical learning have revealed that in many applications the L1-norm penalty provides a better shrinkage solution than the L2-norm penalty [Has01]. LASSO (Least Absolute Shrinkage and Selection Operator) has been a prominent algorithm among the L1-norm based shrinkage methods [Tib96]. However, its implementation is computationally complex. LAR (Least Angle Regression) has recently been proposed by Efron et al., providing a framework that incorporates LASSO and forward stagewise selection [Efr04]. With LAR, the computational complexity of the learning algorithm can be significantly reduced.

It is notable that we have already applied this class of regularization for BMIs using NLMS, since NLMS can be viewed as the solution to a constrained optimization problem [Hay96a]. In fact, the NLMS algorithm described in (3-12) is the solution to the following problem:

    minimize ||w(n+1) − w(n)||^2
    subject to d(n) − w(n+1)^T x(n) = 0,    (4-10)

for a given desired response d(n) and input vector x(n) [Dou94]. It has also been shown in [Slo93] that NLMS is the solution to the following optimization problem:

    minimize |d(n) − w(n+1)^T x(n)|^2 subject to ||w(n+1) − w(n)||^2 ≤ μ,    (4-11)









where μ is the step size. In the NLMS algorithm, the weights are updated such that the change of the weight vectors is minimized. The NLMS algorithm can therefore be viewed as the solution to an error minimization problem with a constraint on the difference between successive weight updates.

In this section, we review statistical shrinkage methods and their relationship with subspace projection. Then, the application of ridge regression and weight decay to BMIs is investigated. Finally, the properties of the LAR algorithm and its application to BMIs are discussed.

Review of Shrinkage Methods

Here, we review the basic concepts of coefficient shrinkage methods. The link between subspace projection and shrinkage methods is then illustrated. Various shrinkage methods are finally presented both in a geometric view and in a Bayesian framework.

Shrinkage methods

Consider a constrained minimization problem for a given input vector x and a desired output d such that

    ŵ = argmin_w E[ (d − w^T x)^2 ]
    subject to ||w||^2 ≤ t,    (4-12)

where w is the linear model parameter vector and ŵ is the optimal solution. This modeling technique is called ridge regression. When there are many correlated input variables in a linear model, the estimated weights can be poorly determined, with high variance. For instance, the effect of a large positive weight on one input variable can be canceled by a large negative weight on another, correlated input variable. If we restrict the size of the weights as in (4-12), such a problem can be effectively prevented. The other









motivation for ridge regression is to make the input autocovariance matrix nonsingular even if it is not of full rank. Let X be an N×L input matrix in which each row represents an observation vector (x in equation 4-12), and let d be an N×1 desired output vector. N indicates the number of observations, and L is the input dimension. We assume that each column of X is normalized to have zero mean. Then, the optimal solution ŵ of (4-12) by ridge regression is

    ŵ_ridge = (R + δI)^{-1} P,    (4-13)

where I is the L×L identity matrix, δ > 0 is the regularization parameter, and R and P represent X^T X and X^T d, respectively. Notice that the matrix R + δI is invertible even if R is singular.

We can obtain some insights in the properties of ridge regression by the singular

value decomposition (SVD) of X. The SVD of X is given by

    X = U Λ V^T,    (4-14)

where U and V are N×L and L×L orthogonal matrices, respectively, and Λ is an L×L diagonal matrix with diagonal entries λ_1 ≥ λ_2 ≥ ... ≥ λ_L ≥ 0, called the singular values. Then, the

prediction outputs yielded by ridge regression can be written using the SVD as


    X ŵ = X (X^T X + δI)^{-1} X^T d = U Λ (Λ^2 + δI)^{-1} Λ U^T d = Σ_i u_i [λ_i^2 / (λ_i^2 + δ)] u_i^T d,    (4-15)


where u_i is the ith column of U. From (4-15), we can see that ridge regression finds the coordinates of d with respect to each orthonormal basis vector u_i, and then shrinks these coordinates by the factor λ_i^2 / (λ_i^2 + δ) (δ > 0). Therefore the coordinate with a smaller λ_i will be shrunk more. It is easy to show that the squared singular values λ_i^2 are proportional to the variances of the principal components of










X [Has01]. Hence, the smaller singular value corresponds to the direction of smaller

variance which is shrunk more by ridge regression.
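The equivalence between the direct ridge prediction and the SVD form in (4-15) can be verified numerically; the sketch below (our illustration, with arbitrary data) also exposes the shrinkage factors λ_i^2 / (λ_i^2 + δ):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 4))
X = X - X.mean(axis=0)
d = rng.standard_normal(40)
delta = 0.5

# Direct ridge prediction: X (X^T X + delta I)^(-1) X^T d
y_ridge = X @ np.linalg.solve(X.T @ X + delta * np.eye(4), X.T @ d)

# SVD form of (4-15): shrink each coordinate u_i^T d by s_i^2 / (s_i^2 + delta)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
shrink = s**2 / (s**2 + delta)
y_svd = U @ (shrink * (U.T @ d))
```

The singular values returned by `svd` are sorted in decreasing order, so the last shrinkage factor (the lowest-variance direction) is the smallest.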

Now we consider LASSO as an L1-norm based shrinkage method. The fundamental

difference between ridge regression and LASSO is the penalty term in the cost function:

    ŵ_LASSO = arg min_w E[(d − w^T x)^2],  subject to  Σ_i |w_i| ≤ t,    (4-16)

where w_i is the ith element of the weight vector w. The solution to this minimization problem

is no longer linear in d, and a quadratic programming algorithm is usually used to

compute the solution. The L1-norm penalty in (4-16) can make some weights be exactly

zero; thus LASSO is able to select a subset of inputs.
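Although a quadratic programming algorithm is needed in general, for orthonormal inputs the LASSO solution reduces to soft-thresholding of the OLS coefficients, which makes the zeroing behavior easy to see. A minimal sketch (our illustration, not the solver used in this work):

```python
import numpy as np

def soft_threshold(w_ols, t):
    """Closed-form LASSO solution for orthonormal inputs:
    shrink each OLS coefficient toward zero by t, clipping at exactly zero."""
    return np.sign(w_ols) * np.maximum(np.abs(w_ols) - t, 0.0)

w_ols = np.array([2.0, -0.3, 0.05, -1.2])
w_lasso = soft_threshold(w_ols, 0.5)   # [1.5, 0.0, 0.0, -0.7]
```

The two small coefficients are set exactly to zero, i.e., the corresponding inputs are deselected, whereas ridge regression would only shrink them.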

The relationship between subspace projection and ridge regression

We have seen that ridge regression shrinks all directions of principal components

of X, with different rates of shrinkage depending on the variance of each direction.

Subspace projection with PCA, on the other hand, selects the S (subspace dimension) highest-variance directions while ignoring the rest. PLS tends to shrink low-variance directions, while it can also reduce high-variance directions depending on the data [Fra93]. It follows from these facts that a hybrid subspace projection utilized for BMIs would behave in a similar fashion to PCA and PLS. Hence, we can see that ridge regression and the hybrid subspace projection manipulate solutions in a similar manner: they tend to shrink low-variance principal directions more. The difference is that ridge regression shrinks smoothly while subspace projection shrinks in discrete steps.

Comparison of shrinkage methods

A generalization of ridge regression and LASSO creates the criterion










    ŵ = arg min_w E[(d − w^T x)^2] + δ Σ_i |w_i|^p.    (4-17)



The penalty is the Lp-norm for p > 0. In Fig. 4-6, the contours of Σ_i |w_i|^p are illustrated in the

two-dimensional weight space. Note the difference of contour shapes between ridge

regression and LASSO. Since the contour for LASSO has corners, it is possible that the

performance surface hits the corner, causing one weight to be zero. If the dimension of

the parameter space increases, the contour shape for LASSO becomes a rhomboid, and

has more corners, flat edges and faces. Then, there are more chances to generate zero

coefficients. This geometric description illustrates why LASSO provides a sparser solution, including exactly zero coefficients, than ridge regression does.








(a) p = 4 (b) p = 2 (c) p = 1 (d) p = 0.5


Figure 4-6. Contours of the Lp-norm of the weight vector for various values of p in the 2D weight space.

Now let us look at the criterion (4-17) in a Bayesian framework. The penalty term

can be considered to represent the log-prior probability density function of w_i, with zero mean and variance 1/δ [Nea96]. The prior distribution of w_i differs depending on p. The L0-norm simply counts the number of nonzero parameters; this corresponds to subset selection of input variables [Fur74]. The L1-norm penalty corresponds to a Laplacian prior, and the L2-norm penalty to a Gaussian prior. Hence we can consider ridge regression, LASSO, and subset selection as Bayesian estimates of the solution to (4-17) with different priors for the weights.










Regularization Based on the L2-Norm Penalty

So far, the basic properties of shrinkage methods including ridge regression and

LASSO have been investigated. The applications of these methods to BMIs models will

be discussed in the remainder of this chapter.

We have seen that adding an identity matrix, scaled by the white noise power, to the input autocovariance matrix avoids singularity and helps shrink input variables in the directions of the eigenvectors corresponding to smaller eigenvalues. However, it is an open problem to determine the noise power, or the so-called regularization parameter (δ in equation 4-13). Even if we want to determine the regularization parameter empirically,

we need to follow a systematic procedure. One of the most popular procedures is cross-validation, but it requires a separate validation set and is not adequate for real-time operation. For the real-time implementation of BMIs, therefore, we need a different

procedure without generating an explicit validation set. One feasible approach is to

maintain the balance between the noise power represented by the regularization

parameter and the input signal power estimated by eigenvalues. In this approach, the

input signal to noise power ratio (SNR) is estimated by

    SNR = tr[R] / δ,    (4-18)

where tr[R] denotes the trace of the input covariance matrix R. From this estimation, we

can approximate 6 as,

    δ = tr[R] / SNR    (4-19)

for a desirable SNR. For instance, if we want to ensure that the input SNR is kept greater

than 30dB with tr[R] computed as 0.1, then the regularization parameter is determined to









be 10^-4. This estimation procedure for the regularization parameter will be particularly

useful in BMI implementation when we seek the analytical estimate of the parameters of

a linear filter in real-time with a large number of neurons for which the inversion of the

input autocorrelation matrix is not guaranteed.
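The rule in (4-18)-(4-19) is a one-line computation; the sketch below reproduces the 30 dB example from the text (the function name and the diagonal R are our illustrative choices):

```python
import numpy as np

def delta_from_snr(R, snr_db):
    """Equation (4-19): delta = tr[R] / SNR for a desired input SNR (in dB)."""
    return np.trace(R) / (10.0 ** (snr_db / 10.0))

# The example from the text: tr[R] = 0.1 and a desired SNR of at least 30 dB
R = np.diag([0.04, 0.03, 0.02, 0.01])
delta = delta_from_snr(R, 30.0)   # approximately 1e-4
```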

Weight decay can be viewed as a simple on-line method to minimize the criterion function in (4-17) with p = 2, using the stochastic gradient and updating the weights by

    w(n+1) = w(n) + η_w (∇C(n) − δ w(n)),    (4-20)

where ∇C(n) = −∂E[e^2(n)]/∂w(n), and η_w is a learning rate for the weight vector.
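One possible reading of the update (4-20) is an LMS-style gradient step plus a decay term; the sketch below (our interpretation, with illustrative learning rate and δ) shows the weights converging near the true solution while being pulled slightly toward zero:

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([1.0, 0.0, -1.0])   # hypothetical true model
w = np.zeros(3)
eta, delta = 0.05, 0.01               # illustrative step size and decay

for n in range(5000):
    x = rng.standard_normal(3)
    e = w_true @ x - w @ x            # instantaneous error e(n)
    w = w + eta * (e * x - delta * w) # gradient step plus decay term
```

After training, w is close to w_true but with a slightly smaller norm, which is exactly the shrinkage effect of the decay term.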

Instead of determining δ by the input SNR, we can use an adaptive procedure to estimate the optimal value from the data. Larsen et al. [Lar96] proposed that δ can be optimized by minimizing the generalization error with respect to δ. Following this

procedure, we utilize K-fold cross-validation [Gei75], which divides the data into K randomly chosen disjoint sets, to estimate the average generalization error empirically as


    ε̄ = (1/K) Σ_{k=1}^{K} ε_k,    (4-21)

where ε_k is the validation MSE for the kth set. Then, the optimal regularization parameter

is learned by using gradient descent as,


    δ(k+1) = δ(k) − η_δ ∂ε̄/∂δ,    (4-22)


where δ(k) is the estimate of δ at the kth iteration, and η_δ is a learning rate for the regularization parameter. The detailed procedure for estimating ∂ε̄/∂δ using weight decay is given in Larsen et al. [Lar96].
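The estimate (4-21) can be sketched as follows (our illustration; the fold assignment, δ, and data are arbitrary), refitting the ridge solution on the remaining folds for each validation set:

```python
import numpy as np

def kfold_cv_error(X, d, delta, K=10, seed=0):
    """Equation (4-21): average validation MSE over K random disjoint folds,
    refitting the ridge solution (4-13) on the other K-1 folds each time."""
    idx = np.random.default_rng(seed).permutation(len(d))
    folds = np.array_split(idx, K)
    errs = []
    for k in range(K):
        val = folds[k]
        trn = np.concatenate([folds[j] for j in range(K) if j != k])
        A = X[trn].T @ X[trn] + delta * np.eye(X.shape[1])
        w = np.linalg.solve(A, X[trn].T @ d[trn])
        errs.append(np.mean((d[val] - X[val] @ w) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 6))
d = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(200)
err = kfold_cv_error(X, d, delta=1e-3)
```

With observation noise of standard deviation 0.1, the estimated generalization error comes out close to the noise floor of 0.01.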










In the experiment, we set K = 10 and η_δ = 10^-6, and update δ until the difference |δ(n+1) − δ(n)| becomes less than 10^-3. The number of training samples is 9,000 and the number of validation samples is 1,000. The term ∇C(n) in (4-20) is estimated by NLMS. During training, δ converges to 1.36×10^-5 for food reaching and 1.02×10^-5 for target reaching, respectively, as depicted in Fig. 4-7. Then, we train the filter with a fixed δ using

the entire training samples (10,000) to obtain the regularized model. The histogram of the

weight magnitude computed over all the coordinates of hand position is depicted in Fig.

4-8 to demonstrate the effect of weight decay. Note that the number of weights that have

smaller magnitudes increases with weight decay. For instance, the number of weights that

are close to zero is approximately 345 for weight decay versus 75 for NLMS in Fig. 4-8a,

and 460 for weight decay versus 150 for NLMS in Fig. 4-8b. It shows that more weights

are pruned by weight decay, thus the effective degree of freedom of the model reduces.

The reduced degree of freedom can help generalization as examined by measuring

performance in the test data. Empirical performance measures in the test dataset show

that regularization using weight decay improves the generalization performance over the model trained by NLMS without regularization.







Figure 4-7. Convergence of the regularization parameter δ(n) over iterations; (a) food reaching, and (b) target reaching.













Figure 4-8. Histograms of the magnitudes of weights over all the coordinates of hand position; (a) and (b) as discussed in the text.

Regularization Based on the L1-Norm Penalty





The least angle regression (LAR) algorithm has been recently developed to

accelerate computation and improve performance of forward model selection methods. It

has been shown in Efron et al. that simple modifications to LAR can implement the

LASSO and the forward stagewise linear regression [Efr04]. Essentially, the LAR

algorithm requires the same order of computational complexity as the ordinary least

squares (OLS).

The selection property of LAR, which leads to zeroing coefficients, is preferable

for identification of sparse systems when compared to regularization methods with the

L2-norm penalty. Also, the analysis of the selection process often provides better insight into the unknown system than the L2-norm based shrinkage methods.

The LAR procedure starts with all coefficients initialized to zero. The input variable having the most correlation with the desired response is selected. We proceed in the direction of the selected input with a step size determined such that some other input variable comes to have as much correlation with the current residual as the first input. Then, we move in the equiangular direction between these two inputs until a third input has the same correlation. This procedure is repeated until either all input variables join the selection, or the sum of the coefficient magnitudes meets a preset threshold (constraint). Note that the maximum correlation between the inputs and the residual decreases over successive selection steps in order to de-correlate the residual from the inputs.

Table 4-1 summarizes the details of the LAR procedure [Efr04].

An illustration in Figure 4-9 (cited from Efron et al. [Efr04]) helps in understanding how the LAR algorithm proceeds. In this figure, we start to move along the first selected input variable x1 until the next variable (x2 in this case) has the same correlation

Table 4-1. Procedure of the LAR algorithm
Given an N×M input matrix X (each row being an M-dimensional sample vector) and an
N×1 desired response vector y, initialize the model coefficients β_l = 0 for l = 1, ..., M,
and let β = [β_1, ..., β_M]^T. Then the initial LAR estimate becomes ŷ = Xβ = 0.
Transform X and y such that Σ_{i=1}^{N} x_ij = 0, Σ_{i=1}^{N} x_ij^2 = 1, and Σ_{i=1}^{N} y_i = 0 for j = 1, ..., M.
(a) Compute the current correlations c = X^T (y − ŷ).
(b) Find C_max = max_j |c_j|, and the active set A = {j : |c_j| = C_max}.
(c) Let X_A = [..., sign(c_j) x_j, ...] for j ∈ A.
(d) Let Φ = X_A^T X_A, and α = (1_A^T Φ^{-1} 1_A)^{-1/2}, where 1_A is a vector of ones with a length
equal to the size of A.
(e) Compute the equiangular vector u = X_A (α Φ^{-1} 1_A), which has unit length. Note
that X_A^T u = α 1_A (the angles between all inputs in A and u are equal).
(f) Compute the step size, γ = min+_{j ∈ A^c} { (C_max − c_j)/(α − a_j), (C_max + c_j)/(α + a_j) },
where min+ indicates considering only positive minimum values over possible j.
(g) Compute a, defined as the inner products between all inputs and u: a = X^T u.
(h) Update ŷ ← ŷ + γu.
Repeat (a)-(h) until all inputs join the active set A, or Σ_l |β_l| exceeds the given threshold.
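Steps (c)-(e) of Table 4-1 can be checked numerically: the equiangular vector u has unit length and makes the same inner product α with every input in the active set. A small sketch (ours, with an arbitrary three-input active set):

```python
import numpy as np

rng = np.random.default_rng(4)
# A hypothetical active set of three standardized inputs (signs already absorbed)
X = rng.standard_normal((30, 3))
Xc = X - X.mean(axis=0)
Xa = Xc / np.linalg.norm(Xc, axis=0)         # zero-mean, unit-length columns

ones = np.ones(3)
Phi = Xa.T @ Xa                               # step (d)
alpha = float(ones @ np.linalg.solve(Phi, ones)) ** -0.5
u = Xa @ (alpha * np.linalg.solve(Phi, ones)) # step (e): equiangular vector
```

By construction, ||u|| = 1 and Xa^T u = α·1, i.e., u makes equal angles with all active inputs.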











with the residual generated by x1. Here u1 is the unit vector in this direction, as computed in Table 4-1(e). The amount of movement along u1, denoted as γ1, is computed by the equation in Table 4-1(f). ȳ1 denotes the OLS estimate of the desired response y with input x1. Note that the estimate by LAR (ŷ1) moves toward ȳ1, but does not reach it. The next direction u2 bisects the angle between x1 and x2 (the equiangular vector in the two-dimensional space of x1 and x2) such that the angle between x1 and the updated residual (r1 = ȳ2 − γ1 u1) is the same as the one between x2 and r1. Since every input is standardized, the correlation, which is measured by the inner product of x1 and x2, corresponds to the angle between x1 and x2; hence these two variables have the same absolute correlation with r1, following the equation in Table 4-1(a). The step size γ2 is computed again following Table 4-1(f) such that x3 has the same absolute correlation with the next residual r2 = ȳ3 − (γ1 u1 + γ2 u2) as x1 and x2. So, the next direction u3 makes equal angles with x1, x2, and x3. This procedure is repeated until the L1-norm of the coefficients reaches a given threshold.

LAR can be easily modified to implement LASSO: when some coefficients cross zero at a given step, those are forced to be zero, and the corresponding inputs are





Figure 4-9. An illustration of the LAR procedure.









removed from the selected active set. The LAR procedure can be continued with the remaining inputs since they still have the same absolute correlation with the current residual.

There are two major considerations in the implementation of LAR. First, LAR assumes linear independence between the input variables. Second, the determination of the threshold for the L1-norm of the coefficients is an open problem and depends on the data.

The performance of the linear model learned by LAR can be greatly influenced by a

choice of this threshold.

If we attempt to apply LAR to the linear model in BMIs, difficulties lie in the fact

that the embedded inputs are likely to be correlated with each other (although they might

be linearly independent), so that LAR might not be able to operate optimally. Also,

finding an optimal threshold will be a nontrivial task.2

Despite these difficulties, we test the performance of the linear model learned by

LAR with the food reaching and the target reaching datasets. The threshold is determined

by hold-out cross-validation. The performance measures computed on the test data show that LAR performs at a similar level to weight decay. This may indicate that the

difficulties in the implementation of LAR could prevent it from improving generalization

further compared to weight decay. We will skip the presentation of the numerical

performance results of the linear models with LAR since they are very similar to those

with weight decay.






2 We can utilize cross-validation as in the case of the gamma filter or subspace projection. However, the range of the search for the threshold will become much broader.















CHAPTER 5
NONLINEAR MIXTURE OF MULTIPLE LINEAR MODELS

In the design of decoding models for BMIs, there have been a number of

approaches including linear and nonlinear models, e.g., the Wiener filter, the Kalman filter, time delay neural networks (TDNN), recursive multilayer perceptrons (RMLP), and so on. These modeling frameworks have successfully predicted target hand trajectories using only neuronal activity signals.

However, an important consideration in designing BMIs is the feasibility of the

approach taken. The target applications necessitate real-time implementations with

minimal computational and hardware requirements. On one hand, linear models are

usually the best in terms of their computational requirements. On the other hand, a simple

linear model is often insufficient to accurately capture the complex input-output

relationships between neural activity and hand position. Recently, a performance

comparison has been conducted between linear and nonlinear modeling approaches, and

the latter was found to be favorable [San02a].

In this chapter, we aim to demonstrate that the target mapping between the neural

activity and the hand trajectories can be discovered using a divide-and-conquer approach.

In this approach, we combine the simplicity of training linear models with the

performance boost that can be achieved by nonlinear methods. Specifically, a two-stage

structure is used where the first stage consists of a bank of competitively trained linear

filters and the second stage consists of a single-hidden-layer multilayer perceptron

(MLP) (see Fig. 5-1). Model comparison in the next chapter will demonstrate the









outstanding performance of this approach among various models for the food reaching

BMI data.

Nonlinear Mixture of Linear Models Approach

In this section, we describe the modeling approach using nonlinear mixture of

competitive linear models (NMCLM). A brief description of TDNN will also be provided

for comparison purposes.

Nonlinear Mixture of Competitive Linear Models

The overall architecture of NMCLM is identical to a single hidden layer TDNN as

shown in Fig. 5-1. However, the training procedure undertaken here is significantly

different. This modeling method uses the divide-and-conquer approach. Our reasoning is

that a complex nonlinear modeling task can be elucidated by dividing it into simpler

linear modeling tasks and combining them properly [Far87]. Previously, this approach

was successfully applied to nonstationary signal segmentation, assuming that a

nonstationary signal is a combination of piecewise stationary signals [Fan96].

Hypothesizing that the neural activity will demonstrate varying characteristics for

different localities in the space of the hand trajectories, we expect the multiple model

approach, in which each linear model specializes in a local region, to provide a better

overall input-output mapping. However, the problem is different here since the goal is not

to segment a signal but to segment the joint input/desired signal space.

The topology allows a two-stage training procedure that can be performed

sequentially in off-line training; first, competitive learning for the local linear models and

then error backpropagation learning for the MLP. It is important to note that in this

scheme, both the linear models and the MLP are trained to approximate the same desired

response, which is the hand trajectory of a primate.































________________________________________


Figure 5-1. An overall diagram of the nonlinear mixture of competitive linear models.

The training of the multiple linear models is accomplished by competitively (hard

or soft competition) updating their weights in accordance with previous approaches using

the NLMS algorithm. The winning model is determined by comparing the (leaky)

integrated squared errors of all competing models and selecting the model that exhibits

the least integrated error for the corresponding input [Fan96]. The leaky integrated

squared error for the ith model is given by

    ε_i(n) = (1 − μ) ε_i(n−1) + μ e_i^2(n),  i = 1, ..., M,    (5-1)

where M is the number of models and μ is the time constant of the leaky integrator.

Then, the jth model wins the competition if ε_j(n) < ε_i(n) for all i ≠ j. If hard competition is employed, only the weight vector of the winning model is updated. Specifically, if the jth model wins the competition, the update rule for the weight vector w_j(n) of that model is

given by









    w_j(n+1) = w_j(n) + η e_j(n) x(n) / (γ + ||x(n)||^2),    (5-2)

where e_j(n) is the instantaneous error, and x(n) is the current input vector; η represents a learning rate and γ is a small positive constant used for normalization. If soft competition

is used, a Gaussian weighting function centered at the winning model is applied to all

competing models. Every model is then updated proportional to the weight assigned to

that model by this Gaussian weighting function such that

    w_i(n+1) = w_i(n) + η(n) Λ_{j,i}(n) e_i(n) x(n) / (γ + ||x(n)||^2),  i = 1, ..., M,    (5-3)

where w_i is the weight vector of the ith model. Assuming the jth model wins the competition, Λ_{j,i}(n) is the weighting function defined by


    Λ_{j,i}(n) = exp(− d_{ij}^2 / (2 σ^2(n))),    (5-4)

where d_{ij} is the Euclidean distance between indices i and j, which is equal to |j − i|, η(n) is the annealed learning rate, and σ(n) is the Gaussian kernel width, which decreases exponentially as n increases. The learning rate also decreases exponentially with n.

Soft competition preserves the topology of the input space, updating the models

neighboring the winner; thus it is expected to result in smoother transitions between

models specializing in topologically neighboring regions (of the state space). However,

the empirical comparison using BMIs data between hard and soft competition update

rules shows no significant difference in terms of model performance (possibly due to the

nature of the data set). Therefore, we prefer to utilize the hard competition rule for its

simplicity.
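The hard-competition loop of (5-1)-(5-2) can be sketched as follows (our illustration: a piecewise-stationary desired signal generated by three hypothetical "true" models, with arbitrary μ, η, and γ); the winner of the final segment ends up specialized to it:

```python
import numpy as np

rng = np.random.default_rng(5)
M, dim = 3, 4
true_W = rng.standard_normal((M, dim))      # one "true" model per segment
W = 0.1 * rng.standard_normal((M, dim))     # competing linear models
eps = np.zeros(M)                           # leaky integrated squared errors
mu, eta, gamma = 0.3, 0.5, 1e-4

for n in range(6000):
    seg = n // 2000                         # piecewise-stationary desired signal
    x = rng.standard_normal(dim)
    d = true_W[seg] @ x
    e = d - W @ x                           # instantaneous errors of all models
    eps = (1 - mu) * eps + mu * e**2        # equation (5-1)
    j = int(np.argmin(eps))                 # hard competition: pick the winner
    W[j] += eta * e[j] * x / (gamma + x @ x)  # NLMS update (5-2), winner only

# The final winner should have specialized in the last segment
j = int(np.argmin(eps))
x_test = rng.standard_normal((200, dim))
mse = float(np.mean((x_test @ true_W[2] - x_test @ W[j]) ** 2))
```

The winning model's integrated error drops quickly, so it keeps winning (and adapting) throughout its segment, which is the specialization mechanism described above.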










With the competitive training procedure, each model can specialize in local regions

in the joint space. Figure 5-2 demonstrates the specialization of 10 trained models by

plotting their outputs (black dots) with the common input data (40 seconds long) in the

3D hand trajectory space. Each model's outputs are simultaneously plotted on top of the

actual hand trajectory (red lines) synchronized with the common input. The figure shows

that the input-output mappings learned by each model display some degree of

localization, although overlaps are still present. These overlaps may be consistent with a

neuronal multiplexing effect as depicted in Carmena et al. [Car03], which suggests that

the same neurons modulate for more than one motor parameter (the x- and y-coordinates

of hand position, velocity, and gripping force).


Figure 5-2. Demonstration of the localization of competitive linear models.

The competitive local linear models, however, require additional information for

switching when applied to BMIs, since the desired signal that is necessary to select a

winning model is not available after training in practice. A gate function, as in the mixture of experts [Jac91], utilizing the input signals needs to be trained to select a local model. Here,

we opt for a MLP that directly combines the predictions of all models. Therefore, the

overall architecture can be conceived as a nonlinear mixture of competitive linear models









(NMCLM) [Kim03b]. This procedure facilitates training of each model compared to the

TDNN, since only one linear model is trained at a time in the first stage, while only a

relatively small number of weights are trained by error backpropagation [Hay96b] in the

second stage.

Time Delay Neural Networks

In the TDNN, the mapping between neural activity and hand trajectories is

estimated by nonlinearly combining bin counts (and their past values) from each neuron.

The tap delay lines in the input layer preset the memory to account for temporal

dependencies in neural activity. This architecture has a single hidden layer with sigmoid

nonlinearities, and the output layer with linear processing elements (PEs). The output of

the TDNN is given by y(n) = W2 f(W1 x(n) + b1) + b2, where the weight matrices and bias vectors W1, W2, b1, and b2 are trained by the error backpropagation algorithm.
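A minimal forward pass of this architecture can be written directly from the equation above (the layer sizes here are illustrative, not the thesis topology):

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_hid, n_out = 20, 30, 3      # illustrative sizes: embedded inputs -> 3D output
W1 = 0.1 * rng.standard_normal((n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = 0.1 * rng.standard_normal((n_out, n_hid)); b2 = np.zeros(n_out)

def tdnn_output(x):
    """Forward pass y(n) = W2 f(W1 x(n) + b1) + b2 with f = tanh."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

y = tdnn_output(rng.standard_normal(n_in))
```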

BMIs Design Using NMCLM

NMCLM is trained with the same sets of data used for the Wiener filter in chapter

3. The topology consists of 10 competitive linear models for each coordinate and a single

hidden layer MLP with M inputs (M = 10·C, where C is the output dimension: 2 or 3), 30 hidden PEs with hyperbolic tangent (tanh) functions, and C linear output PEs to predict each hand

position coordinate. Each linear model has the same topology as the one used in chapter

3. The number of multiple models and the number of hidden PEs were chosen

empirically (although they were not optimized). The hard competition learning rule is utilized

along with NLMS for the training of linear models and the conjugate gradient algorithm

is used to train the MLP. The training of the MLP is repeated with 100 random initial

conditions and the network with the least MSE is selected. The time constant of the leaky

integrator (μ) is determined by the hold-out cross-validation method. The data is divided









into a 9,000-sample training set and a 1,000-sample validation set. The resulting values of μ are 0.3 for the food reaching task and 0.6 for the target reaching task.

TDNN is trained with the same input and desired response as in NMCLM. The 30

PEs in the hidden layer use tanh nonlinearities. All the weights and biases are trained by

the error backpropagation algorithm.

Even with the simpler training approach, there are over 30,000 parameters in

NMCLM to be trained. Each linear model with around 3,000 parameters is trained with a

fraction of the total number of samples (only the ones pertaining to its local area of the

space), which is a heavy parameter load for the restricted number of training samples. With linear models built from gamma filters, we can significantly reduce the number of parameters in the first layer of NMCLM, while preserving the same level of

computational complexity in training.

As will be shown in the next chapter, NMCLM results in superior generalization

performance compared to other linear models and the TDNN for food reaching.

Substitution of the gamma filters for the FIR filters also improves the performance

further. Due to the difficulty of training a large number of parameters in the TDNN with

error backpropagation, its performance suffers compared even with the linear models.

However, these nonlinear models do not exhibit any significant improvement for target

reaching. This will be discussed in the following chapter.

Analysis

Evaluation of Training Performance for NMCLM

Now, we demonstrate the advantage of training in NMCLM compared to the

TDNN using the food reaching data. The topology proposed in NMCLM is basically

equivalent to a three-layer network: the first layer of weights consists of the competitive









model coefficients, the second and third layer of weights are simply the weights of the

following MLP. In this topology, the first hidden layer and the output layer have linear

PEs, whereas the second hidden layer has nonlinear PEs. In the NMCLM approach, the

first layer weights are trained competitively to predict the desired signal, whereas the

MLP is optimized using error backpropagation.

In order to quantify the performance of this training procedure from an

information-theoretic point-of-view, we evaluate the mutual information [Cov91],

I(ze,d), between the outputs of the competitive models, ze, and the desired output, d.

Using a Parzen window estimator for the mutual information [Erd02] on ten arbitrary

segments of the hand trajectory (each of length 1000 samples), the average and standard

deviation of I(ze,d) is found to be 8.97 nats (± 1.21 nats). The maximum mutual information allowed by this model and data, obtained by estimating I(ze,d), is 9.83 nats (± 1.19 nats). Percentage-wise, the information contained in the competitive model outputs pertaining to the desired output is thus 92% (± 6%). From this, we conclude that the information loss in the first layer is just 8% (± 6%).

For comparison, another network with the same topology is trained as follows: The

MLP weights are borrowed from the second hidden layer and the output layer of the

above network (in order to ensure identical information loss at this stage). The first layer

weights are then trained using standard backpropagation through these MLP weights,

instead of using competition. This network, therefore, uses the minimum MSE solution

for the first layer weights. Similarly, the mutual information I(zB,d) between the output of

the first layer of this network, zB, and the desired output d is calculated to be 7.42 nats (± 1.35 nats). For this network, the maximum mutual information is 10.90 nats (± 0.40 nats).










These correspond to an information-transfer percentage of 68% (± 11%). Therefore, the information loss in the first layer of the second network is 32% (± 11%).

In summary, the mutual information between the desired output and the

competitive model outputs is larger than that with the first-layer outputs of the equivalent TDNN

(all the weights are trained only by error backpropagation), which shows that the training

in NMCLM is more efficient.

Analysis of Linear Filters

It is intriguing to ask what the value is of adapting the parameters in the input layer, where most of the weights reside. To address this, we analyze the pole-zero plots of the trained FIR filters for each neuron from the multiple linear models. In this analysis, we verify that there are only minor variations in the pole-zero plot, no matter which neuron or adaptation procedure is considered. Figure 5-3 shows the frequency responses of the 10

linear filters (with 10-tap delay line) in NMCLM for the food reaching task for a specific

neuron. These frequency responses indicate that they are all lowpass filters and the

locations of the zeros (denoted by different markers for different models) for all models

are similar. This means that the role of the filters is to lowpass filter (smooth) the input. As

depicted, the zeros tend to be placed at equal intervals very close to the unit circle. The

major difference seems to be the gain at DC.

Hence, one can synthesize an alternate adaptive filter that displays a very similar

response and has only two free parameters, as

    H(z) = G (1 + z^-10) / (1 − a z^-1),    (5-5)

where the two free parameters encode the gains (G) and the locations of the pole of the








Figure 5-3. Frequency response of ten FIR filters; (left) pole zero plots, (right) frequency
responses.

filter for each neuron (a), imposing the constraint |a| < 1. The number of NMCLM

weights with this filter for the estimation of one output coordinate can then be reduced

from 30,630 to 6,870. The performance evaluation of this simplified model shows a

slightly lower level compared to the original performance (the performance profile is

similar to the Wiener filter for the prediction of movement, while superior to the linear

models for rest). This indicates that a variable gain control and a variable integration

over time per neuron seem sufficient to derive optimal models for BMIs. These

characteristics can be obtained by a multitude of systems that can be much easier to

implement and do not even require adaptation. Further work will be pursued along this

line.
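The lowpass behavior of (5-5) can be confirmed by evaluating the response on the unit circle; the sketch below (ours, with illustrative G and a) shows a large DC gain and equally spaced nulls contributed by the 1 + z^-10 factor:

```python
import numpy as np

def H(w, G=1.0, a=0.9):
    """Equation (5-5): H(z) = G (1 + z^-10) / (1 - a z^-1) on the unit circle."""
    z_inv = np.exp(-1j * w)
    return G * (1.0 + z_inv**10) / (1.0 - a * z_inv)

w = np.linspace(0.0, np.pi, 512)
mag = np.abs(H(w))
# The ten zeros of 1 + z^-10 are equally spaced on the unit circle, and the
# pole at z = a (|a| < 1) sets a large DC gain: |H(0)| = 2G / (1 - a).
```

With a close to 1, the gain and the pole location act as the two free parameters per neuron, as described above.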










3 The numerical results are as follows: CC(move) = 0.76 (± 0.18), SER(move) = 4.61 dB (± 2.31 dB), CC(rest) = 0.01 (± 0.26), SER(rest) = 7.67 dB (± 4.43 dB). See chapter 6 for the comparison of these results with others.















CHAPTER 6
COMPARISON OF MODELS

In this chapter, we summarize the evaluation of the generalization performance for

all models introduced so far in this dissertation. We emphasize, however, that the

comparison is done for the datasets of 100-200 simultaneously recorded neurons for

which the standard Wiener filter algorithm yielded very good performance. With the

increase of the number of simultaneously recorded neurons, task complexity, and

complexity of predicted motor parameters, what we see only as tendencies in this comparison may become important for BMI designs.

Before presenting comparison results, we first demonstrate the outputs of every

model along with the actual hand traj ectories for food reaching in Fig. 6-1 and for target

reaching in Fig. 6-2, respectively. Since our approaches have been developed by

assigning the Wiener filter as a golden standard, observations in these Eigures are likely to

be made mainly by comparing traj ectories of models with that of the Wiener filter. First,

we can observe that NLMS predicts rest positions better than the Wiener filter in Fig. 6-1b. This illustrates how a time-varying learning rate in NLMS can help track nonstationary data. Next, we can see that the regularized models yield smoother output trajectories than the Wiener filter, especially during rest. Also, it is easily seen that NMCLM provides the most accurate prediction in Fig. 6-1f. NMCLM shows its ability to stay at the rest position with little jitter, and to track rapid changes of the hand trajectory during movements. This may be due to the nonlinear structures in NMCLM.

On the other hand, in Fig. 6-2, all models show similar prediction performance for


















































Figure 6-1. The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x-, y-, and z-coordinates for the 3D food reaching task on a sample part of the test data; (a) the Wiener filter, (b) the linear filter with NLMS, (c) the subspace Wiener filter, (d) the gamma filter, (e) the linear filter regularized by weight decay, and (f) NMCLM.



Figure 6-2. The actual hand trajectory (dotted red line) and the estimated hand trajectory (solid black line) in the x- and y-coordinates for the 2D target reaching task on a sample part of the test data. (a) the Wiener filter, (b) the linear filter with NLMS, (c) the subspace Wiener filter, (d) the gamma filter, (e) the linear filter regularized by weight decay, and (f) NMCLM.










target reaching. No model visually outperforms the others in the output trajectories.

Performance measures presented later will demonstrate this similarity of performance

(although there are statistical differences between models).

Comparison of Model Parameters

We now compare the weights of four linear models: the Wiener filter, the linear

model trained by NLMS, the gamma filter, and the linear model regularized by weight

decay. Since the number of tap delays is different among models, the weights must be

represented based on neurons (not every tap of different time lag). Hence, we compute

the average value of the weight magnitudes over tap delays and over three (or two) output

dimensions. Then, the standard deviation of each neuronal channel estimated from the

training set is multiplied by the average magnitude to obtain a measure of neuronal

contribution; that is, the average sensitivity of the output to individual neurons [SanO3a].

Figure 6-3 shows the calculated sensitivities in each model for both food reaching and

target reaching. Note that we rescale the sensitivity values to be in [0, 1] in order to

facilitate the visual comparison.
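The sensitivity computation described above fits in a few lines; the shapes and variable names below are illustrative assumptions, not the dissertation's code:

```python
import numpy as np

# Hypothetical trained weights, organized as (neurons, tap delays, outputs),
# and the training input of binned firing rates, (samples, neurons).
rng = np.random.default_rng(0)
W = rng.normal(size=(104, 10, 3))
X_train = rng.poisson(3.0, size=(2000, 104)).astype(float)

# Average weight magnitude over tap delays and output dimensions, then
# scale by each neuron's standard deviation on the training set.
avg_mag = np.abs(W).mean(axis=(1, 2))
sensitivity = X_train.std(axis=0) * avg_mag

# Rescale to [0, 1] to ease visual comparison across models.
sensitivity = sensitivity / sensitivity.max()
```

Multiplying by the per-neuron standard deviation converts raw weight magnitudes into a measure of how much each neuron's typical activity actually moves the output.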

It can be observed in Fig. 6-3a that the normalized weight magnitude distributions are

similar among models except for the gamma filter. The weight distribution of NLMS follows that of the Wiener filter, but it exhibits smaller magnitudes when the

corresponding neurons do not contribute much. This may explain the regularization

property of NLMS with the constraint on the weights as presented in chapter 4. Weight

decay also prunes weights, generating the sparse weight distribution, which can enhance

generalization. The weight distribution of the gamma filter might differ from the others since it utilizes a different time scale. It weights neurons 57, 84, 87, and 94 more heavily; neuron 57 is the neuron with the highest firing rate, and neuron 94 is one of the










highest sensitivity neurons according to the analysis in Sanchez et al. [SanO3b]. For the

target reaching task as shown in Fig. 6-3b, all models present similar weight magnitude

distributions, which may explain the similar performance of all models.








Figure 6-3. The distributions of normalized weight magnitudes of four linear models over neuronal space for (a) food reaching, and (b) target reaching.










Performance Evaluation

Tables 6-1 and 6-2 summarize the generalization performances of all models using

measures introduced in chapter 3. For food reaching, there are ten reaching movements in

the test data for which the performances are measured. The CEM curves of all models are

presented in Fig. 6-4. Since the CEM curve measures the probability that the distance

between the estimated and actual hand positions is less than a given quantity represented

Table 6-1. The generalization performances of linear models and nonlinear models for
the 3D food reaching task.

Measures        # of weights  CC (move)    SER (move) (dB)  CC (rest)    SER (rest) (dB)
Wiener          2973          0.76 ± 0.19  4.76 ± 1.87      0.03 ± 0.22  2.40 ± 2.80
NLMS            2973          0.75 ± 0.20  4.85 ± 2.11      0.06 ± 0.22  3.40 ± 2.76
Gamma           1191          0.78 ± 0.19  5.25 ± 1.97      0.07 ± 0.21  3.59 ± 3.11
Subspace        1113          0.77 ± 0.18  4.84 ± 2.06      0.09 ± 0.20  3.78 ± 2.57
Weight decay    <2973         0.77 ± 0.18  4.73 ± 2.04      0.07 ± 0.22  3.76 ± 2.78
Kalman          1017          0.78 ± 0.20  4.32 ± 1.97      0.05 ± 0.25  2.26 ± 3.85
TDNN            29823         0.77 ± 0.17  4.87 ± 2.56      0.02 ± 0.22  3.29 ± 5.67
NMCLM (FIR)     30753         0.81 ± 0.15  5.90 ± 3.00      0.03 ± 0.22  5.64 ± 4.00
NMCLM (Gamma)   12933         0.81 ± 0.19  6.08 ± 3.19      0.06 ± 0.23  6.23 ± 5.23


Table 6-2. The generalization performances of linear models and nonlinear models for
the 2D target reaching task.

Measures        # of weights  CC (x)       SER (x) (dB)  CC (y)       SER (y) (dB)
Wiener          3842          0.66 ± 0.02  2.42 ± 0.54   0.48 ± 0.10  1.08 ± 0.52
NLMS            3842          0.68 ± 0.03  2.42 ± 0.55   0.50 ± 0.08  0.90 ± 0.49
Gamma           3842          0.70 ± 0.02  2.81 ± 0.69   0.53 ± 0.09  1.55 ± 0.43
Subspace        882           0.70 ± 0.03  2.80 ± 0.83   0.58 ± 0.08  1.90 ± 0.57
Weight Decay    <3842         0.71 ± 0.03  2.79 ± 0.92   0.57 ± 0.08  1.75 ± 0.46
Kalman          1188          0.71 ± 0.03  2.77 ± 0.65   0.58 ± 0.10  1.63 ± 0.76
TDNN            57691         0.65 ± 0.03  2.24 ± 0.59   0.51 ± 0.08  1.10 ± 0.39
NMCLM (FIR)     58622         0.67 ± 0.03  2.62 ± 0.53   0.50 ± 0.07  1.23 ± 0.40
NMCLM (Gamma)   58622         0.67 ± 0.02  2.55 ± 0.61   0.47 ± 0.07  0.95 ± 0.40











on the x-axis, the closer the curve is to the upper left corner, the better the corresponding


model performs. To visualize the performance clearly, we give an instance of the CEM


profile for a certain distance; the models are listed in the order of Pr(|e| < 20 mm), where


the top model exhibits the highest probability. Figure 6-4 shows that the differences


among models are more distinguishable in the food reaching task than in the target hitting


task. Also, NMCLM demonstrates superior performances for the food reaching task,


while it does not improve performance for the target hitting task.
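The CEM curve is simply the empirical cumulative distribution of the error-vector magnitude; a minimal sketch (function and argument names are assumptions):

```python
import numpy as np

def cem_curve(estimated, actual, radii_mm):
    """Cumulative error metric: Pr(|e| < r) for each radius r, where e is
    the error vector between estimated and actual hand positions."""
    dist = np.linalg.norm(estimated - actual, axis=1)
    return np.array([(dist < r).mean() for r in radii_mm])
```

A model whose curve rises faster keeps more of its errors below small radii, which is why curves nearer the upper left corner indicate better performance.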




(a) (b)


Figure 6-4. Comparison of the CEM of the nine models for (a) the food reaching task,
and (b) the target reaching task.

Statistical Performance Comparison


To quantify the performance evaluations obtained above, we test the statistical


difference between the Wiener filter and all the other models [Kim05a]. We first assume


that the average magnitude of the error vector (E[|e|]) on the test data is a sufficient


measure of model performance. To compare the performance of different models, we test


the difference between the distributions of E[|e|]. E[|e|] is locally estimated in individual


4-second non-overlapping time windows through the test data (approximately 3,000 seconds long). Since a summation is used to estimate the mean, the set of E[|e|] can be









assumed to be drawn from a Gaussian distribution based on the central limit theorem

(CLT). Also, the use of non-overlapping windows can approximately satisfy the

independence condition between the estimates of E[|e|] from different windows.

Therefore, the t-test can be applied to the set of E[|e|].

In order to set up a test comparing one model with the Wiener filter, we first define Δ as the difference between E[|e|] for one of the other models and for the Wiener filter,

Δ(k) = E[|e|]_M(k) − E[|e|]_W(k),                    (6-1)

where E[|e|]_M(k) denotes the average magnitude of error vectors in the kth window for the model under comparison and E[|e|]_W(k) for the Wiener filter. Note that Δ is a Gaussian random variable since the linear combination of two Gaussian variables E[|e|]_M and E[|e|]_W is also Gaussian. Then, we apply the t-test to Δ with the realizations {Δ(k)}. The hypotheses for the one-tailed t-test then become,

H0 : E[Δ] ≥ 0
HA : E[Δ] < 0                    (6-2)

Given the significance level α, if the null hypothesis is rejected we can claim with the confidence level of (1 − α) that the compared model performs better than the

Wiener filter.
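The windowed test can be sketched as follows. The 4-second windows and the one-tailed alternative follow the text; the synthetic error data, the sampling rate, and the critical value (≈ −2.33 for α = 0.01 with many windows, by the normal approximation to the t distribution) are illustrative assumptions:

```python
import numpy as np

def windowed_mean_error(err_vectors, win_len):
    """E[|e|] estimated in non-overlapping windows of win_len samples."""
    n = (len(err_vectors) // win_len) * win_len
    mags = np.linalg.norm(err_vectors[:n], axis=1)
    return mags.reshape(-1, win_len).mean(axis=1)

def paired_t_statistic(delta):
    """t statistic of the per-window differences delta(k) against zero."""
    k = len(delta)
    return delta.mean() / (delta.std(ddof=1) / np.sqrt(k))

# Synthetic 3D error vectors; the compared model's errors are slightly smaller.
rng = np.random.default_rng(1)
e_wiener = rng.normal(size=(3000 * 5, 3))          # assume 5 samples/s
e_model = 0.9 * rng.normal(size=(3000 * 5, 3))
delta = (windowed_mean_error(e_model, 20)          # 4-second windows
         - windowed_mean_error(e_wiener, 20))
t = paired_t_statistic(delta)
reject_h0 = t < -2.33                              # one-tailed, alpha ~ 0.01
```

Rejecting H0 here corresponds to a "1" entry in Table 6-3: the compared model's windowed errors are significantly smaller than the Wiener filter's.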

The t-test results are presented in table 6-3. For the food reaching task, every model

performs better than the Wiener filter except the TDNN. Note that the TDNN shows

a higher mean SER during rest, but with a relatively large variance. For the target

reaching task, however, only three linear models pruned by regularization are shown to










outperform the Wiener filter. These results are fairly consistent with results in tables 6-1,

6-2, and Fig. 6-4.

Table 6-3. The t-test results for the difference of the magnitude of error vectors from the
test dataset between the Wiener filter and other models.
                    Food reaching     Target reaching
Significance level  0.01     0.05     0.01     0.05
NLMS                1        1        0        0
Gamma               1        1        1        1
Subspace            1        1        1        1
Weight Decay        1        1        1        1
Kalman              0        0        0        1
TDNN                0        0        0        0
NMCLM (FIR)         1        1        0        0
NMCLM (Gamma)       1        1        0        0


1The test result of 0 indicates acceptance of the null hypothesis, while 1 indicates rejection of the null hypothesis.















CHAPTER 7
MULTIRESOLUTION ANALYSIS FOR BMI

Most designs of decoding algorithms for BMIs, including our models, have used an estimate of the local firing rate of neurons obtained by binning neural spikes with a non-overlapping sliding time window of length ranging from 50ms up to 100ms

[Cha99, Ser02, Tay02 and Wes00]. These representations of the firing rate have been

used for modeling of the relationship with responsive motor parameters. Adaptive models

(including linear and nonlinear ones) based on this estimate have predicted motor

parameters with a correlation coefficient between 0.6 and 0.8. However, it has also been

shown in the previous chapter that all the models reached the same basic performance level

especially for the target reaching task, which may not be sufficient for more involved real

applications.

These results lead us to revisit our approaches for designing decoding models;

extracting advanced features from neural data followed by developing an adequate

mathematical decoding algorithm and topology may bring us a better decoding model.

Extracting desirable features in complex, high-dimensional neuronal data is, though, an

open problem, requiring intensive studies. Yet, we present here a simple approach by

considering the representational space for neuronal firing activity, which will

demonstrate how extracting features from the input can help to improve the model performance.

In our approach, we revise the present representation of a local firing rate, encoded

in a series of bin counts within a fixed width time window. Since a local firing rate can

represent the local frequency of a neural spike train, the features can be extracted based









on local frequency. One of the methods for the representation of local frequency

information is the multiresolution analysis [Mur04], usually realized in wavelet

transform. With the multiresolution analysis, it is possible to represent the time-frequency

characteristics of a signal. Basically, we can obtain as many local frequency components

as we want at a given time instance. Hence, the multiresolution analysis of neural spikes

may provide richer information about neuronal behavior compared to binning using a

fixed width time window.

If we consider the multiresolution analysis for spike trains, it is easy to see that the binning process is nothing but a discrete wavelet transform (DWT) using a Haar wavelet [Dau92]. However, since the original DWT is basically a non-causal process, a wavelet transform featuring causality should be considered. For this purpose, we employ the à trous wavelet transform [She92] to implement a causal DWT. With this procedure, the multiresolution analysis for spike trains can be regarded as binning spike trains with multi-scale windows. Hence, the decoding models, which have been designed upon the

bin count data, need not be fundamentally modified for the multiresolution data.

With the multiresolution data, however, the regularization of decoding models must

be considered due to the increased input dimensionality and the collinearity between

input channels. Among the number of regularization techniques used in data mining, the

method based on the L1-norm penalty will be more suitable since it is able to generate a

sparser model than others using the L2-nOrm penalty. It also enables us to understand the

association of neuronal activities with behavior, by selecting more correlated channels.

Similar work on the multiresolution analysis of neural spike trains has been done by various research groups. Lee has estimated the cross-spectrum using wavelet










analysis between simultaneously recorded spike trains, revealing the phase-locked

oscillation between spike trains [Lee02]. Laubach has demonstrated the wavelet-based

processing of spike trains from the motor cortex of a behaving rat [Lau04]. He utilized

discriminant pursuit (DP) [Buc95], which is based on wavelet analysis, to improve the

discriminant analysis methods for better statistical predictions of temporally localized

events. Cao has worked on the Haar wavelet analyses of spike trains to understand the

characteristics of spike trains and enhance the decoding models in neural prosthetic

systems [Cao03]. This work seems to be most relevant to our approach presented here.

However, one of the major differences is that he pruned wavelet coefficients by using

information theoretic measures (e.g., the mutual information) between each neuron and

behavior, followed by building decoding models (e.g., Bayesian classifiers) with those

pruned coefficients. On the other hand, we include all wavelet coefficients in the input channels of the linear model and prune inputs by a regularization technique. Therefore, in

our approach we can select wavelet coefficients which explicitly contribute to the output

of the designated model architecture, while the selected coefficients by the mutual

information method may not directly contribute to the specific decoding model. We also

propose to use the à trous wavelet transform instead of the standard DWT to link the multiresolution analysis with the binning process for real-time applications, which has not

been explicitly shown in Cao's works.

In this chapter, we design a linear model with the multiresolution input data for

BMIs, which is learned by the regularization method based on the L1-norm penalty. The

multiresolution input for each neuron is composed of the instantaneous spike count

binned by multiple time windows of various widths. We investigate the trained linear









model using the multiresolution input for the analysis of neuronal firing activities. Next, a

comparison of the multiresolution based model with the single resolution model is

demonstrated. For this comparison, each channel of the multiresolution input is

embedded by a time delay line in the same way as the single resolution model is formed

(see Figure 3-1 for this structure). The performances of two models are evaluated.

Finally, a combination of linear and nonlinear networks is considered to investigate the

possibility of the performance improvement over linear models. With the optimally

designed linear model using the multiresolution input, an additional nonlinear network is

added in order to further reduce residuals from the learned linear model. This approach

will help us to understand how much a nonlinear structure can help a linear model

when we utilize the multiresolution input.

We would like to remark here that the data used in this chapter are collected from

Aurora, which differs from the data used in previous chapters.

Multiresolution Analysis of Neuronal Spike Trains

An overall procedure of the multiresolution analysis in BMIs is as follows: The

multiresolution analysis based on the Haar wavelet is applied to spike trains of 185

neurons recorded in the cortical areas of a Rhesus monkey (Aurora); see chapter 2 for

data descriptions. The Haar a trous wavelet transform [Zhe99] is utilized to perform the

multiresolution analysis for individual spike trains. The resulting wavelet coefficients (or

equivalently, the multi-scale bin count data) are used as the input data to a linear model.

The linear model is learned by a regularization method to predict the hand trajectories.

The analysis of model parameters is performed to investigate the association of single

neurons with target reaching movements.









Multiresolution Analysis

The multiresolution analysis of a neural spike train can be performed via the

wavelet transform. To facilitate the computation, we apply the discrete wavelet transform

with the dyadic Haar wavelets. This dyadic Haar wavelet is basically utilized in the à trous wavelet transform, which can be implemented very effectively in hardware.

The Haar wavelet transform is the simplest form of wavelet and was introduced in

the earliest development of wavelet transform [Dau92]. Here, we only introduce the

functional form of the Haar wavelets. Details in the Haar wavelet transform can be found

in [Dau92]. Let us first define the Haar scaling function as,


φ(x) = 1 for 0 ≤ x < 1, and 0 otherwise.                    (7-1)


Let V_1 be the set of functions of the form

Σ_k a_k φ(2x − k),                    (7-2)

where a_k is a real number and k belongs to the integer set; a_k is nonzero for only a finite set of k. V_1 is the set of all piecewise constant functions whose supports are finite, where

discontinuities between these functions belong to a set,

{…, −1, −1/2, 0, 1/2, 1, …}.                    (7-3)


Note that V_0 ⊂ V_1 ⊂ V_2 ⊂ ⋯. The Haar wavelet function ψ is defined by,

ψ(x) = φ(2x) − φ(2x − 1).                    (7-4)

If we define W_j as the set of functions of the form

Σ_k a_k ψ(2^j x − k),                    (7-5)


then, it follows that










V_j = W_{j−1} ⊕ W_{j−2} ⊕ ⋯ ⊕ W_0 ⊕ V_0,                    (7-6)

where ⊕ denotes the union of two orthogonal sets.

The discrete wavelet transform (DWT) using a dyadic scaling is often used due to

its practical effectiveness. The output of the DWT traditionally forms a triangle to

represent all resolution scales. This form results from decimation (holding one sample

out of every two), and has the advantage of reduction in computational complexity and

storage. However, it is not possible to obtain representation with different scales at every

time instance with the decimated output. This problem can be overcome by a non-

decimated DWT [Aus98] which requires more computations and storage. The non-

decimated DWT can be formed in two ways: 1) the successive resolutions are obtained by the convolution between a given signal and an incrementally dilated wavelet function; or 2) the successive resolutions are formed by smoothing with an incrementally dilated scaling function, and taking the difference between successive smoothed data.

The à trous wavelet transform follows the latter procedure to produce a multiresolution representation of the data. In this transform, successive convolutions with a discrete filter h are performed as

v_{j+1}(k) = Σ_l h(l) v_j(k + 2^j l),                    (7-7)


where v_0(k) = x(k), the original discrete-time series. In its first introduction [She92], the filter h was defined as a B_3 spline, (1/16, 1/4, 3/8, 1/4, 1/16). Then, the difference between successive smoothed outputs is computed as

w_j(k) = v_{j−1}(k) − v_j(k),                    (7-8)









where w_j represents the wavelet coefficients. It is clear that the original time series x(k) can be decomposed as

x(k) = v_S(k) + Σ_{j=1}^{S} w_j(k),                    (7-9)


with S being the number of scales. The computational complexity of this algorithm is

O(N) for the data length N.

Note that the à trous wavelet transform does not account for a causal time series, where future data are not available in the present computation of the wavelet transform. To apply the à trous wavelet transform in such a case, the Haar à trous wavelet transform can be used [Zhe99]. The Haar à trous wavelet transform can be regarded as the merge of the non-decimated DWT (by the à trous wavelet transform) with the Haar wavelet transform. A difference of the Haar à trous wavelet transform from the original à trous wavelet transform is that h is now replaced by the filter (1/2, 1/2). For a given discrete-time series x(k) (= v_0(k)), the first resolution is obtained by convolving v_0(k) with h such that


v_1(k) = (1/2)(v_0(k) + v_0(k − 1)).                    (7-10)


And the wavelet coefficients are obtained by

w_1(k) = v_0(k) − v_1(k).                    (7-11)

For the jth resolution,


v_j(k) = (1/2)(v_{j−1}(k) + v_{j−1}(k − 2^{j−1})),                    (7-12)


w_j(k) = v_{j−1}(k) − v_j(k).                    (7-13)









Hence, the computation in this wavelet transform at time k involves only information at

k − l, with l being a nonnegative integer.

The Haar à trous wavelet transform can provide a set of features from the time series data. One possible feature set can be extracted from the decomposition described in (7-9), where the wavelet coefficients w_j(k) and the final smoothed series are selected. However, if we seek to associate the Haar à trous wavelet transform with the binning process for spike trains, the set {v_0(k), v_1(k), …, v_{S−1}(k)} can be translated into the bin count data with multiple bin widths. To yield the multi-scale bin count data using (7-10), we only have to multiply v_j(k) by 2^j such that

u_j(k) = 2^j v_j(k), for j = 0, …, S−1.                    (7-14)

Hence, the convolution output in the Haar à trous wavelet transform can provide the feature set related with binning. In the following models for BMIs, we will utilize the scaled convolution outputs {u_j(k)} for j = 0, …, S−1, or equivalently, the bin count data with different widths, as the input features.
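Equations (7-12) and (7-14) translate into a short causal routine. The sketch below is an illustration, not the dissertation's code; it assumes x holds the base-scale bin counts of one neuron, with zero-padding standing in for the unavailable past at the start of the series:

```python
import numpy as np

def haar_a_trous_counts(x, n_scales):
    """Causal Haar à trous transform of a bin-count series x.

    Returns [u_0, ..., u_{S-1}], where u_j(k) = 2^j * v_j(k) is the spike
    count in a sliding window of 2^j base bins ending at k (eqs. 7-12, 7-14).
    """
    v = np.asarray(x, dtype=float)
    scales = [v]
    for j in range(1, n_scales):
        lag = 2 ** (j - 1)
        past = np.concatenate([np.zeros(lag), v[:-lag]])  # v_{j-1}(k - 2^{j-1})
        v = 0.5 * (v + past)                              # eq. (7-12)
        scales.append(v)
    return [s * 2 ** j for j, s in enumerate(scales)]     # eq. (7-14)
```

For a series of 5ms counts, u_j(k) is then the count in a 5·2^j ms window sliding in 5ms steps, which is exactly the multi-scale binning interpretation used below.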

Multiresolution Analysis for the BMI Data

In order to apply the multiresolution analysis to the BMI data, we must choose the

suitable set of scales. Although it is not straightforward to determine a set of scales for

the Haar à trous wavelet transform of spike trains, we may take the characteristics of

neuronal data collected from our BMI paradigm into consideration for the determination

of scales. Basically, the smallest scale must be larger than 1ms because of the refractory

period of neuronal firing. Also, the largest scale may not exceed 1sec since it has been

reported that the past neuronal activity up to 1 second is correlated with the current










movement [Wes00]. In our experiments, we select eight scales starting at 5ms up to

640ms with the dyadic scaling: 5, 10, 20, 40, 80, 160, 320, and 640ms.

With the selected scales, the Haar à trous wavelet transform is performed on each

neuronal spike train in Aurora's dataset. Instead of performing the wavelet transform

directly on raw spike trains, we first generate the basic bin count data with a 5ms non-

overlapping window for every neuronal channel. Next, the Haar à trous wavelet

transform is applied to the 5ms bin count data at each neuronal channel, yielding the

convolution outputs v_j(k) for j = 0, …, 7 following equation (7-12). Each series v_j(k) is then multiplied by 2^j to generate u_j(k). An illustrative example of the generated u_j(k) at a specific time instance k_0 is presented in Fig. 7-1.





Figure 7-1. An illustration of the scaled convolution output from the Haar à trous wavelet transform; u_j(k) for a given spike train at a time instance k_0. The number in each box denotes the value of u_j(k_0) for j = 0, …, 7.

Note that the sampling rate of u_j(k) is 200Hz for any j. In terms of a binning process, u_j(k) can be interpreted as the bin count data for a given spike train with a 5·2^j ms


The minimum 5ms scale is chosen by empirical observation such that the bin count data is significantly different from raw spike trains containing 1's and 0's. However, it must be remarked that a more rigorous procedure for choosing the minimum scale may be necessary in a future study.










time window that slides over time in steps of 5ms. Therefore, u_j(k) with a larger j will contain more overlap between successive bins, u_j(k) and u_j(k−1). Such overlaps will then yield smoother temporal patterns of u_j(k) with larger j.

The top panel in Fig. 7-2 demonstrates an example of u_j(k) of a specific neuron for a 5-second period. u_j(k) for each j is normalized to have the maximum value of 1. Darker pixels denote larger values. The set of u_j(k) are temporally aligned with the associated hand trajectories plotted in the bottom panel. In order to view the correlation of u_j(k) with the movement for each j, u_j(k) is separately plotted on top of the hand trajectory (the x-coordinate) in Fig. 7-3 (both u_j(k) and the hand trajectory are scaled to be in a similar dynamic range for visualization purposes). It demonstrates that u_j(k) with larger j is more

Figure 7-2. (Top) The scaled convolution outputs u_j(k) of a specific neuron, aligned with scale indices over a 5-second duration. (Bottom) The trajectories of hand position and velocity for the x- (solid) and y- (dotted) coordinates.










correlated with the hand trajectory than with smaller j.




Figure 7-3. The demonstration of the relation between the neuronal firing activity representation at each scale (solid lines) and the hand position trajectory at the x-coordinate (dotted lines).

The Analysis of the Linear Model Based on the Multiresolution Representation

For the further investigation of the relationship between the multiresolution

representation of neuronal firing activities and target reaching movements, we develop a

linear model using u_j(k) as inputs. A discrete-time series u_j(k) for each j is normalized to

have zero-mean and the unit maximum magnitude such that a model can avoid biasing to

the larger-scale inputs. 185 neurons with 8 scales yielded the input dimension of 1480.

The multiresolution representation for the 320-sec training dataset (containing 320 × 200 = 64,000 samples) generates an input data matrix X (64,000 × 1,480), where each row

represents the input feature vector at a given time instance. Then, a linear model is

designed to predict the desired response (the x-, or y-coordinates of hand position or

velocity) vector d (64,000 × 1) with the linear combination of X such that










d = d̂ + e = Xw + e,                    (7-15)

where w is the model weight vector and e is the error vector. Note that the desired

responses are normalized to have zero-mean so that the estimation of the y-intercept is

not necessary.

Learning the model weight vector w can be achieved by a variety of methods.

However, we must consider regularization in this model due to the very high input

dimensionality (>1,000). We have introduced several regularization methods in chapter 4.

Among those techniques, the L1-norm based algorithm may be suitable since it generates

a sparser model and enables the selection of input variables, which is useful for the

analysis in neuronal population. Here, we utilize the LAR algorithm which learns w by

the stagewise selection of input variables with constraints on the L1-norm of w. Recall

that this algorithm is based on the assumption that the input channels (or columns in X)

are not linearly dependent on each other.

To determine the threshold for the L1-norm of weight vector in the LAR algorithm,

we utilize the hold-out cross-validation. We hold out the last 10% of the training data as

the validation set. The threshold is determined by minimizing the MSE for the validation

set. The LAR algorithm stops learning when the L1-norm reaches this threshold.
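The effect of the L1 constraint can be illustrated with a simpler solver. The sketch below uses iterative soft-thresholding (ISTA), a stand-in rather than the LAR algorithm itself; the data shapes and the penalty value are chosen purely for illustration:

```python
import numpy as np

def l1_regularized_fit(X, d, lam, n_iter=500):
    """Sparse linear fit via iterative soft-thresholding (ISTA).

    Not the LAR algorithm, but it converges to the same kind of
    L1-penalized solution: small weights are driven exactly to zero,
    which is what enables input-variable selection.
    """
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2           # safe gradient step
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - d))           # gradient of 0.5*||Xw - d||^2
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w
```

On a synthetic problem with only a few informative columns, most entries of the fitted weight vector come out exactly zero, mirroring how the LAR-trained model selects a subset of the multiresolution input channels.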

The LAR algorithm selects a different subset of input channels for each desired

response (there are four responses including the x-, and y- coordinates of hand position

and velocity). From the trained weight vectors, we select neurons that have nonzero

weights for at least one scale (recall that there are eight scales per neuron). Then, we


2 Although a more thorough analysis must be executed, we can empirically test whether the rank of X is equal to the number of channels. The empirical results show that, at least for the matrix X used in this study, the input channels are not linearly dependent.









examine the distribution of the selected neurons over multiple cortical areas. The number

of selected neurons and its portion for each area are shown in table 7-1 (see table 2-1 for

the description of cortical areas in Aurora's dataset). In this table, we can observe that

more neurons are selected in the case of predicting velocity. Although the biological

analysis of this result must be complemented, it might be caused by the fact that the

trajectory of velocity changes more rapidly than that of position, thus requiring finer resolution inputs.

Table 7-1. The number of the selected neurons in each cortical area.
            PMd       M1        S1        SMA       M1 ipsi.
Position-x  18 (27%³) 27 (47%)  9 (24%)   7 (37%)   2 (40%)
Position-y  20 (30%)  26 (39%)  15 (39%)  7 (37%)   0 (0%)
Velocity-x  50 (76%)  46 (81%)  30 (79%)  15 (79%)  5 (100%)
Velocity-y  40 (61%)  42 (74%)  30 (79%)  13 (79%)  3 (60%)

Figure 7-4 describes the selection results for each desired response. The black

pixels denote the selected variables aligned in neuronal space (x-axis) with the scales in

the y-axis. These graphs show that LAR prefers selecting inputs with larger scales since

the temporal trajectories of larger scales exhibit more correlation with movement as

shown in Fig. 7-3.

Comparison of Models with the Multiresolution Representation

We now seek to answer the following questions;

* Can the multiresolution representation of the neuronal firing activity improve the
prediction performance of decoding models for BMIs compared to the single
resolution representation?

* If so, how much does it improve performance?

Two linear models are designed with different input datasets; the first model

receives the single resolution data, i.e., the bin count data with a fixed width window of


3 The ratio of the number of selected neurons to the total number of neurons.












Figure 7-4. The distribution of the selected input variables for (a) the x-coordinate and (b) the y-coordinate of position, and (c) the x-coordinate and (d) the y-coordinate of velocity.

80ms as inputs and the second model receives the multiresolution data with eight

resolution levels (scales) from 5ms up to 640ms⁴. Normalization and embedding are

applied to every channel in both inputs (single resolution and multiresolution input data);

each input channel is normalized to have zero-mean and the unit maximum magnitude

and a 6-tap time delay line is used to embed the bin count data at each channel. This

embedding results in a 1,110 (6 × 185)-dimensional input space for the single resolution model and an 8,880-dimensional input space for the multiresolution model, respectively.
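The tap-delay embedding used for both inputs can be sketched as follows (the function name is an assumption); with 185 channels and 6 taps it reproduces the 1,110-dimensional single resolution input, and with 8 × 185 multiresolution channels the 8,880-dimensional one:

```python
import numpy as np

def tap_delay_embed(X, n_taps):
    """Embed each channel with an n_taps delay line.

    X: (n_samples, n_channels) array of (normalized) bin counts.
    Returns (n_samples, n_channels * n_taps); missing past samples at the
    start are zero-padded so rows stay aligned with time indices.
    """
    n_samples, n_channels = X.shape
    out = np.zeros((n_samples, n_channels * n_taps))
    for tap in range(n_taps):
        out[tap:, tap * n_channels:(tap + 1) * n_channels] = X[:n_samples - tap]
    return out
```

Each row of the result concatenates the current sample with the previous n_taps − 1 samples of every channel, which is the structure the linear models above are trained on.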

The same training dataset as above (320-sec data) is used for both models. However, the

number of training samples is different between models since two input data are binned

with different windows: the single resolution data are generated by binning with an 80ms

non-overlapping window, yielding 4,000 samples for 320 seconds, and the



4 The 80ms bin width is chosen since it belongs to the set of scales. This means that the single resolution representation can be seen as a special case of the multiresolution representation using only one scale.