
Point Process Monte Carlo Filtering for Brain Machine Interfaces

Permanent Link: http://ufdc.ufl.edu/UFE0021935/00001

Material Information

Title: Point Process Monte Carlo Filtering for Brain Machine Interfaces
Physical Description: 1 online resource (169 p.)
Language: english
Creator: Wang, Yiwen
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2008

Subjects

Subjects / Keywords: brain, carlo, estimation, interfaces, machine, monte, point, process, sequential
Electrical and Computer Engineering -- Dissertations, Academic -- UF
Genre: Electrical and Computer Engineering thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Brain Machine Interface (BMI) design uses linear and nonlinear models to discover the functional relationship between neural activity and a primate's behavior. The fine time resolution contained in spike timing cannot be captured by traditional adaptive filtering algorithms, and its loss might exclude useful information for the generation of movement. More recently, a Bayesian approach that models the observed spike times as a discrete point process has been proposed. However, it includes the simplifying assumption of a Gaussian-distributed state posterior density, which in general may be too restrictive. In this dissertation we proposed a Monte Carlo sequential estimation framework as a probabilistic approach to reconstruct the kinematics directly from the multi-channel neural spike trains. Sample states are generated at each time step to evaluate the posterior density recursively and more accurately. The state estimate is obtained easily by reconstructing the posterior density with Parzen kernels and taking its mean (called collapse). This algorithm is systematically tested on a simulated neural spike train decoding experiment and then on BMI data. Implementing this algorithm in a BMI requires knowledge of both the neuronal representation (encoding) and movement decoding from spike train activity. Due to the on-line nature of BMIs, an instantaneous encoding estimate is necessary, which differs from current models that use time windows. We investigated an information-theoretic technique to evaluate a neuron's tuning, the functional relationship between the instantaneous kinematic vector and neural firing in the motor cortex, using a parametric linear-nonlinear-Poisson model. Moreover, mutual information is utilized as a tuning criterion, providing a way to estimate the optimum time delay between motor cortical activity and the observed kinematics.
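The encoding step described above can be illustrated with a minimal sketch: a linear-nonlinear-Poisson neuron driven by a kinematic signal, with mutual information computed across candidate lags to pick the optimum delay. This is a hypothetical illustration on synthetic data, not the dissertation's implementation; the weights, bin width, and histogram-based MI estimator are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-nonlinear-Poisson (LNP) model: the conditional intensity is an
# exponential function of a linear projection of the kinematic vector.
# Weights and baseline below are illustrative assumptions.
def lnp_rate(kinematics, weights, baseline=0.5):
    """lambda(t) = baseline * exp(w . x(t))"""
    return baseline * np.exp(kinematics @ weights)

T = 5000
x = rng.standard_normal((T, 2))          # synthetic kinematics (e.g., velocity)
true_w = np.array([0.8, -0.4])
lam = lnp_rate(x, true_w)
spikes = rng.poisson(lam * 0.01)         # spike counts in 10 ms bins

# Coarse histogram estimator of mutual information between spiking and one
# kinematic component, evaluated at different lags.
def mutual_info(a, b, bins=8):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

lags = range(20)
mi = [mutual_info(spikes[lag:], x[:T - lag, 0]) for lag in lags]
best_lag = int(np.argmax(mi))            # lag with the strongest tuning
```

Using MI rather than a correlation coefficient as the tuning criterion captures nonlinear dependencies between firing and kinematics, which is why it can serve to rank candidate delays.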
More than half (58.38%) of the neurons' instantaneous tuning curves display a 0.9 correlation coefficient with those estimated from the temporal kinematic vector. With the knowledge gained from the tuning analysis encapsulated in an observation model, our proposed Brain Machine Interface becomes a problem of sequential state estimation. The kinematics are reconstructed directly from the state of the neural spike trains through the observation model. The posterior density estimated by Monte Carlo sampling modifies the amplitude of the observed discrete neural spiking events through the probabilistic measurement. To deal with the intrinsic spike randomness in online modeling, synthetic spike trains are generated from the intensity function estimated from the neurons and utilized as extra model inputs in an attempt to decrease the variance of the kinematic predictions. Augmenting the Monte Carlo sequential estimation methodology with this synthetic spike input further improves the reconstruction. The current methodology assumes a stationary tuning function for the neurons, which might not hold. The effect of tuning function non-stationarity was also studied by testing the decoding performance on different segments of data. Preliminary results on tracking the non-stationary tuning function with a dual Kalman structure indicate a promising avenue for future work.
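The decoding loop sketched in the abstract — propagate sample states, weight them by the point-process likelihood of the observed spikes, and collapse the posterior to its mean — can be written as a minimal particle filter. This is a simplified sketch, not the dissertation's code: the random-walk state model, the exponential tuning weights, and the plain weighted mean in place of a full Parzen-kernel reconstruction are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def decode(spike_counts, tuning_w, n_particles=500, dt=0.01, q=0.05):
    """Monte Carlo sequential estimation of kinematics from spike counts.

    spike_counts : (T, n_neurons) array of binned spikes
    tuning_w     : (state_dim, n_neurons) assumed LNP tuning weights
    """
    T, _ = spike_counts.shape
    state_dim = tuning_w.shape[0]
    particles = rng.standard_normal((n_particles, state_dim))
    estimates = np.zeros((T, state_dim))
    for t in range(T):
        # State transition: random-walk kinematic model (assumption).
        particles += q * rng.standard_normal(particles.shape)
        # Observation model: Poisson likelihood of the spike counts given
        # each particle's conditional intensity.
        lam = np.exp(particles @ tuning_w) * dt
        log_w = (spike_counts[t] * np.log(lam) - lam).sum(axis=1)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Collapse: posterior mean (a Parzen-kernel density estimate of the
        # posterior would smooth between particles here).
        estimates[t] = w @ particles
        # Resample to avoid weight degeneracy.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return estimates

# Usage on synthetic data generated from the same assumed tuning model.
w_true = np.array([[1.0, -0.5], [0.3, 0.8]])   # 2-D state, 2 neurons
states = np.cumsum(0.05 * rng.standard_normal((200, 2)), axis=0)
counts = rng.poisson(np.exp(states @ w_true) * 0.01)
est = decode(counts, w_true)
```

Because no Gaussian form is imposed on the posterior, the same loop handles the multimodal densities that a Kalman-style point-process filter would approximate away.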
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Yiwen Wang.
Thesis: Thesis (Ph.D.)--University of Florida, 2008.
Local: Adviser: Principe, Jose C.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2008
System ID: UFE0021935:00001


28833 F20101111_AAAOYM wang_y_Page_018.QC.jpg
b81378b137d7882733d1e19270a2e565
a0ae1a5e17625ef3ca660e235a23e38593d016e7
2588 F20101111_AAAOXX wang_y_Page_134thm.jpg
a58fdfd511d7a72bc1d410616671e4b3
c2c1500e7c44a6db256d034fb09c53107c16c090
123250 F20101111_AAAOAQ wang_y_Page_117.jp2
6639f12fa4d500f4a81b96e9486a52d5
3da5f4af7e2afd7bf61bd235eff9549ee7424a20
F20101111_AAANVK wang_y_Page_154.tif
724ed5bbb188da593df758c7d7a9f723
9407f72a534ac9b7cdb52ebb0d11fc80e5e03175
6354 F20101111_AAANUV wang_y_Page_150thm.jpg
e12e6c80695ba7e65b7355351e356300
873bfc84f0c66b459c61affa26ed42288debf941
116806 F20101111_AAAOBE wang_y_Page_100.jp2
b702f16c0e9bba80f64d2d6ced8f1271
a7a6c03cbe8c99c5d8e6516d3dd78d896b5f386f
21361 F20101111_AAAOZB wang_y_Page_037.QC.jpg
ee59455ceb43c29fa1e791880ac04fd1
109dc969d613d80093f8b659ad3d460bb1ec95e5
6395 F20101111_AAAOYN wang_y_Page_019thm.jpg
36e45e2ff7c763b98dcb667140f6e372
0bb864e09fa158a00377e322d16c5d3b0af6d87d
7438 F20101111_AAAOXY wang_y_Page_087thm.jpg
a9655e49cb967e17eb40a1bbd8b4db2c
e98719eb4db229f6b2575fad4f41b8ffea4080ec
100552 F20101111_AAAOAR wang_y_Page_105.jp2
c12c9c0a178a0bddd8ab321ce5e3c9f6
465dcb8acd64f3bb87b5ec60da4f6ebeaceb6039
1051970 F20101111_AAANVL wang_y_Page_006.jp2
22576841f1110cd2b438261f40216908
6981d694f6483ff77988fbb1b4a619e8cbe62b1d
54974 F20101111_AAANUW wang_y_Page_016.pro
b086067829ab9b2356576cf5ef592fa0
500de2e74f8f1a01a1259fb9be5b311bd8f54eba
6538 F20101111_AAAOBF wang_y_Page_040thm.jpg
ab8f14c7ed79b1bc1da13ee986110f4e
ef538f681d53e30069adef21523913f211620685
F20101111_AAAOZC wang_y_Page_037thm.jpg
4e011fe058331561d639e574199d9c5a
3c4f51e67621bd4d59b8330c8b0431ab8c9f67fd
249678 F20101111_AAAOXZ UFE0021935_00001.xml FULL
458a8fa6fc886d12af13732ea2e80df2
12bbc3cc9eaec4afbf4bd55a38b15f71660ddfab
F20101111_AAAOAS wang_y_Page_057.tif
531f65ee181cba7f1d26f1173c8a5816
fb0d8ae5e63a1a123a8b83622f2749a750a3f6a1
117333 F20101111_AAANUX wang_y_Page_016.jp2
1b6f0a3c566dce58683ad8736edcb3da
2420d6f0c87af4bdf878ee3060e6329f06f71b0a
6549 F20101111_AAAOBG wang_y_Page_018thm.jpg
913d4ac29a121e8c3850183285e17c63
82cb9a473ac64a5f0dbf87091542e8e3d08678be
118847 F20101111_AAANWA wang_y_Page_052.jp2
60b3d4944f46ed5992ac591d88d94233
d6beda92e0ae4995f78eb6b2ca757ab421823c37
19495 F20101111_AAAOZD wang_y_Page_038.QC.jpg
f4d18bfad0300aaaea85837a8e73404a
5ccd47d0acb4fa627237ee4ccf16096c30cf006c
6835 F20101111_AAAOYO wang_y_Page_020thm.jpg
0fe489938aa7b372e737470fb961fac4
50260beb9690de41cc6069092ed2f6b09f41f38b
F20101111_AAAOAT wang_y_Page_162.tif
fbb41d5d89a5c49ef9a43a83720fa5c6
4559f4365893f2882612125607e02ba39241c442
49969 F20101111_AAANVM wang_y_Page_121.jpg
6837791d279b241e182ac22cb3605c59
6ac89850a47a376539199e27175e76b1de1bcc58
5238 F20101111_AAANUY wang_y_Page_034thm.jpg
23cdac8fbdfca9eddb36c1e2eb875c5b
da3ffcd27e70c7542b259ce09324d82f20b0297b
20831 F20101111_AAAOBH wang_y_Page_076.QC.jpg
c439f85722cf22a4bda61aa2419d3826
fa8d60407d75843673d2990d2f16eb2cc16c17a3
51034 F20101111_AAANWB wang_y_Page_112.pro
44242181e43b95fb2f41c04fb455b11e
f14e9e99d00cefaf5f78217c85506b0a328b8efb
23722 F20101111_AAAOZE wang_y_Page_039.QC.jpg
33b7b9ab8aa862ed440560eee765e62b
4f2a5d5b2cca9eaef08ada817807ac3732f513a1
27472 F20101111_AAAOYP wang_y_Page_022.QC.jpg
373f03b5dd705acc2d169a6595e41fd7
2932bb704ad752cf1ade592138cba11d9761bccc
18202 F20101111_AAAOAU wang_y_Page_034.QC.jpg
b297e5e4bdc06ae2a53c57767033bba5
75d652286ed094893b87ac058cfdeb1aad391035
86266 F20101111_AAANVN wang_y_Page_119.jpg
4f89249ca93c6c0580d5a2b780998772
5ff2de08d84143c24f71bde0ac10c37b2557ad11
F20101111_AAANUZ wang_y_Page_131.tif
462792286507801d35a200890e01a2af
39d3987f906a3ea78078a96aa3375fb4ff14e74f
2030 F20101111_AAAOBI wang_y_Page_067.txt
6b833ca26f357f389d6b101b854bbecc
62103e860a17098fc1d18a7db1cbc2ee831c2463
83220 F20101111_AAANWC wang_y_Page_075.jpg
444ce5599546d3c601229f518be4e3f0
caf4ebe7370889bce79f809b968b20fa6c84eff4
6116 F20101111_AAAOZF wang_y_Page_041thm.jpg
6c88a564cb3f168140fa4aef784b2cf6
81747b2d0d250ae507589e16f4df13e1ed452b30
6394 F20101111_AAAOYQ wang_y_Page_022thm.jpg
d66117571b147ddc9463177e4bfd8ea2
87930077a4d18a147a04ddd76b0d8b9d1b842335
26113 F20101111_AAANVO wang_y_Page_061.QC.jpg
d4801007969545adac735fe8ddeecb12
e75789a89bf319f19c72c3822790aecaa5438922
35504 F20101111_AAAOBJ wang_y_Page_045.jpg
2c81b5fcc1934531c3d363d16d6c7750
12168739a9c317d1b35fc35233d1c5e8bdfe0e9a
83841 F20101111_AAANWD wang_y_Page_061.jpg
3ef1625b80e74b3bd1bfe84acfe95eec
3e760df50d3af86130bf7100db6a90a89dea4057
F20101111_AAAOAV wang_y_Page_155.tif
a93eacc17971dd0f6aaf6a34e02d2763
0d7995df01a9d57b7a22ff4dd06db5a35775c602
6880 F20101111_AAAOZG wang_y_Page_043thm.jpg
e49da9e63ebc43ed907477ed60b805c7
a9ae13d06609f541b6dbe1e5a2f35e32ffb0c2f7
6544 F20101111_AAAOYR wang_y_Page_023thm.jpg
79b96f8c270957024e79783bb0d26e18
2d495fefee8609720b6dad4a4cc0c718ea2e997f
20784 F20101111_AAANVP wang_y_Page_106.QC.jpg
3174fbf82d72047f76c4c9a6b4797b37
5d0be5ef34be1750ed96f9180330094a0a6c0dc9
6393 F20101111_AAAOBK wang_y_Page_015thm.jpg
d667424f836453c610f8eacea5852085
eb81fcecd59378ce1dbc99728ac66792491f549a
12199 F20101111_AAANWE wang_y_Page_137.pro
c498012879c1ec0524e1923867a74ea3
5e13e0bb894e009f64e175dc00f4a1f682fffee8
22200 F20101111_AAAOZH wang_y_Page_044.QC.jpg
f3c6ac1875b194b570d63056215bf390
f730f7052845ace1a31047b063b8349651dee549
6629 F20101111_AAAOYS wang_y_Page_024thm.jpg
dd9ad3267d18826567c351dce7c8b886
5d63881a3fd42de7beff664aa429e4e68cae8cd1
50641 F20101111_AAANVQ wang_y_Page_119.pro
2e1198f22c92ab5c9bbf3851004cf439
3999ab1667c137620054568ba13f0fa759756728
111446 F20101111_AAAOBL wang_y_Page_032.jp2
0e64c18ae967f68db9e094cb2d172a93
1ae9045444fa5fc5f70c03acea1f5bc7aaa735e7
3733 F20101111_AAANWF wang_y_Page_129thm.jpg
31735150bd0483cd8e4b928efd5328cb
836781caef406e27bafc20e4b6abc60e6bca841c
F20101111_AAAOAW wang_y_Page_051.tif
3145f372527c2b441204ae81e6d7222f
cdf2d3166662c09826a394d124e13a53d1dad512
13908 F20101111_AAAOZI wang_y_Page_046.QC.jpg
0602a4db233bac26935aa39964aca720
ce94c45ccbb94757b13738d7b572c7aca4e24617
6431 F20101111_AAAOYT wang_y_Page_026thm.jpg
5f321950031a77aef05daf3ab508fddb
301b760df0d6edcc43d792fdbfeac6198974e017
85348 F20101111_AAANVR wang_y_Page_006.jpg
1942c74d542dc67bb5498d7f8f304299
1696197403a5f9991544a71a154bfe0461ccb813
122427 F20101111_AAAOCA wang_y_Page_165.jp2
25475bb7fd69afee4bfb054989c237f6
ed08a12a22a7d635113cf7e318845ca6b1fc257e
20338 F20101111_AAAOBM wang_y_Page_001.jpg
07b31325d6ee3fdd092c980aa51bc5d7
b97f550e3f0326f2c414d9ccd1eb2fa1c31326bd
108443 F20101111_AAANWG wang_y_Page_091.jp2
a788da570286b95b7e937643da1d6c67
46fa0220dd0813cb594e95f26bd750f98a228123
67550 F20101111_AAAOAX wang_y_Page_135.jpg
597fab4c353a1eb591cff1f8675bb18b
e6fa99ef854ead79e0e9a2487539f1d15e551a16
6535 F20101111_AAAOZJ wang_y_Page_051thm.jpg
453b0fd2ae95d3b339f755b38ce21386
2ad2aa065d3ea6b4af703835cc71e42591e358a5
4349 F20101111_AAAOYU wang_y_Page_028.QC.jpg
4c9507561d01b65b74f7e484c7543cf3
5c3199cd05e5985dce993b1a8902a4811b4cb044
110270 F20101111_AAANVS wang_y_Page_035.jp2
38e901d4f322b09486cdab15b2369dee
901f17ea2aa036193de289b5a1df3c20e58295aa
28680 F20101111_AAAOCB wang_y_Page_104.QC.jpg
abecf5233d2f50a37e4c990d4e976ef7
77fc96fc58b21eb7b165c32615efd67fca91b2cd
58207 F20101111_AAAOBN wang_y_Page_036.jpg
f4c4bcf830a11ee217ca8563f540178a
c768af9bc33a88fe739d3f1be766cafc3cf2d07a
2093 F20101111_AAANWH wang_y_Page_024.txt
342f79f5ed4ab8066804bf6009e52172
3bcbc4e810cf31a20e6a8d2f0ba7df801dcc96b5
116695 F20101111_AAAOAY wang_y_Page_102.jp2
f760c0adbebb3cd1b38a0dfc539e87cb
3706e28fa05e2eff1cb93ebc3ce94373a9592c41
6725 F20101111_AAAOZK wang_y_Page_052thm.jpg
b0f2e09f5bf5cc99be662c92ae3f5f35
e58895f61f4ef7a62364847af5d1b48378546ed6
1179 F20101111_AAAOYV wang_y_Page_028thm.jpg
5f023c1c52c8df4cee657e7b61d48aa9
255f368c9a93e015b16f6d438bf1aeaec6ea172e
76252 F20101111_AAANVT wang_y_Page_128.jpg
6d3d86d39c01af424a82e25314689e42
7a8bc41e6e19bc1567331573bb373dfee9c78ea8
27342 F20101111_AAAOCC wang_y_Page_013.QC.jpg
e942850e3c8079ad2bdc68692c91dbbc
c22d6ad544150dc68758dcf41dac2aa08837eceb
1978 F20101111_AAAOBO wang_y_Page_095.txt
52819b9dc7aeb55ab306f05fb035b286
016dfca4ad278065fc9946268c69423b364723c7
2216 F20101111_AAANWI wang_y_Page_098.txt
b7df60081b0aaf7dfad3cf38512a5b38
ec9891eebea9fa33000d5dfd47c802154de1931a
24003 F20101111_AAAOAZ wang_y_Page_120.pro
8435528f92afbe1b2adbbab0884e5834
05568a3f116ba264a0d8fad0fd1972a6e59f0657
29260 F20101111_AAAOZL wang_y_Page_054.QC.jpg
fff62ee38354d76c43ab5c101e4f34b7
47bc00b09b85a483b7f9ab3223cfea93cdd898b0
15049 F20101111_AAAOYW wang_y_Page_029.QC.jpg
579e1b5eeb84ed0e72a2ad9f7b5b4233
e785018dce89fbfe534b5429f4f8bf7b6768cc52
88176 F20101111_AAANVU wang_y_Page_025.jpg
2ca5ad21dd84430397089ba0947f850a
efd05f148c8c714d0fe3b5bdc6f63cf60879b74b
89214 F20101111_AAAOCD wang_y_Page_016.jpg
e33cb6ad4b57d89956a83d08de1bc12d
5a04cf1181a41aca8891719ac58e81565470ff63
23567 F20101111_AAAOBP wang_y_Page_058.QC.jpg
0d2840440e2696627d95dc755643ee5e
ce002f454a8479fa9405d11a9bd6daa6d11ce830
89325 F20101111_AAANWJ wang_y_Page_073.jpg
277bbcb4188b20f83f6cd72f5ea6dd0f
6ce733de5b1f89e003db49afd20c2246ec91aa17
25714 F20101111_AAAOZM wang_y_Page_056.QC.jpg
40d95b871d653a7bafc5b9dfee1ead03
5c1a9af0c384247453104814ffcc49a69eac52a8
4804 F20101111_AAAOYX wang_y_Page_029thm.jpg
67f7f22aacad2066ca24e940468696b7
91dc9fc95c5f9c17e1d9ca43798fcc9f9cb33dc5
2139 F20101111_AAANVV wang_y_Page_091.txt
429bcdbb0695612e72b93d229ddcc9fb
b9b4e4a0096be9362c8a86966666b161864d13c2
85356 F20101111_AAAOCE wang_y_Page_089.jpg
14278be11bd4d8181c661bfa45e3f8f7
00da2a355eeb26bb9ac106a35d4840b1eb5118af
F20101111_AAAOBQ wang_y_Page_101.tif
94086300ba2e2d2f3990377cef0ae73d
bb72cbb1ab841eb6633809aecd1d79785738b8d6
56278 F20101111_AAANWK wang_y_Page_130.jpg
2ad60d9b952d6f7e0fee68574599cbc8
342022812ff6263231673d8c9f966409a48d2d4a
27496 F20101111_AAAOZN wang_y_Page_060.QC.jpg
2a1402f3c0336e28a7493aeba2518c23
bd4196b21117563f228fd0646bab80c9ad182258
6742 F20101111_AAAOYY wang_y_Page_030thm.jpg
855c0b3bb5bb25dc71a9421a865d1d4b
e91389ff2a3c8b10e3e5f4500bc81e541bd8b665
34719 F20101111_AAANVW wang_y_Page_036.pro
0eb4f58f237fdda4fe17a5e5a6886e95
c2e70abb0f0fc194d05d7121a43ccd53ad9ce044
27817 F20101111_AAAOCF wang_y_Page_138.QC.jpg
7c82d7575ddb7ef9091ee3f301749d08
6a6446e0b88719e78eec5fe8367d28a2c4f6f098
2773 F20101111_AAAOBR wang_y_Page_162.txt
e66041c724765ec01b33cd2e6c6c5d0d
eff460055bcf7b41e630907f6cc574e2f7f49f97
28299 F20101111_AAANWL wang_y_Page_151.QC.jpg
1de803205424b9f6d234371d3f875560
aabdfec56138f9a92e0cddf80441b2464652c3a4
6653 F20101111_AAAOZO wang_y_Page_060thm.jpg
c964dae21ca6be42dbb11991aad609ba
c186029fe8ce501539b179808b1a05fa0685e714
30284 F20101111_AAAOYZ wang_y_Page_031.QC.jpg
17911d7a97e52acded5c60d7ac03981b
bc8596a1a1b8f2cd26814b92b0b403a74ae030ab
97831 F20101111_AAANVX wang_y_Page_008.jpg
3674d04c39352ba485d32045412a31df
6e82137d7b95a1a892c2183fa03093c8b35539b7
108775 F20101111_AAAOCG wang_y_Page_157.jp2
508d61f042e16899f06337b807020080
30c0e0a06a19f8e633bbaa8071808dd4c475c6ea
6857 F20101111_AAANXA wang_y_Page_099thm.jpg
bcfac6a72a13c7b711ca0f3503024161
888048574a1ded794af5f8f6425de495c9585234
F20101111_AAAOBS wang_y_Page_140.tif
a6d26182e8e76fe4fc64b8558e7b3277
027d9d74d71363e24bc59fdf503814daf2503574
71708 F20101111_AAANWM wang_y_Page_158.jp2
fd3e9a338ea382b149e59c55e7162fc2
a0f70f2a01cffb949a9a3df6000ec022e912a7dc
22832 F20101111_AAANVY wang_y_Page_005.QC.jpg
bf0337b8314abc5cfb934f85e5fce60c
70d309bca38effdb00d4fa981e5b347bad7323e1
14900 F20101111_AAAOCH wang_y_Page_077.QC.jpg
2aa49774558e8313af3f9eae04cd1f65
e93213ea4795dd08e3ba767e022a833d932f4b18
5337 F20101111_AAANXB wang_y_Page_044thm.jpg
caa0a2b3276d45b23f1841d7a00a0e09
f972739a43a5a1a155c15e202d3152d2df9ebb1c
3127 F20101111_AAAOBT wang_y_Page_123.txt
2107a334a2028ce9f575247e636e62fb
d4a46e7fac9e6da15bbdfa6d45bb2bcad29c08f2
5987 F20101111_AAAOZP wang_y_Page_062thm.jpg
120bf385a4b6ecb70d2f123e8fa8122e
55894acdac551583dc24b767986a57199e86470b
7092 F20101111_AAANVZ wang_y_Page_123thm.jpg
97f0ff549f33cd64e991f8c124e25f82
495454d40e453fb165cba715c632eaf2065cb755
26719 F20101111_AAAOCI wang_y_Page_096.QC.jpg
5c1a1cdc862b4e909ee9ee16b5e31cdf
e9d19bdea599f276b62b13e747d630c505484b53
62636 F20101111_AAANXC wang_y_Page_034.jpg
2fd6a797938be272f5224ffff2042d03
a1d06d02c79b5a2f20a8ac03c4c4afabfca69b28
110761 F20101111_AAAOBU wang_y_Page_110.jp2
d6578df45d6b15b643e70315905f23b5
15ae21a03cfc3cbfc11bb86b8a95018cf5486d62
2054 F20101111_AAANWN wang_y_Page_072.txt
ba3e69be171e1b47fee0d53daa5cf7d1
9dc3b247dfdd918386694093ab325cb22faf33d3
6609 F20101111_AAAOZQ wang_y_Page_063thm.jpg
94d56f70149fc756a23ccff113d6d462
8aa51d57b75b8f5b839e59a62ddf6b33046052ce
91151 F20101111_AAAOCJ wang_y_Page_147.jpg
64521ae9b666bac511a327bd8c8c9368
e41e09364c6bcd21b0d1a8aa3b560d99528477e8
50053 F20101111_AAANXD wang_y_Page_095.pro
c18b330475381f0b3be0137d03f14bf5
771c8f0ee615cd194cce982b1d680d7a8051b6ca
737882 F20101111_AAAOBV wang_y_Page_029.jp2
eff5ff47bbfd68bf2490607427bca9d4
08f919221b018054b8251a5e974fe8d2d84aad0a
87263 F20101111_AAANWO wang_y_Page_071.jpg
40de257dffaec1ed18158b749f8e0524
b0f48a856ee472e6f1e0539be87079116412d556
27738 F20101111_AAAOZR wang_y_Page_065.QC.jpg
8daa06e9233146084185cd1bc0b34ac8
26b23ad6c59083a103b6049d883be1f29efe0222
21263 F20101111_AAAOCK wang_y_Page_124.jpg
fc30974933eb1b84a05f678f8b503608
a761bbaa498f2cb90f9e224f6975c32d607ca1a6
52000 F20101111_AAANXE wang_y_Page_009.pro
4934ca59a8b8a8aa9182a35a6e8886ad
e4727e39b9ddfce54636fcdd56e8c98c2bbe2367
51597 F20101111_AAAOBW wang_y_Page_091.pro
9fdfd0e1045dc2514a1b99b282e112da
5cdee1c8bae75ad77589ef9be038481d240ed126
80055 F20101111_AAANWP wang_y_Page_113.jpg
c10483bb104b37de17d7e633fd3d75cc
f08191fb6c9d85e423889b251a8341403868bd93
22189 F20101111_AAAOZS wang_y_Page_067.QC.jpg
a6ca69ecab8d98a7063f4dda5418b581
22af2cc297443a3a9c04aa868e997f15db7bbd6a
F20101111_AAAOCL wang_y_Page_166.tif
07c0c02d83e896e461a5f522025252ac
2b64d546abd743289cbf91f5d0f68b1a2855c9d2
2192 F20101111_AAANXF wang_y_Page_021.txt
3852c0e5680657c5095464021dea204a
da6e47b45da58168628e3675fc5cf3f0f41b30dd
55110 F20101111_AAANWQ wang_y_Page_151.pro
6eef992d8054e72a1e34f2ba0720f4da
13ef5143579f5527937005d4b8faaab3a74a7a15
6001 F20101111_AAAOZT wang_y_Page_067thm.jpg
d85b26c0df41b20eb50383ce058537cb
c0163a7ff7574242b0e4cc03a7a044289cb36531
6676 F20101111_AAAOCM wang_y_Page_102thm.jpg
82fbb5cbfcb5b6812e0d51be3635c162
78a3ab8a62681b26b011ae1887b2a3fb93b0f05a
25710 F20101111_AAANXG wang_y_Page_145.QC.jpg
397e34e556163321342eceb38fce729b
158876fe9f40c00a2fb81a56f142b58bde8e404e
F20101111_AAAOBX wang_y_Page_157.tif
c0927bcc34cecab08b5757804134e0d7
461ba5c467288c881efd8c5419a9288c9655a737
1622 F20101111_AAANWR wang_y_Page_076.txt
a13b7538dc63695531485dbbc2d3d4df
5a91bdf3e1ca8d14a918f3cbb45122e26d946a80
2999 F20101111_AAAODA wang_y_Page_003.jpg
bb3bb5730ce05f5c507c51a11d8b2ac6
a0b222ba284bb50b428e8ff3902cfe91c973ddd8
25019 F20101111_AAAOZU wang_y_Page_069.QC.jpg
0577318a4fb05eed942c9bf853718ade
10860d400a6d75c40ab37661fcad183bf0a9be7d
F20101111_AAAOCN wang_y_Page_092.tif
f834bee160060ba12ed457ad42869f89
da38e832df1f6bd90ad8204047e1ee19acba0f7a
98671 F20101111_AAANXH wang_y_Page_005.pro
3f1ae41674bbee123d1de6eacd87b715
8123f9fe1ced884f67051d4fed5584977cc5caae
39716 F20101111_AAAOBY wang_y_Page_078.jpg
6c1c8473a82bf655d9db46b0649881fa
b687702eb9846413fff68a2158b25440176a1f46
26560 F20101111_AAANWS wang_y_Page_026.QC.jpg
b6c45f0d2ed65a8d84e177447b37b128
f597e5dd7a1794fa9d0521e4b8e7e20386688cf3
50954 F20101111_AAAODB wang_y_Page_004.jpg
a280c2b7f7c6e0ec49c680c1d36bfd98
e8a904b47bae65f517b867691f699b53f7866982
6410 F20101111_AAAOZV wang_y_Page_070thm.jpg
15597714f59e08e5e7b226744e5b8afb
1e7ab2edcd16eafb568a4f4025ca59d76f835e48
5746 F20101111_AAAOCO wang_y_Page_108thm.jpg
cf8d5dacfb3a531c85adc3785c41c50a
9f8b74e7edd0d724939ca53f9bc226c679dad920
52719 F20101111_AAANXI wang_y_Page_022.pro
ba817f91a1d840d653fdd846d84455d0
fd0a6c1f59fed53c3c83631418f94e646ce9ce2b
56618 F20101111_AAAOBZ wang_y_Page_144.pro
9eb2e752d98d1872cddb00c2bc463b84
f8b6b675b927d0dbcfc57886621b9e9e3aafd62a
2002 F20101111_AAANWT wang_y_Page_119.txt
a885c7f4cb72da17b01d91e9a201f8a8
6ab3914cb090cfa76b80860d9f34e609d226d490
103309 F20101111_AAAODC wang_y_Page_005.jpg
849e57e80c81a7b91f7ec93e0ff2c1a4
b81b6f4d396179a2d267a060e3c56921870fea87
6303 F20101111_AAAOZW wang_y_Page_072thm.jpg
73ad715d5707adf7ef6349fe8723ea79
3106dec2c8df6ce00782c4798a603c5551898780
28686 F20101111_AAAOCP wang_y_Page_066.QC.jpg
a78a45e789662f28d11298866fe54194
abd027911bfee7d25370f2566ba7dab1faf3f9f2
64386 F20101111_AAANXJ wang_y_Page_161.pro
bd2a8edd8bbd39d975c4d216e9d59b62
5e71698f0ef8cb6717d1f50e5968c3acbc7241cf
88923 F20101111_AAANWU wang_y_Page_024.jpg
a4b23f4152dc06b3680b4546418555bc
50ceb626b1fee9d24255b42f784845701b34157b
95936 F20101111_AAAODD wang_y_Page_007.jpg
599b7b3e6e4c7030188ac7e55c7a33b3
4501b0f1a40babcface1ea23496714c0aeeebd10
28140 F20101111_AAAOZX wang_y_Page_074.QC.jpg
33c22ea7f1bbadbb043cf5c9be4165aa
e0919a17fcefa936781eec782bca0debefaed168
1648 F20101111_AAAOCQ wang_y_Page_160.txt
6a2b089e38ee3acb9814158a50286435
9e141fad8a020525e7233f6c93244a22ae46d8e3
3128 F20101111_AAANXK wang_y_Page_160thm.jpg
56852373ad8a1b237ba8487ad9013247
eb7dc95406f7700d3957bf7ef0fdceac1ac957f8
6722 F20101111_AAANWV wang_y_Page_142thm.jpg
452f794f95354b62eb0c85023eb2dc33
dbf2c422e460de7ac569ac3fcb6d7a0746ac9eac
78169 F20101111_AAAODE wang_y_Page_011.jpg
b9c93dd8547a7cf1ae5444e2b68ebd99
1004cb3a5f3569f0753b327d96f4be447e69af01
6709 F20101111_AAAOZY wang_y_Page_074thm.jpg
a49ab261ca86264e51b27ad0aeb9cc66
9061ac217e7a5041610ce459133c3f32a9040f55
28821 F20101111_AAAOCR wang_y_Page_161.QC.jpg
e2b258ef1fbe1cce7b7f738b523c222c
9110918e3a1b61510b183f548dcf382523a82d91
90683 F20101111_AAANXL wang_y_Page_052.jpg
57e97deef7c5696fc88e85c180f8478e
539e5601eec9494875178f72b6fb9c56c9b6a634
6209 F20101111_AAANWW wang_y_Page_059thm.jpg
d9d2840d4bee1b5a33e3f74e5039d093
e995efc68fb111f1393abd5ad3f61900c7a980a0
88353 F20101111_AAAODF wang_y_Page_012.jpg
2dba621726a0c3513a3bba7934e0e238
2940ff6d56698d212d80064810c0b360ac69275c
5069 F20101111_AAAOZZ wang_y_Page_076thm.jpg
d9c3e3a1ab49efc3faafe1ee78c1199c
dd09d40db3d929b3c58ca69a821735ef500ba0d1
98853 F20101111_AAANYA wang_y_Page_058.jp2
3c6c2f7185324baf0e4375e443c659a6
ce508a9c650f5ecb09f03cdb9e3bb3c9826ad5ea
6504 F20101111_AAAOCS wang_y_Page_016thm.jpg
cd43d0ba54873bf5ce26a04ff3b5d10e
e1f9c7807cb5fafb9dd6755ce288cef8e35f9255
57101 F20101111_AAANXM wang_y_Page_020.pro
1586ba70fe1bb0a3874295e8a0805b51
1ef4306e330261a87998442a08a5258106eaa2b8
6706 F20101111_AAANWX wang_y_Page_097thm.jpg
62438a378098d593b2203c42399b6668
69cdbda583aad3bd293f0cbd0c050b0f1806b0f2
96573 F20101111_AAAODG wang_y_Page_014.jpg
6a6d7a6cacad891ec3f5e6d7a1986f9c
28835d700d6cb7c03b51de35f34481c564b69307
90614 F20101111_AAANYB wang_y_Page_149.jpg
a30492b9cefcbfe4181e97ce22ef2a5f
e25771a3d79523a4afc908a0f29327ebebc2f724
6587 F20101111_AAAOCT wang_y_Page_025thm.jpg
21655594cf18dd255b2931ea09092f9b
90b32db46039b2742d318f4498fbe10d63103750
45819 F20101111_AAANXN wang_y_Page_058.pro
e79074cfa84766ee626910d340095d83
b2f20a0b45ad9eb7af2498968f5cac295646183c
114911 F20101111_AAANWY wang_y_Page_073.jp2
5ce8280a986b51476448cf038a841e9c
706702fc5dd746cb93418cd4b09e90c9a58cff90
86941 F20101111_AAAODH wang_y_Page_017.jpg
b95d316057752581c7d58672d326fd2d
0d3b615292a8b0bc944d8171156fab2e8d4ee08e
29108 F20101111_AAANYC wang_y_Page_020.QC.jpg
1705632e6ccb82346692e7c364dc3012
016d2dec71dd41352455ad7645673c3c8fb9b0c0
24400 F20101111_AAAOCU wang_y_Page_081.pro
c6db1b7c5639375f92eeeefe99104ddb
f8d091a6f65b1a04cc42fd672f8c11619f9196eb
13454 F20101111_AAANWZ wang_y_Page_122.pro
77f4dc409fa773a3a04e136db553dfd8
11af01259d03377a2e77bbbbfa41f9d489a54ca9
90939 F20101111_AAAODI wang_y_Page_018.jpg
6b609646dcc7ad0aaef4b1592c45cbe8
2bada3c6269bfda7290fd5f498600d85c00afac5
54771 F20101111_AAANYD wang_y_Page_023.pro
6fe06dc813ab07a68642fff44484673d
7e6c21b79f60dfffd79544991ef06574bf52a60d
2081 F20101111_AAAOCV wang_y_Page_010.txt
6361045a409b7afe74c34e66cdc95afb
23e2d6972294cf56b1fefb3114bc1d187e6e47c1
6007 F20101111_AAANXO wang_y_Page_055thm.jpg
bbe51a5605e91bf004770ce2b8b887cc
40ade40e374788cb3277b498f8814faa9b6b45a4
86958 F20101111_AAAODJ wang_y_Page_019.jpg
d15d34e6b39d747f0e83da584bb8caf8
10015c50a550b7c69e46e0a6a3f39d5a70e0e14d
6482 F20101111_AAANYE wang_y_Page_071thm.jpg
4fbeb9fe5351392a9a4032034fe019e9
e44fc8cfe2a434d6c382106ae3a48095819b3b61
192884 F20101111_AAAOCW UFE0021935_00001.mets
54c3a71003c2dda63a06b322950e187d
51ab54e4a52909bb0990f13f5f31471adef3e35b
42798 F20101111_AAANXP wang_y_Page_029.jpg
b1846e88e7413a6592dd10fba38de0f2
0f1aea860c12a7702f667b837aef3cac6e66431f
94085 F20101111_AAAODK wang_y_Page_020.jpg
680d9243d21f1f24fa81b67678f47140
2dd33297a23089b6d4c995846ef668b8e42caf7f
F20101111_AAANYF wang_y_Page_144.tif
fa4e5679c6d346cc07a0ac67cd0e885f
06bfe81498f84acfb492e00f558c5f0ad412f3c5
41754 F20101111_AAANXQ wang_y_Page_107.pro
b39ebf0d4045165fb5313420b625502d
d3a792070038b203c488c72eefe169ff6ea28280
88426 F20101111_AAAODL wang_y_Page_023.jpg
813a9ee7464216285bb4912fceb22b94
dd3c19598f304a1688fd1a4d86303bd2077a4663
2214 F20101111_AAANYG wang_y_Page_030.txt
97820bcbb77cc2b806bed30948bfe955
c761a50fd2dc34d9b90b7a98616af7b443850845
135610 F20101111_AAANXR wang_y_Page_161.jp2
5fd0825b1b4e6859050b5a021400e126
283d317665696c5b276dcc44ef170abfd32326a7
73137 F20101111_AAAOEA wang_y_Page_049.jpg
8245cb6cc59ec6d75995a7ba6a7dd253
e50f6a641490af6b3ae4bace2e4a09a201927aca
87195 F20101111_AAAODM wang_y_Page_026.jpg
a9b020ab38b913b535f860130d40020a
6a24e07ef4de5557eb8eceb98c54949780836406
F20101111_AAANYH wang_y_Page_135.tif
ff5d7cc16d118c5d0749cccd9b0a680a
24666c04b0cf7c3f361c84eebcfd42f3e819b838
6665 F20101111_AAANXS wang_y_Page_149thm.jpg
1e5869db229c48963d2b6b984809f95c
fb7dfe3294452cce546eab905bedf17f6f942f19
91085 F20101111_AAAOEB wang_y_Page_050.jpg
3e1b22d2f139fa0798c1a53372d42e1c
982cacda449901e7ff93ec4b53ecbe26ee53e1c5
92391 F20101111_AAAODN wang_y_Page_027.jpg
6ce39ce8bce6f7fd3c25ae03d3985f68
4adf12f99aa332682dbddc77d2d2f70b3576f440
28760 F20101111_AAANYI wang_y_Page_021.QC.jpg
dd3947d82840f4045b3b74408445816b
d5d0c408d4df4208ac686aa785445ca5b9b1ef5b
3305 F20101111_AAAOCZ wang_y_Page_002.jpg
85b2c0000803434f1428d8e1ea46c3bf
0461dcb78a595a810597c09819d1cbe1b32f008d
F20101111_AAANXT wang_y_Page_129.tif
132bb158b8127f139ebb43a7688ed0bc
d77402ba0af1cc11a5693dc9115203b14dd1b402
89268 F20101111_AAAOEC wang_y_Page_051.jpg
af9ed851bc07deec8a322d710c112747
307cb6e5aae4c988fd496758cc2a3eea3ddfb50f
12542 F20101111_AAAODO wang_y_Page_028.jpg
2678e4a9f13cf8aeedc29d223735ec1f
1d65f3f80367954aca1add66951a4a54ad37a09f
47232 F20101111_AAANYJ wang_y_Page_105.pro
dca0ea72741df472acddf081823c9199
114daceaabfec3b92c97e9cfbb3b2eb971fe56fe
21604 F20101111_AAANXU wang_y_Page_136.jpg
8141b20f43e10d53ab06185774fb8a62
baa38d7ac53bce896dc6e695aab68bd7890a5e99
88973 F20101111_AAAOED wang_y_Page_053.jpg
a1bb2791e5c16c04b942a2c9bd65a0f6
4ba66f0ae0825be3a4a2462fee4463903e16245a
96784 F20101111_AAAODP wang_y_Page_031.jpg
138b73e346022f15aec5e230f88d09f0
09fa3de8d2dbbfd53055ff9c45f1e47247679fc3
110762 F20101111_AAANYK wang_y_Page_075.jp2
9b5cacd22fad4ff3b5d452c0ab09d078
34a8f8c56fcd567045823f0e6833108ec88a9d19
F20101111_AAANXV wang_y_Page_153.tif
87f3e8951162f853eab85341e06085d1
f5fb9a556ea98a98023de3476314acf30fb37b3b
94108 F20101111_AAAOEE wang_y_Page_054.jpg
fa269e6bf99521707db83f01036d5447
6a151d9628d9470be5d29716b3d866cedcedbbb8
84332 F20101111_AAAODQ wang_y_Page_032.jpg
668fff452e16c4dda97b682108b9d163
efb0787ce7492d4c7bc6ca806ab9f19ea29b246c
45389 F20101111_AAANYL wang_y_Page_169.jpg
42c4c64f086f575ad30069e16f3e0b96
66c4ddac1df35d9c8fed7d814c407ae15f314507
F20101111_AAANXW wang_y_Page_084.tif
a041433b49c0da1e2538260939f11996
0727c84d1993f6b1486099eff9beb15baa6b4f7f
81555 F20101111_AAAOEF wang_y_Page_056.jpg
27332bf4f14def0047ea472d142195d2
1b0f7a018af0325517d17b635243db430d6ba618
71349 F20101111_AAAODR wang_y_Page_033.jpg
82cc974cf943f72746a45d8c07275529
bdfe44b4e8a8c8014157fb9e73b8c7fe3bf12465
27558 F20101111_AAANYM wang_y_Page_048.QC.jpg
8462946edd19f95ec5aa4f5e4f3905e4
24f3da119fd9ac912d36b5d1b52806064b1f135a
F20101111_AAANXX wang_y_Page_009.tif
840ea3c4a38f8e470e769c8a9ab45b93
de907ed094877c699b7bfe9754b03017025ae8df
80685 F20101111_AAAOEG wang_y_Page_057.jpg
b55c9e520b37fb8f4ee824d905074319
c74064ce03e7c1fd8954b08d09e53a93194ba2b5
23860 F20101111_AAANZA wang_y_Page_049.QC.jpg
b63b04d2410ce825a1f4428ab2ae9bd4
e27e4e92f2655a989dd942f425b3763ddb10615e
85681 F20101111_AAAODS wang_y_Page_035.jpg
d973abeb991a8b1f8a3d7ea45fea7c4d
486e0c82067c283c4c4aa1da8312e3e757d65a62
1957 F20101111_AAANYN wang_y_Page_094.txt
6133a7e712f2cbe3972116e6d2285ded
0030b5a804ee147c3e18b75f3ae8b7870b67252b
F20101111_AAANXY wang_y_Page_090.tif
278571d1a09035750c69687377bd4ad7
04929898e241dfe16c772dbed972f9411a9ea891
84871 F20101111_AAAOEH wang_y_Page_060.jpg
59c0dec073acf38be4173796062b3a11
195ebef3dccb0573605aabe50d5c1bff53a5a340
53454 F20101111_AAANZB wang_y_Page_146.pro
f34013647c2314a9c357da8b3265cef5
6cc6fd171d831d7535a951470ac1b803da9a1aa6
56521 F20101111_AAAODT wang_y_Page_038.jpg
5c0efdc51cc9d4b151a9e23ed28486c7
e6e13b32622476c689926e55f59ced76fe77c8a1
26502 F20101111_AAANYO wang_y_Page_072.QC.jpg
86d7c19570b6356c816631bdd6ccde74
cf70c7804d70fc6b16a2777738f34bad7a6e8e94
7292 F20101111_AAANXZ wang_y_Page_161thm.jpg
6224e451dd2caa59133a1c89cf42afe6
e74505d472a66d3ffc87fd570ed4d1826824e054
92844 F20101111_AAAOEI wang_y_Page_066.jpg
aedbef9b5eb773f39208728979d1c82d
1ad7a009f85bd86e649f37f2c506fa59598f8b8a
F20101111_AAANZC wang_y_Page_088.tif
40bc5d0efb977d19c805346de23f3529
18281cd1ec7c2ec01cc8c596d40ae350a9218e69
72268 F20101111_AAAODU wang_y_Page_039.jpg
238a57fcd9dbc4ea2e16e756dd197e52
3a421725a9b61403dc73e9aff07d06d391d6a535
74827 F20101111_AAAOEJ wang_y_Page_067.jpg
b4972d44c5405736d3d68a35d2a518c4
c1c6256298c1c1dede5e928df58a2c740a1c68a2
431 F20101111_AAANZD wang_y_Page_002thm.jpg
076ed19d3163ac47607ecc2b2e2199cf
30ed689294e028db7f2aea235d69515b8dc9c72a
85181 F20101111_AAAODV wang_y_Page_040.jpg
756148a09b5cc82eb875aeb2546bc0df
c9d0deef8abc53db815a7794ddf9d8dba423b854
112520 F20101111_AAANYP wang_y_Page_072.jp2
1add442c6f486b5557df26a5bc2c5319
26bdf56ec0ee2bd7aff2a63ec29a96df5b1b4b83
83274 F20101111_AAAOEK wang_y_Page_068.jpg
c92bbb64dc6aac2a37a266e608943b61
33b8cc7b13ced993c52b11d294b5f8a84649d538
1894 F20101111_AAANZE wang_y_Page_108.txt
810d9d193cc6aa789221afa4e91d02e8
5df4aa27144d9d16adae2a1501cedddd263d6dc4
83361 F20101111_AAAODW wang_y_Page_041.jpg
00abf8ee02a6fb431803b115a1aae7e1
ac6679813f1d5fd3e428839ed6f2975b0a9c0008
6336 F20101111_AAANYQ wang_y_Page_118thm.jpg
476cc7109f66bf7d5497e798bade78d2
98c1c23b34a4c834623e0456e8b359fe5bd1fae1
86823 F20101111_AAAOEL wang_y_Page_072.jpg
cc78d45f3889479e666b0ba392d600d9
ff762dc8b0ca9d367ceeb84f6ede01e286e3717b
F20101111_AAANZF wang_y_Page_082.tif
dd06a984d13d081eb598074275eafb1f
0c343d99e4b46c01b75b1e0862ad1ac3a6f53791
94630 F20101111_AAAODX wang_y_Page_043.jpg
9eaf07de079eb56c6372fa0d565dd5dc
b39113d10de281819e60296450e33175df967d5e
20546 F20101111_AAANYR wang_y_Page_168.QC.jpg
0f0904ace71de713beefd9fc8c9e662a
1e47e535d7622c913e7977da869b2e2d29325cc6
89870 F20101111_AAAOFA wang_y_Page_098.jpg
8d8903cdd4244424655a5143c0539b0b
59350ef98d48eb9ee0fe5bb4bce79def044f32da
90720 F20101111_AAAOEM wang_y_Page_074.jpg
966f061f7dd58f34ee15fd39759c340e
e989e941acf63730d0f6598825f4e4617a953312
20050 F20101111_AAANZG wang_y_Page_121.pro
367e3f8468e8144979ddbb8b2114dfa5
dec1d664877ef0a0c871c4e1593bde51e3cd1be5
71504 F20101111_AAAODY wang_y_Page_044.jpg
e8cec206c1ddbb19ffd1786626eed520
7a73689b08852c21eb97e27cb19487a020abfee0
58366 F20101111_AAANYS wang_y_Page_014.pro
37270a5e92e4d52e4a4bec25473ccbdb
048f19ede6bc1e06297079e5087ed00066233006
89719 F20101111_AAAOFB wang_y_Page_100.jpg
cf22145fff0cd5666109d434902777e9
1180fb0fed31181434ce8a7be3b3d64b407ec668
68981 F20101111_AAAOEN wang_y_Page_076.jpg
093be2c968f76084632cd42c5f85ddc2
7c0cfe4c2de4bdefb2bee649335472ee8f33d2b5
F20101111_AAANZH wang_y_Page_152thm.jpg
e425ece4845047fc6ed0b0bf74f43b14
726444a7b560c20ff51aa16330395d57482309f0
72392 F20101111_AAANKO wang_y_Page_006.pro
783cafe1a42c628112a212a6beb4090d
7b11be49444a1bf90851f6dfa3eb1c7a327176b2
53136 F20101111_AAAOOF wang_y_Page_019.pro
bb885c710fff8010255e5f5d39833bd5
495e583aaf48de2eb8ed4dfd1bdb410b3b5dd445
F20101111_AAAONR wang_y_Page_163.tif
aae6d82aefefe6f623cc37327bc38ee8
7d07064d61df41e14effbbf13382d348be4a8820
55164 F20101111_AAANLD wang_y_Page_142.pro
6a808a89f283d3c3ebfaa9fd21924ecc
39c95173a9756d6600c25b00c31c8b8467a26956
1051957 F20101111_AAANKP wang_y_Page_123.jp2
a272a9045fcc62a0265c48f6c28620a3
03b46f1898e60ab8fe90d2911e0aff967725f083
55765 F20101111_AAAOOG wang_y_Page_021.pro
9eadef7c7b03e0ba0884ae664bb6a4b9
1ffccba906d6080dde200e48c81a005c5dc8f8ba
F20101111_AAAONS wang_y_Page_164.tif
c05a8943280a2ce8e9f63eb5322e8ca8
7581c4ea0c8f6759a12f7813606be1e4c1513c71
54861 F20101111_AAANLE wang_y_Page_051.pro
1ed0862041021529e577c172f2028f83
1f40cc1519e275ffe77e394b3aabe7797a194ef4
6426 F20101111_AAANKQ wang_y_Page_068thm.jpg
591f9cccb055f397acc0970e7024c72b
df5233c9a7c6803510915f298c8b58f51edbb335
53324 F20101111_AAAOOH wang_y_Page_024.pro
e34bfa55fc235a07a731af4e1384024a
646e5d4860df726f8cafe2cc6faf7e45c6d227a0
F20101111_AAAONT wang_y_Page_167.tif
40d130d1c9f94f7444d8c037802b2a0a
08af8c6845375b579b1d9e666b1a41b35125d79e
F20101111_AAANLF wang_y_Page_081.tif
e65863ca277447d4572377aff37c5d8f
8ed6de4018afe733f2fa6f25c62c06292ed102cb
55848 F20101111_AAANKR wang_y_Page_098.pro
1c89d225aa4e25f1099537e27bf50cb3
c63105c2edea9ed0770916ee769f5ca63f61ff75
56404 F20101111_AAAOOI wang_y_Page_027.pro
5518eec82f4155e56fa98dbc0797f34d
fb627933e544fc9e9fe5afa548bdd3b38eb62d7e
F20101111_AAAONU wang_y_Page_168.tif
edea7417ae7f12654f783477c1f02fbe
75c241a4c1d0cacdae02bf5c8965699760267fc2
F20101111_AAANLG wang_y_Page_015.tif
e106db19b5cb95482f44e0428cba2cf4
00712fb6a8a679f8ab8e4c5bcda335dc03916467
44448 F20101111_AAANKS wang_y_Page_046.jpg
24406189033a767325bc0e57e052e7df
9230663470710eaff14d6d5c25e471bf11275d7f
6724 F20101111_AAAOOJ wang_y_Page_029.pro
8d2ecade19e3353ab64d78cbbd7343a1
ea9799267bb7e2aea8ed8011ec893e0b1dd2a471
F20101111_AAAONV wang_y_Page_169.tif
12b3f36fa4d0b6e2ff8383c390ee1c7b
703e1eedc674ce212ba7876ec3f189a42ee6b1ea
9755 F20101111_AAANLH wang_y_Page_125.pro
39c0050d37364c7caf7e7bbeb3e8baf0
40dbe2ee0002ac9a9b6c8360c193fde1bcde855e
24275 F20101111_AAANKT wang_y_Page_011.QC.jpg
84993901a8eece5b19a7b8a8ea7c9fa5
e1c9552c8374850d4b98fdb81ccab599428c2b0f
54442 F20101111_AAAOOK wang_y_Page_030.pro
38c6df7afeaa3cb8fd4e9b13196bdb11
72c382210b9c577d3a90737c302337226bdd2c11
7705 F20101111_AAAONW wang_y_Page_001.pro
e79e4e89ee113262e268d326d06c8cf4
5e2d541951c9c0ccd9fa8aa8bb3ef74e29497869
4280 F20101111_AAANLI wang_y_Page_046thm.jpg
a9c55baee6bf2a39be43f57ebc76a764
54b39f8473929689ee941c9627b2fc74ab032981
2076 F20101111_AAANKU wang_y_Page_150.txt
b82e42147a9265e024f6d3bf891c4b7f
258da464dc894913ccc37ddd09bb159967c6f725
52863 F20101111_AAAOPA wang_y_Page_064.pro
59f5068164842393b0d99c984e7d3d31
b3731613a038919ac435affb7ff0e9606bf9b020
52185 F20101111_AAAOOL wang_y_Page_032.pro
ae8fa8eeafcda53b5c424175a77bc2a8
fb11bb1055ef3c4728ea2c87035a6c5482820350
729 F20101111_AAAONX wang_y_Page_002.pro
e967a8f9e5e799ea167462f3b4d89503
2b7837a4dd49f82960ed108907db66b4a6ef2a10
2876 F20101111_AAANLJ wang_y_Page_007.txt
ae3adbae9663f5d846a669d94f0ca8ed
2d8a4d851aee378930ac0ca4d483138f9389f59b
2060 F20101111_AAANKV wang_y_Page_061.txt
050b8296e81572a453b327a1b3e75163
357ae61434bb9b76a4a3f289d6aa730c614915a0
57161 F20101111_AAAOPB wang_y_Page_066.pro
0e9a0665fe17e14f57359a0abc4b24d1
8cef40ef6b505593b4e0238e8fc3b7c38725c5af
44545 F20101111_AAAOOM wang_y_Page_033.pro
c5d8df79ac4c6dbd312d2f05568a1200
e554ca041f7851164311304ace630a1a7938330c
597 F20101111_AAAONY wang_y_Page_003.pro
c6ffd9d66d1cc64909fe20b35d9ed43a
3170271139377b651942c62b8630d06d9ff84f39
59313 F20101111_AAANLK wang_y_Page_031.pro
941d18f1f55a116c21110939188cbb2e
fe87c9d56d5afc598a08829fef9b3db9aae0355c
F20101111_AAANKW wang_y_Page_052.tif
6bfd86d0a2a8c67f92197d5ad5d1eef3
2059b989901f6e568b47144bf99a1f8613098894
46237 F20101111_AAAOPC wang_y_Page_067.pro
a7ac625245219839fb34aecbeed4ab83
aa6c4b5e66ef661a2abca24608d789aa2e15bf1a
36061 F20101111_AAAOON wang_y_Page_034.pro
8ec265c78a97e4335b6596127c49eece
41b9aa1fe792953a69b37b7aa73cc95939bdc3a9
27874 F20101111_AAAONZ wang_y_Page_004.pro
be4704006b9e25a119720d96a16ad08d
d686d28d55a24c53bfba066b28ab233ef57b382b
2296 F20101111_AAANMA wang_y_Page_117.txt
28497e0b0f7fd59c4b28ad748b27e968
7c74f2850de335a6212e82e748558ac4ebdea24d
40975 F20101111_AAANLL wang_y_Page_122.jpg
2dc968b7695e9bee88307dca457d561e
49789ab5d35ced5d577eb92f6565ecfd34b4a12d
101116 F20101111_AAANKX wang_y_Page_011.jp2
cf21633d670a7474f2cbd09db5541f2b
040c7734593cd1eb674ab8d22836810c799405b4
50906 F20101111_AAAOPD wang_y_Page_068.pro
0cc07d610b692b5c8c8d1dfc337a6b0a
094808703f9169ed1c5330f93c3458d56dbf291c
51604 F20101111_AAAOOO wang_y_Page_035.pro
ed1275f6311435f1ef16059ceba0e955
530ffd6c65695302b81121a54497f28214d66de0
1040 F20101111_AAANMB wang_y_Page_003.QC.jpg
243a1745a48a04d044e629ee0a0f661a
3f9c048ffbfa70a5474924e8636ee1010cb68db7
F20101111_AAANLM wang_y_Page_123.tif
c62315673d37efdbcd4c3e11f139175d
abfe8bfcacd900e8fdd4e7ffab8c491a8b081e57
1912 F20101111_AAANKY wang_y_Page_126.txt
906579bc1afc0c40d011e6e8d0bc9a34
2980adf96844a5d2a06f09ca73301ecec0e1ed99
52006 F20101111_AAAOPE wang_y_Page_069.pro
9e95298e584adb2a647cf7108c38cc32
587435efdc638e751e78b07e672212429b6ffb85
48681 F20101111_AAAOOP wang_y_Page_041.pro
dfd7a512e0258741c80a8e101ae687f8
2f79f4c62dddcc651ceb0273da4de23a4cda52c6
F20101111_AAANLN wang_y_Page_020.tif
2fe1455e725ac5e53d35cd13b13a2b24
a0a8e02fb9f341d4b567f2e0d416b394e1eaddd6
76237 F20101111_AAANKZ wang_y_Page_094.jpg
24ebc0694ba8685bf43287a631915687
a925fea9081448d04b6ff7c3304af3440f29335a
2187 F20101111_AAANMC wang_y_Page_051.txt
f653259db38ca623c1f7561e5097ba7f
70f0526adc6073fe9e3f762ce8f3bd7f8e3ec3e0
57885 F20101111_AAAOOQ wang_y_Page_043.pro
e2123e8b00cfdd616412c01a8e468689
c52239bd6149dad302b3ed36da099eecd9413804
109155 F20101111_AAANLO wang_y_Page_116.jp2
5c580b8e67ff90e6e5782b9fd6f313c9
13b41dc19e707cb18bfc8236e4084a135dd26924
52170 F20101111_AAAOPF wang_y_Page_071.pro
8ece66583eb5583819793b47c07dbffb
1cfe62fa8f21f4f97d66a4a665f59ef6e4792c8f
9013 F20101111_AAAOOR wang_y_Page_047.pro
8a77b205f97ced75d46a3c591c4a3eb9
b617e294cf4617b97af3ba48dc123606f6a47628
5182 F20101111_AAANLP wang_y_Page_002.jp2
213843bfc872313717df19ded9d46bbb
056a511316144f6319fff17c0d102e9adbf094a2
66668 F20101111_AAANMD wang_y_Page_107.jpg
94e79b79c304c679f85cc03b3dfd3006
0e788c6117dc9e7a70ff3163c8299a369d4a63ef
52320 F20101111_AAAOPG wang_y_Page_072.pro
37e3a8313d10be8119112fb4fe6b945c
2094ad18b55a98c82c19d1804e946f814e619497
44456 F20101111_AAAOOS wang_y_Page_049.pro
066940d9d9412beb103867bbe1d213fa
1eeaa202c1ec3ba3f9326391cc1a724d974ac41b
2023 F20101111_AAANLQ wang_y_Page_157.txt
28b04ebd74446876cd9e12197295eb27
6ac8ab93954959a27c41fb2a6f9077b80b8bb34a
F20101111_AAANME wang_y_Page_081.jp2
3eff13b911ddb56da03cf0b940add7d9
2a4c9cbfdfb57e0c80b3813de527cf1e4e61f25e
55882 F20101111_AAAOPH wang_y_Page_074.pro
1dfc36b6c1f81fc864f80207a6c0baaf
6d84c78104780db7869d7d7508ca1fa9ea455c68
55198 F20101111_AAAOOT wang_y_Page_050.pro
5bc9ad7e77219c9247df6a066a8b942c
b14f3080b9a21e1d717edebee40b0bd08c73484f
F20101111_AAANLR wang_y_Page_010.tif
895e2340a269fef23a7e7e67a3b33829
65882e6978b49fb448b61ef40bf26a531468812b
2165 F20101111_AAANMF wang_y_Page_142.txt
fa01abf5d28e1043105dfc71582d1b67
7b5188ce77902cebc904e62a41e5730f316bcfa6
51299 F20101111_AAAOPI wang_y_Page_075.pro
38831dc0e602dd5e40d22ba5074ad2a1
9c70e365635b8080be0361d7425ebe25fe715da0
56023 F20101111_AAAOOU wang_y_Page_052.pro
5297f93ab7a9a24ea3f25f1bf6662ce6
fb0f6d8e52786c9fecd8e1cb040a04ef80e59057
F20101111_AAANLS wang_y_Page_003.tif
13e6138308ff7dc39bca13ce6fc6113e
36e53171473c85fae5044b13d9338bc4533d2cf7
7299 F20101111_AAANMG wang_y_Page_134.QC.jpg
4e34b8f0699858e03bec21ffcd711bbb
8fd62892b2959b238bbd04ce139a1cc80eb9dd48
40852 F20101111_AAAOPJ wang_y_Page_076.pro
5edd3a3a898e24a10701797827c159f7
1ec28cec68ffb472df5a09a7d0f8f29bbb860899
50858 F20101111_AAAOOV wang_y_Page_056.pro
921d916b6459c560abbbc9bb490f9311
d208822cf4daa3ef5664cd9a0a07a35935825df2
4703 F20101111_AAANLT wang_y_Page_121thm.jpg
341f934beb8f1a95bee7f5aa64116243
eb1e3a7705b876fb729ff81677e5d444da7639a7
121382 F20101111_AAANMH wang_y_Page_115.jp2
a8e6495faff6b153ca10a27cded12f4a
5cbceba96cb128d3d9b58824a668628a9ee8965f
14249 F20101111_AAAOPK wang_y_Page_077.pro
be47f64d454d87658568473a695da57f
b7741eaad618af458b20471478e0c88b4e299691
52202 F20101111_AAAOOW wang_y_Page_059.pro
615b11a113746984b35eee0df2387f83
39049212f5c7885679225c24f79ff731c985887f
45373 F20101111_AAANLU wang_y_Page_094.pro
5fcb219bc762ce6da53a080924adf7c2
477daf21141e689854fb346435bd477c861e1457
F20101111_AAANMI wang_y_Page_149.tif
06d11b229b5c10d65d28e41c7cc65f87
46049b2350a3d37c4eebe4ab0bf3408a34ca2c09
43853 F20101111_AAAOQA wang_y_Page_108.pro
9a8bc490f3730b9489603ec7167dbaf3
23505ebaa6717786add377e14bec23617b763470
14945 F20101111_AAAOPL wang_y_Page_078.pro
f91cb5b6a987e5bea108cb76784951a4
33e3a204becdae3faf70ba37f015cf0597f13cc9
52295 F20101111_AAAOOX wang_y_Page_061.pro
d3742993883931db313e661063c2d1b9
c425356593640644c414fda648e71ed9edd2b02c
2150 F20101111_AAANLV wang_y_Page_023.txt
eb9644133107451458cd39faec033472
b6763fc9946d731848cebfdb10a4727c487229b0
88208 F20101111_AAANMJ wang_y_Page_099.jpg
0d835d5a95c5ba5d2d151c36624ab5ab
09fe5a8d8a6e22d53c28671bf36b2961590b2abf
46468 F20101111_AAAOQB wang_y_Page_109.pro
a89cabaa17d86dc89ba6cd54673336f8
d111e94dd7539650fea7cd4923e2e50ad15f42ee
17821 F20101111_AAAOPM wang_y_Page_083.pro
a987d68fd5929063c896c5d11efa6a7d
5c646f86fcbd7b11b1734aa25d080ea44c1e9d3b
50486 F20101111_AAAOOY wang_y_Page_062.pro
e795485801e953b1595099fdac638618
8d0b4afdabbb37b2caea233ce75e9ced5aa8e56c
F20101111_AAANLW wang_y_Page_132.tif
8a8757b7703d3b47d07504bf0e79a7b6
023cce2515db9a73d23806f7178dfec62c5e40e3
4871 F20101111_AAANMK wang_y_Page_038thm.jpg
9fe7a83e473cc3c083fd4b0b5e74dcb6
016294e6a6b9a04774c66fd297a1a1f1a58d68de
51173 F20101111_AAAOQC wang_y_Page_110.pro
a23e56d09c3dd35dfc61f98e5a6cb0fa
a819a91f9e38b4aeab8b1b7ed0ca06f0283fb6d7
15503 F20101111_AAAOPN wang_y_Page_084.pro
31b50fbca250a0b9c395c5a6579418e8
e5b10171101425215331147bdbf415bafd6e4bff
56188 F20101111_AAAOOZ wang_y_Page_063.pro
ba0ccf329839250b325440deb1953079
4af7dd8f8acd97b1d25bcd27a9962c8669fc51a3
120650 F20101111_AAANLX wang_y_Page_066.jp2
55a3c3a5a778aadacfa03473f5446ab5
4d028ad221a32c284d3a4188d3882083d60192d1
6796 F20101111_AAANNA wang_y_Page_165thm.jpg
c0a3992bf18f28152a6989fd3ba9300d
d5b89efebc13e1454a880432afe631e04f1aafef
F20101111_AAANML wang_y_Page_001.tif
22420f1d6e4676e054f839cbdbefb978
6bfe904d9f402027ac73dc1e81a2f67a7052cdff
56449 F20101111_AAAOQD wang_y_Page_111.pro
1f7b284d598b862122dfac059a60fde1
c7e3ed626329f8e0d8428532cd05714e7ea5cc08
19738 F20101111_AAAOPO wang_y_Page_085.pro
808cce7cc776873ea908961fbb9e6882
622e228b91239bb9ee726f8a9cf6904530056d53
104763 F20101111_AAANLY wang_y_Page_041.jp2
06df1e5174078a862d1c769c481da142
2eec65dff113e77afb2565f9daf7b79b5f7f0804
25552 F20101111_AAANNB wang_y_Page_057.QC.jpg
db78ab3ff6d09674b4aefbeeea2d7669
582d97e4ac9eb55c1d6b04c81b26785577b7bc07
F20101111_AAANMM wang_y_Page_146.tif
67a3ee3eb9179e73d2d3b3acde72049b
6fac552f2ab73f24919ffb8d31fb903051d98466
47730 F20101111_AAAOQE wang_y_Page_113.pro
08f2698d0273a979bb4038dadcd7f0cf
964a582c28894878c96bbcd7a316c7cd0bb12960
12641 F20101111_AAAOPP wang_y_Page_086.pro
2c2b17aecd0d881fd86aa9b8ada67986
c53f87c161d3ffbf786f4012fdcb741f646e7969
F20101111_AAANLZ wang_y_Page_158.tif
113c95204066ceb63d83bcfbaceff5a3
af310fa276edf95ac8cc014f603247e27b5af603
141209 F20101111_AAANNC wang_y_Page_164.jp2
ed7d5801d04f972547f06fbd35887aab
df7a0efe6d6f753e6261f8f02bcd67a8c46b2b15
3401 F20101111_AAANMN wang_y_Page_122thm.jpg
e3ee00069cc306ca5fe935f6244aea4d
4739119e813fb4114ac19e2f4d84eb3a86b9c775
57544 F20101111_AAAOQF wang_y_Page_115.pro
8bbb78442876d56311c07264d62f234a
40c65c329fc75f50984ff475c1ee6694813f0fae
13220 F20101111_AAAOPQ wang_y_Page_088.pro
51eade172b2ebc32aa4de136e4d5381d
a0f1e18b5665c33e84741c80d13069f90ff0fd29
96691 F20101111_AAANND wang_y_Page_117.jpg
70c2f2eac9477dcd2f1e3200c6f47134
27c8b0586aeb873438db8ff1fbf932376192dc27
6829 F20101111_AAANMO wang_y_Page_066thm.jpg
02eec2a7554c85f1323a0e9717e28fcd
497e0bcb969d5f6d062f2aa9547c859ba8a1d543
50758 F20101111_AAAOPR wang_y_Page_089.pro
39d379b1794a018e8f1bf98115395cfb
175f7a87fc02fdefe38efea837151f1c46cc20c9
27509 F20101111_AAANMP wang_y_Page_165.QC.jpg
575afc90f1912f075488d3732d4c13df
36a5e4df48c3d6a31a3a7d9475584eb21d92e927
50361 F20101111_AAAOQG wang_y_Page_116.pro
50313b9bbfca721ea3b41963b7be5664
1b946ca239df294e68945f3ecb93d68aaf7e1187
48728 F20101111_AAAOPS wang_y_Page_090.pro
edc7a02e870aa4338fe81632ec13130a
4b70c7211466c48c28935fb2b97671734036deaa
28217 F20101111_AAANNE wang_y_Page_052.QC.jpg
a4d3569ead68a8c0c0030a392a0a89fe
bfb7dabd31bc2e5cc68ae4695f42950532b81fac
46637 F20101111_AAANMQ wang_y_Page_011.pro
8b179f99da7cfbba82fb6aa79963c667
85d456824c33d13ecfeeb90173b0bef6bd6fb3d5
52525 F20101111_AAAOQH wang_y_Page_118.pro
b9d1ee3169d9618a29e06dc7281dd786
ee6e9c0f31ba1c8807c24ea5bb6fc65f404f4e9b
55252 F20101111_AAAOPT wang_y_Page_092.pro
4bad0fc5fe2f447962e9d7a3f938c6a3
8dc3487b2611c576bbf372733ecad6672f8e95d0
2110 F20101111_AAANNF wang_y_Page_090.txt
a60428a5da098699e68b74ca5d47b072
727e5e4989b7baa57ddd7c645db7f513b7d6b15f
273564 F20101111_AAANMR wang_y_Page_124.jp2
e8d6a102fb8d79d7e4a6fa917af5eeb2
574cd9da2ba5798fdc4294fe4baca1478f60c5d8
59922 F20101111_AAAOQI wang_y_Page_123.pro
d6b95f970cbaaa20090139c366311489
ddb5bf2b0009d8eb00cb77ea00b48130602288c2
43057 F20101111_AAAOPU wang_y_Page_093.pro
71636d8d54d13f211f0065310f4a0c06
cd79f11b9b7d75962bf570b739e7c9f4c2c166a0
2846 F20101111_AAANNG wang_y_Page_166.txt
3d401cefc03b8a70ca105dcbe0ea7858
85129aeee29d36539e9d78673510859a0196f749
111117 F20101111_AAANMS wang_y_Page_048.jp2
92ec31ed5a5a1c8a1d1d1764ccb5ea6f
aab321fb4d391faf81ba3dc47df1c13d98fde0d9
7077 F20101111_AAAOQJ wang_y_Page_124.pro
e33adf4e6dca633937ddd895b4fc7e25
de86d36475e0f60c1493d636f4711ee6db4c1c98
52168 F20101111_AAAOPV wang_y_Page_096.pro
edfc9686542eeaf41725a5f7a78707db
89baddc7c03d4cba30b98cc2d6ae4dfb9a9cea85
4848 F20101111_AAANNH wang_y_Page_129.pro
81098b065133ae0740a72a07182595df
2cffb0ea624d1c5c640f58f8481636165bf18d3a
53772 F20101111_AAANMT wang_y_Page_026.pro
caf10f34f4a102847cc01e5ad3d0ab44
142ed52b11cc8b32588380a4e30f651984736219
34208 F20101111_AAAOQK wang_y_Page_126.pro
8a7be7a26f53944d7ffab69c894f316a
2a7f348a2575eb203057e94794439b8300303961
55332 F20101111_AAAOPW wang_y_Page_097.pro
51d8eb2a0752c724e90b61cfd54598ee
fba79ee1ad90b2b3bcbef585618781ce1207e90c
85371 F20101111_AAANNI wang_y_Page_037.jp2
a80f3c4c3c92161ea3c8376c0be6ff2e
1817de5cad0783e68d7cf30760dd1ac1e5fd2e89
2167 F20101111_AAANMU wang_y_Page_149.txt
4c89a64bd9d30c8112c3dce03f1227d6
7e1d36941e1999912615f3da41cf9a56ad2f3cb2
51318 F20101111_AAAORA wang_y_Page_157.pro
c65be16e949a990215fe94c4a404141c
d2bab4fcff85be570e6595e2c2b52d125dc193ca
34307 F20101111_AAAOQL wang_y_Page_127.pro
9296cf72ae807d6085f24b06762c01b7
9d3786dc176827804a273327257187fd52d1a5bc
54200 F20101111_AAAOPX wang_y_Page_101.pro
587864c750f0d221bc4e93cb41f18329
bcaf3d4ce81b9eac4891314e4443483814a9dc92
91697 F20101111_AAANNJ wang_y_Page_063.jpg
86c61a4836126a5bc5e94465788b6a05
10e6fc7009e6081105f4ce0bf23a6346d0f40b75
56837 F20101111_AAANMV wang_y_Page_165.pro
7dc9198f8d6703aec6f7f1a216330bd7
7aefccf3ad07632acf8f2b882ccef1c2668a16e6
23620 F20101111_AAAORB wang_y_Page_159.pro
1fbbcdc90ac5923b84ab04adbfe59d6d
150ddb10737f116fdca559296d0e36325eaa854c
7582 F20101111_AAAOQM wang_y_Page_128.pro
f1e97e941589b07d210845dc9d2081e5
55b84ffe5984803489b9adfd451a29b3d022c418
54946 F20101111_AAAOPY wang_y_Page_102.pro
6e4f321a36edf5884ea3411d028eff59
f43063456d30dfd4cef4a957e6fafce11925e3b4
6047 F20101111_AAANNK wang_y_Page_010thm.jpg
3b872946d21c7dfc9b62a5903a0396ff
c3a2a97b1b051ee18512121323ffb8fb22135d34
6306 F20101111_AAANMW wang_y_Page_140thm.jpg
ea3498c2542a0dbfdf35c87bf3f45803
ab721f4e123d09a589cd2bb8274cd81a2d4ceba3
24138 F20101111_AAAORC wang_y_Page_160.pro
60e9e5f50ca7d0a3be8db02569cc101b
814bbe5105bab8648d6cdf982633605fc011362c
18886 F20101111_AAAOQN wang_y_Page_131.pro
a67f0d3582f9d847eb5c1181314d9ee8
afd45f9f8f01714b7bfe1e3cc653d1f666412b96
55364 F20101111_AAAOPZ wang_y_Page_104.pro
83383fc09f719063cfe3d03fb85a4f0c
176ab801df5a2c3c33ea41c0b2c44a0af8d3287c
6656 F20101111_AAANOA wang_y_Page_053thm.jpg
40e49e1a86d33de72d774a584dd4e680
398dd81ad86f1f0a73175393c9d2bbe488691189
F20101111_AAANNL wang_y_Page_021.tif
c3dd3490528c1f2b35599d64c99f041d
f52bc8af86720cd9c7bc7036a10124c064965aad
111240 F20101111_AAANMX wang_y_Page_040.jp2
04ac4b0ab79b5a03daa6f38b058ac742
c54f412fb3129524f8391002408e708dc6b9e8ea
62037 F20101111_AAAORD wang_y_Page_163.pro
db03c012e221946133de5323115e03eb
0d9d1202d2f3234765702910a2ad512baa6c9686
35239 F20101111_AAAOQO wang_y_Page_133.pro
4819516306e5f272542883764db8e7b6
fbcd4b81cce5f1320c5e4dafdfa0f40da508f6c3
90510 F20101111_AAANOB wang_y_Page_030.jpg
bf9abf49bc2b1b07dfb5dc31b1d4fe2e
b35102fff33006324a59213a03412105e266cc42
78238 F20101111_AAANNM wang_y_Page_055.jpg
7b3f76ed25845ca710fd03f8638b4f1c
4c3620a4c2d5276aa5a1969328a7e1dfadf7f370
94270 F20101111_AAANMY wang_y_Page_033.jp2
376c1eb1d7b0afb630ce3abf1f4edb4a
8b4b2f137d89a150337a709eb617edec5e8954c9
67000 F20101111_AAAORE wang_y_Page_164.pro
d9d07391d5351dd10fc71dad285c7e6b
48445acd6181e84441d8b77ab71626d69edd8e34
9292 F20101111_AAAOQP wang_y_Page_134.pro
e9d03440b964fe2b855066738eefa395
9c8953aee116056761fc22bed1a5673977b73fee
58222 F20101111_AAANOC wang_y_Page_117.pro
2ac2598d32c09077d4f7825f97d51417
ce028004917abd09a742894b3c48588d13a026b6
69763 F20101111_AAANNN wang_y_Page_166.pro
8c2c8dd1d15841eecb3b7ab3603cc030
356c7a1f1b5c82a69f01d87df3fd2e557d1471c5
50742 F20101111_AAANMZ wang_y_Page_048.pro
950ff1c5dbce971587143d84b8a5bebc
138887b97c898658e3cb229573e8ab5e87b5eb48
45876 F20101111_AAAORF wang_y_Page_168.pro
e917d0a803794bea72f3656b1a14a168
2a14e1c86497568589b6a65498ebac1ad1f119ef
37404 F20101111_AAAOQQ wang_y_Page_135.pro
60340d5eb12f7db2080d02da877752a5
530510c6051162dbc303ced10554224e9f36fa79
90747 F20101111_AAANOD wang_y_Page_015.jpg
9bbff131828f84888cdec5444a9d0bbf
bff4e794bbf604c4d8b68e7fd78d829444667029
23654 F20101111_AAANNO wang_y_Page_001.jp2
8ba318238707ffcc0bcac774c5533a56
4407bb608692ddee4536e9f751c30aec7ce9e5fd
25359 F20101111_AAAORG wang_y_Page_169.pro
038fd501ed5659e070606128a19244cd
e13401992c9e7797a6c50f130e0b441cf8a0ef87
12577 F20101111_AAAOQR wang_y_Page_136.pro
d1eed6731f56509fe06f419146cff0fb
5a1ed7fc2c0e4ed0ad5679a959a3b223e2c25ad3
F20101111_AAANOE wang_y_Page_008thm.jpg
21e298e0cb08e0e2d6b691949b64e8b2
028d003904ff1a2775b87d41a1d0d64bf37d05d4
F20101111_AAANNP wang_y_Page_068.tif
857722cfa0b610f96d686411e2155f07
b158e7652d30f6911115b938a177a16d65d7425d
52952 F20101111_AAAOQS wang_y_Page_138.pro
c5858132d3b0a3c957bd8aea4e8ff00b
f4028cdb31a2534b71219c5f6b41aa8b0d9803c4
6325 F20101111_AAANNQ wang_y_Page_001.QC.jpg
ee8e553d8b6ea9990b779490806a4229
1a8faf80298f62931b1556cfc7106fcac6fdf09d
427 F20101111_AAAORH wang_y_Page_001.txt
0d15cbf4a130a04cc87189d7205c35b2
d57c8bb51f57b19d5f6b0488501c85aa3b56ac2f
51159 F20101111_AAAOQT wang_y_Page_140.pro
0869823e259fc8bf343df6a06aa922a8
db9e1598768c7de58e78fa658e6e49af2a3bafe4
68560 F20101111_AAANOF wang_y_Page_162.pro
fe1caf64feb9e1a84bff0092333be224
3c9482c1b9f6f849b6429c910455e01c16f0edb5
16200 F20101111_AAANNR wang_y_Page_126.QC.jpg
58c5b25b5eb8ecc12ea4523270d2db1e
37191c300cd51ae09beaf1874c42f1917ce8a0a6
77 F20101111_AAAORI wang_y_Page_003.txt
c0fa3df3a92f681eb090f61ce2d4b584
69f9b7770cbb58247e5aa83212e07ac0fc26d502
56091 F20101111_AAAOQU wang_y_Page_141.pro
183f2c172e58c1c35c79fa78a9d4cc06
aba32689a92f401b8cafeb590ffee9f3658fabce
84049 F20101111_AAANOG wang_y_Page_070.jpg
1b2d155415e32524ca69e1172ed18144
b765f36a126b842a9c366a04f10ac538f03a2190
6446 F20101111_AAANNS wang_y_Page_104thm.jpg
0c3ce8467e531883979a026c5100d2b9
e34f9ed26faec7d6d1aa3dbab816abb422c9c395
F20101111_AAAORJ wang_y_Page_005.txt
ace4d743603801769572c470fcdba899
251473ec3ca519d7c3d5cd0651a4e1a462479db1
51811 F20101111_AAAOQV wang_y_Page_143.pro
5ce75cbff40400170607bceecea4b692
c66de6ebc983bebc25cc34edf234b71294e9df15
F20101111_AAANOH wang_y_Page_104.tif
96bd5a8aeb6147d86f4402d3dbb6aa84
2f5c0d391737b5ccddb9bff00a44626422cad2ac
44225 F20101111_AAANNT wang_y_Page_039.pro
18bc4b7aae0c0b1c241ceee280ddf490
656cbd0c58525f71863e553c3c3a3e97858f3dbc
2996 F20101111_AAAORK wang_y_Page_006.txt
092e3ffcfb2d6c275967c3614eab0e15
918fb0150512bdff0e3d681db17ccc3b948dfe9b
50873 F20101111_AAAOQW wang_y_Page_145.pro
f26e60d0f660386f2de90dcd46ced8ff
7ac8582d4a3f2af4afedf895d9edccce28617546
27334 F20101111_AAANOI wang_y_Page_024.QC.jpg
16134f0991e0b379aab7752f57f26fab
6ce6000112bbbe7feff57ba589e29b4b487c60be
10772 F20101111_AAANNU wang_y_Page_083.QC.jpg
90c620b38628e755b86fe7c4df0e2c7e
1a6e6254b49721a30c7c445e7a9474fe27ead9b6
1878 F20101111_AAAOSA wang_y_Page_034.txt
1ab84c4adc4a2a4990162ee76593939c
a85298494d0fd0d63bd75c305e52453d53f7b755
2962 F20101111_AAAORL wang_y_Page_008.txt
e4f5f5421d410457ceb1e03408f5b0ad
0c56908f3244eb713f0c052c7124f323433df882
52392 F20101111_AAAOQX wang_y_Page_150.pro
33c074ff3211c996203e626ed013085f
57b9d05db4324d7021f06584c10f34f326ef456a
117742 F20101111_AAANOJ wang_y_Page_050.jp2
99af8e82b6042642e1c05af27142a984
a0c26c58726edccc3921536485765a6e2f90364b
86 F20101111_AAANNV wang_y_Page_002.txt
15b852beea28f07077d6567cb3f9fa1d
8a8f1da93623623f7012b058b13e11c81512eef9
2041 F20101111_AAAOSB wang_y_Page_035.txt
96867d82cf6da9065d3b83907a2ea78a
0ccb53ce5ea27fb1b3fd4c1192380bdb63142189
2092 F20101111_AAAORM wang_y_Page_009.txt
c7f75522bb2bfd3c6447e0f6b86346b6
e5ed394f2e927025f8cdc0d6762cd8db4762ef9e
44201 F20101111_AAAOQY wang_y_Page_155.pro
f4b0c40a4e18663404c0ad928b1a39eb
c1a24fa6c36fbf3a922892657c0ee013e4d1a29f
F20101111_AAANOK wang_y_Page_112.tif
5e5ebf84b8e46b1d96cbc203132f6f5c
e6345535270ef0393022d4f9f1d269437dd344b3
26162 F20101111_AAANNW wang_y_Page_070.QC.jpg
b0ff6dad5f9cf3076af28cf42f78f026
60377a6bafdb8c68975321ef5020f41595b8b646
1982 F20101111_AAAOSC wang_y_Page_036.txt
6ab64431c648f328024af5e38f253042
cc3359cf4ac4ee4c68036ea44d4326b989121c1f
1850 F20101111_AAAORN wang_y_Page_011.txt
56b0d3b5388d5d64be2a69fdde4afca7
a2feed7d6b54b7accd4640e2074bb3feaa4b008b
36707 F20101111_AAAOQZ wang_y_Page_156.pro
2c10b26b94c3323b031e933441324484
45c8347ad5f06ae024d101ad77388f1eb91bea75
615028 F20101111_AAANOL wang_y_Page_077.jp2
f5ba20b99a1fdab30e39eec2c27c4b88
c7b4706b5368a8db70d8b77a5fc68db4bb6b9649
4326 F20101111_AAANNX wang_y_Page_127thm.jpg
8a24d00666459c2bdcc02f0275d1496e
6ffdb412efe9c36a7b506d22fcc71e126dba48b1
2413 F20101111_AAANPA wang_y_Page_137thm.jpg
3e126411d7591ba2d1db006ee180c98d
18fc62c2042f76d98ca45757f1681bc068143296
1821 F20101111_AAAOSD wang_y_Page_037.txt
f8b62edf6d1ea7e59532ba7fe39820fd
5f89da56ca897978170e4d5a3db135870a40dff4
F20101111_AAAORO wang_y_Page_012.txt
7fbe99f6acc808da6edfa6d4d01a4056
9fc0c9cd27608dc798e68ab22f97848abc78b729
F20101111_AAANOM wang_y_Page_022.txt
2d46a75976285f024e0fbb8d9c2ce192
b08d6d01139a0f5227f2b23ff83e15f8cc636383
368192 F20101111_AAANNY wang_y_Page_088.jp2
c4cd8340807a6f94dc2f5747e5f46310
efffa8a6942798c6b5550c77cfcb1441ae1a3015
91434 F20101111_AAANPB wang_y_Page_093.jp2
1fe8e9581e622d6a980b6cbe6f5c9872
df27452347cdff3afa0372353418b524f6eb0887
1999 F20101111_AAAOSE wang_y_Page_039.txt
b649c9e1d3d8aa4754e9c410dd241853
21e6e1fda8f6e025db486aced657f467abfb54cf
2279 F20101111_AAAORP wang_y_Page_014.txt
7c470dec5754eef4eea6189356946be9
d8fd222d4c2b24c5797b34d100fc63efab362279
F20101111_AAANON wang_y_Page_099.tif
8b0739cc08ae87be347125db81ea750f
d75e9d8ded591838e35afdf15948a67104df2f4b
6675 F20101111_AAANNZ wang_y_Page_021thm.jpg
873f0f89e819210505a00b452f0a3de5
f8012a9b0dee3e456d74d93ab24892dc419a207e
58428 F20101111_AAANPC wang_y_Page_079.jpg
f590a53d530a1d2866b5973f5ac8a19d
d05c65705696699495a51980b94fc3d30aa99d62
2084 F20101111_AAAOSF wang_y_Page_040.txt
ab336ac3a87a112a85ff44cd96ffc4b8
6f39a52601571fba26ccefc3b1a4c946fc20aef7
2101 F20101111_AAAORQ wang_y_Page_017.txt
ba6025852d8cf934fb23f4db012e6ef5
06d5d91e1ecced2d092d5e9fc04fd7bdc1a16e4b
90775 F20101111_AAANOO wang_y_Page_092.jpg
4c7006893ce1bcf2229b970d22894040
e935d1d21d4a144ab24613122e7f1c122d4c129b
85331 F20101111_AAANPD wang_y_Page_116.jpg
59ecbbb81626c172f8d1694e44d8b328
d22cfe8664823543c912d644bda06def5121319d
1928 F20101111_AAAOSG wang_y_Page_041.txt
2e40e609a2692dbe34db046f68e8227f
a63535f9d4dd6e7d49a1328efc812bb638eff921
2161 F20101111_AAAORR wang_y_Page_018.txt
f7d6e62f9f4b5172f8293a36a5870527
199583373a7ed671254b94ace32445241f3f1379
F20101111_AAANOP wang_y_Page_165.tif
3439f23a46c965381a095df5414e9e28
81769bb8fa3fe7da3979d492904d7fb2432fad1f
F20101111_AAANPE wang_y_Page_076.tif
f9155af039615767ac68cbb09d217609
30ea16aeb1413bb1f62f3e16b639b6a3e579c333
2262 F20101111_AAAOSH wang_y_Page_043.txt
35cf6686f43570e09271a8035842ff91
cf542a259dfffa2b96c54f9cf9c65b79a8b94327
2090 F20101111_AAAORS wang_y_Page_019.txt
66f67d704684c815e0c2bf50ef03605f
5d5d19451f7ddb9be664170d42fb52870330ae84
2207 F20101111_AAANOQ wang_y_Page_027.txt
6641369da894f02f4e9c4b95f5d8956b
88af68098fd4aa04dc2cc74d6ca1f8bc57649cd2
F20101111_AAANPF wang_y_Page_058.tif
2c811c306a7e4e1e9ccd3be1d63adae2
0698991b3fcda9fa7dad61fa6c34f65609606f0a
2240 F20101111_AAAORT wang_y_Page_020.txt
35b818119348de67d2a9fffb83b796f4
ee8a85a12debf5937e95224fa635871a60765291
85690 F20101111_AAANOR wang_y_Page_059.jpg
e14ad55848e234a7847dd4bb3ce1377c
92018cf96bd2fefe0a71614575792920f8a42e19
1734 F20101111_AAAOSI wang_y_Page_044.txt
35deb6d40b84436f6df7fdc35b5f9d4c
b69f9743d4a5731ffd6d120247d647fe4661b547
F20101111_AAAORU wang_y_Page_025.txt
4d78e03a077184cb5c66750314d8203c
210979d92c8fd2bed011a55df3fb4a6337fbf4e1
5416 F20101111_AAANOS wang_y_Page_005thm.jpg
f00a82a0400aab2340501c57781139bf
4ab7708d0f5051f0353859cf4155fefd6559079c
52334 F20101111_AAANPG wang_y_Page_153.pro
21338a10bc457801e035ba6100c12e04
892aaf0ab1ec37d394bf92d85911b3ab26842a72
599 F20101111_AAAOSJ wang_y_Page_045.txt
a1d5ea89ef97364bef352d45d3b2a366
02288b50694de37654ca925a4358fbe55bcc321f
231 F20101111_AAAORV wang_y_Page_028.txt
786f1bbfc9594b6cb06d5eee697e03f6
870f52c6c1ebd5c658f33560b73dd063d6314571
49490 F20101111_AAANOT wang_y_Page_055.pro
e40cba3f278c580b32c4d1f0d5737557
cd3f10b82425ae3597f96052a072c29ad1608750
84106 F20101111_AAANPH wang_y_Page_112.jpg
e6941bc0debbf9a656caf3ef17c5bfbf
6ae2c4da46fd7ba4f3b271b36a54a5435076bbe1
539 F20101111_AAAOSK wang_y_Page_047.txt
59fe098ed4b71c1afaabab342842f4e8
7915431f59fea50eed6cc4940790da72d1c08a2c
324 F20101111_AAAORW wang_y_Page_029.txt
0c4ba053c27e52868966c23f74697f71
f4f5f34a66fb293883dcab8a56a3f9ce1324c522
27705 F20101111_AAANOU wang_y_Page_023.QC.jpg
e98d9ab6c8136f31876eb616b1a6121b
80568596f7760f6f763dc64b0ed78ac490f01a4a
F20101111_AAANPI wang_y_Page_114.tif
502cc7b5a45e42330af06f0adad63c9b
fe07abeb4fcf9fbe29fc2a9ac59f3d94e6b2be13
2188 F20101111_AAAOTA wang_y_Page_069.txt
078ddfca20c27e581df33e8d7036b955
d7521b6d0ec67e6be5a48aecf11a1cbe67051087
2102 F20101111_AAAOSL wang_y_Page_048.txt
5521c42dbc552f90b05aaa1349a6c3e8
3b8c2d9c6b12c4ea9e47d2706cc92d1684d47372
2334 F20101111_AAAORX wang_y_Page_031.txt
f209fd6373f9850539108e5ac0895045
b851bf096be8d0ecf25a39bf7d4810c607eb6608
F20101111_AAANOV wang_y_Page_116.tif
2dfe3fdb8249e9fa02ce40e26ddb6de0
1706cdccd4502a701574b8d3827141a9fc6a6d88
F20101111_AAANPJ wang_y_Page_124.tif
dd5a3d07319de1095edf524aaad68cd7
7c57f9358c26d94b6ca4b8661ac05991259ad633
F20101111_AAAOTB wang_y_Page_070.txt
3c82857d15d3c56017c3a417e901dc62
0a865c7853ef958429e1f204d22b22bcc1cb4c33
1776 F20101111_AAAOSM wang_y_Page_049.txt
6fbdb65817c3cc06182efdcb17f567d1
a0dfa8a352e42186c64b02e44f3b2465418d505a
F20101111_AAAORY wang_y_Page_032.txt







POINT PROCESS MONTE CARLO FILTERING FOR BRAIN MACHINE INTERFACES


By

YIWEN WANG

















A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2008


© 2008 Yiwen Wang


To my family









ACKNOWLEDGMENTS

My deepest gratitude goes to my advisor, Dr. Jose C. Principe, for his guidance, support,
critiques, and sense of humor. I feel blessed to be one of his students. Inspired by his
suggestions, encouraged by his rigorous comments, relaxed by his jokes, and touched by his
caring, I was drawn into the fabulous world of research. It is Dr. Principe who taught me how to
think as a researcher.

My gratitude also goes to Dr. Justin C. Sanchez, Dr. John G. Harris, and Dr. Dapeng Wu
(members of my committee), and to Antonio Paiva, for their suggestions and help in my
research.

I am also grateful to Aysegul Gunduz, John DiGiovanna, Shalom Darmanjian, Ruijiang Li,
Weifeng Liu, Dr. Rui Yan, Dr. Jianwu Xu, and Dr. Dongming Xu for their incredible support
and caring. I would also like to express my gratitude to Julie Veal, who helped me with my
English writing.

Last but not least, I am indebted to my mother, my father, and my fiancé for their endless
love, support, and strong belief in me. This dissertation is dedicated to them.









TABLE OF CONTENTS

page

ACKNOWLEDGMENTS .................................................................................................................

LIST OF TABLES ...........................................................................................................................7

ABSTRACT ...................................................................................................................................10

CHAPTER

1 INTRODUCTION ....................................................................................................................12

    Description of Brain Machine Interfaces .........................................................................12
    Review of the Approaches in Spike Domain ....................................................................13
        Spike Sorting: Preprocessing Neural Activities ........................................................14
        Spike-Based Association Analysis ............................................................................15
        Spike-Based Modeling ..............................................................................................17
            Encoding analysis ..............................................................................................18
            Decoding algorithms .........................................................................................21
    Outline ..............................................................................................................................25

2 PROBABILISTIC APPROACH FOR POINT PROCESS ......................................................30

    Sequential State Estimation Problem: Pros and Cons ......................................................30
    Review of the Previous Probabilistic Approaches ............................................................31
        Adaptive Algorithms for Point Processes ..................................................................32
        Adaptive Filtering for Point Processes with Gaussian Assumption ..........................33
    Monte Carlo Sequential Estimation for Point Processes ..................................................35
    Simulation of Monte Carlo Sequential Estimation on Neural Spike Train Decoding ......40
    Interpretation ....................................................................................................................44

3 INFORMATION THEORETICAL ANALYSIS OF INSTANTANEOUS MOTOR
  CORTICAL NEURON ENCODING .....................................................................................48

    Experimental Setups ........................................................................................................48
        Data Recording ..........................................................................................................48
        Simulation vs. In Vivo Recordings ...........................................................................50
    Review of Tuning Analysis ..............................................................................................51
        Visual Inspection of a Tuning Neuron ......................................................................55
        Metric for Tuning ......................................................................................................55
            Tuning Depth .....................................................................................................56
            Information Theoretic Tuning Metric ...............................................................57
        Simulated Neural Recordings ...................................................................................59
        In Vivo Neural Recordings .......................................................................................63
    Information Theoretical Neural Encoding .......................................................................64
        Instantaneous Tuning Function in Motor Cortex ......................................................64
        Information Theoretic Delay Estimation ..................................................................69
        Instantaneous vs. Windowed Tuning Curves ...........................................................71
        Instantaneous vs. Windowed Encoding ....................................................................73
    Discussion ........................................................................................................................75

4 BRAIN MACHINE INTERFACES DECODING IN SPIKE DOMAIN ...............................89

    The Monte Carlo Sequential Estimation Framework for BMI Decoding ........................89
    Monte Carlo SE Decoding Results in Spike Domain .......................................................94
    Parameter Study for Monte Carlo SE Decoding in Spike Domain ..................................98
    Synthesis Averaging by Monte Carlo SE Decoding in Spike Domain ..........................100
    Decoding Results Comparison Analysis ........................................................................104
        Decoding by Kalman ..............................................................................................105
        Decoding by Adaptive Point Process .....................................................................106
            Exponential tuning ...........................................................................................106
            Kalman point process ......................................................................................108
        Performance Analysis ............................................................................................109
            Nonlinear & non-Gaussian vs. linear & Gaussian ...........................................110
            Exponential vs. linear vs. LNP in encoding ....................................................113
            Training vs. testing in different segments: nonstationary observation ...........114
            Spike rates vs. point process ...........................................................................115
    Monte Carlo SE Decoding in Spike Domain Using a Neural Subset ............................117
        Neural Subset Selection .........................................................................................118
        Neural Subset vs. Full Ensemble ...........................................................................119

5 CONCLUSIONS AND FUTURE WORK ..........................................................................138

    Conclusions ....................................................................................................................138
    Future Work ...................................................................................................................152

LIST OF REFERENCES ............................................................................................................161

BIOGRAPHICAL SKETCH .......................................................................................................169









LIST OF TABLES


Table page

2-1 Comparison results of all algorithms with different Qk ...........................................................45

3-1 Assignment of the sorted neural activity to the electrodes ......................................................77

3-2 The statistical similarity results comparison ............................................................................79

3-3 The comparison of percentage of Monte Carlo results in monotonically increasing ..............79

4-1 The kinematics reconstructions by Monte Carlo SE for a segment of test data ....................121

4-2 Averaged performance by Monte Carlo SE of the kinematics reconstructions for a
      segment of test data ..............................................................................................................123

4-3 Statistical performance of the kinematics reconstructions using 2 criteria ...........................123

4-4 Results comparing the kinematics reconstructions averaged among Monte Carlo trials
      and synthetic averaging ........................................................................................................126

4-5 Statistical performance of the kinematics reconstructions by Monte Carlo SE and
      synthetic averaging ...............................................................................................................127

4-6 Results comparing the kinematics reconstruction by Kalman PP and Monte Carlo SE
      for a segment of data ............................................................................................................127

4-7 Statistical performance of the kinematics reconstructions by Kalman PP and Monte
      Carlo SE (synthetic averaging) ............................................................................................130

4-8 Statistical performance of the kinematics reconstructions by different encoding
      models ...................................................................................................................................130

4-9 Statistical performance of the kinematics reconstructions by Kalman filter and
      Kalman PP ............................................................................................................................133

4-10 Statistical performance of the kinematics reconstructions by spike rates and by point
      process ..................................................................................................................................133

4-11 Statistical performance of the kinematics reconstructions by neuron subset and full
      ensemble ...............................................................................................................................135

5-1 Results of the kinematics reconstructions by Kalman and dual Kalman for a segment
      of test data ............................................................................................................................159









LIST OF FIGURES


Figure page

1-1 Brain machine interface paradigm ...........................................................................................29

2-1 The desired velocity generated by a triangle wave with Gaussian noise .................................45

2-2 The simulated neuron spike train generated by an exponential tuning function .....................45

2-3 The velocity reconstruction by different algorithms ................................................................46

2-4 p(v_k | ΔN_k) at different times ....................................................................................................46

3-1 The BMI experiment of the 2D target reaching task. The monkey moves a cursor
      (yellow circle) to a randomly placed target (green circle), and is rewarded if the cursor
      intersects the target ................................................................................................................77

3-2 Tuning plot for neuron 72 ........................................................................................................77

3-3 A counterexample of neuron tuning evaluated by tuning depth. The left plot is a tuning
      plot of neuron 72 with tuning depth 1. The right plot is for neuron 80 with tuning
      depth 0.93 ...............................................................................................................................78

3-4 The conditional probability density estimation ........................................................................78

3-5 The average tuning information across trials by different evaluations ...................................79

3-6 Traditional tuning depth for all the neurons computed from three kinematics .......................80

3-7 Information theoretic tuning depth for all the neurons computed from three kinematics,
      plotted individually ................................................................................................................81

3-8 Block diagram of the Linear-Nonlinear-Poisson model ..........................................................82

3-9 Sketch map of the time delay between the neuron spike train (bottom plot) and the
      kinematics response (upper plot) ...........................................................................................82

3-10 The conditional probability density estimation ......................................................................83

3-11 Mutual information as a function of time delay for 5 neurons ..............................................83

3-12 Nonlinearity estimation for neurons ......................................................................................84

3-13 Correlation coefficient between the nonlinearity calculated from windowed kinematics
      and the instantaneous kinematics with optimum delay .........................................................86

3-14 Comparison of encoding results by instantaneous modeling and windowed modeling ........87

3-15 Comparison of encoding similarity by instantaneous modeling and windowed
      modeling across kernel size ...................................................................................................88

4-1 Schematic of relationship between encoding and decoding processes for Monte Carlo
sequential estim ation of point processes.................................. ..................................... 121

4-2 The posterior density of the reconstructed kinematics by Monte Carlo SE ....................122

4-3 The reconstructed kinematics for 2-D reaching task ..................................... ........123

4-4 Linear m odel error using different a ........................................ ......................... 124

4-5 cdf of noise distribution using different density ............. .............................. 125

4-6 Nonlinearity of neuron 72 using different c ............... .......... ................................ 125

4-7 Decoding performances by different x ............................... ................................. 126

4-8 The reconstructed kinematics for a 2-D reaching task..................................................128

4-9 The decoding performance by algorithms in PP for different data sets .........................131

4-10 Threshold setting for sorted information theoretic tuning depths for 185 neurons .........133

4-11 Selected neuron subset (30 neurons) distribution ............................ ....... ........134

4-12 Statistical performance of reconstructed kinematics by different neuron subsets...........136

5-1 The reconstructed kinematics for 2-D reaching task by Kalman and dual Kalman
filte r ...... ..... ........ ....... ......... .. ... .................................................. 1 5 9

5-2 The tracking of the tuning parameters for the 10 most important neurons in dual
K alm an filter ............ .. ....... ......... .......... ...................................160









Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

POINT PROCESS MONTE CARLO FILTERING FOR BRAIN MACHINE INTERFACES

By

Yiwen Wang

August 2008

Chair: Jose C. Principe
Major: Electrical and Computer Engineering

Brain Machine Interface (BMI) design uses linear and nonlinear models to discover the

functional relationship between neural activity and a primate's behavior. The loss of time

resolution contained in spike timing cannot be captured in traditional adaptive filtering

algorithms and might exclude useful information for the generation of movement. More recently,

a Bayesian approach based on the observed spike times modeled as a discrete point process has

been proposed. However, it includes the simplifying assumption of Gaussian distributed state

posterior density, which in general may be too restrictive. We proposed in this dissertation a

Monte Carlo sequential estimation framework as a probabilistic approach to reconstruct the

kinematics directly from the multi-channel neural spike trains. Sample states are generated at

each time step to recursively evaluate the posterior density more accurately. The state estimation

is obtained easily by reconstructing the posterior density with Parzen kernels to obtain its mean

(called collapse). This algorithm is systematically tested in a simulated neural spike train

decoding experiment and then in BMI data. Implementing this algorithm in BMI requires

knowledge of both neuronal representation (encoding) and movement decoding from spike train

activity. Due to the on-line nature of BMIs, an instantaneous encoding estimation is necessary

which is different from the current models using time windows. We investigated an information









theoretic technique to evaluate neuron's tuning functional relationship between the instantaneous

kinematic vector and neural firing in the motor cortex by a parametric linear-nonlinear-Poisson

model. Moreover, mutual information is utilized as a tuning criterion to provide a way to

estimate the optimum time delay between motor cortical activity and the observed kinematics.

More than half (58.38%) of the neurons' instantaneous tuning curves display a correlation

coefficient of 0.9 with those estimated from the temporal kinematic vector.

With the knowledge gained from tuning analysis encapsulated in an observation model,

our proposed Brain Machine Interface becomes a problem of state sequential estimation. The

kinematics is directly reconstructed from the state of the neural spike trains through the

observation model. The posterior density estimated by Monte Carlo sampling modifies the

amplitude of the observed discrete neural spiking events by the probabilistic measurement. To

deal with the intrinsic spike randomness in online modeling, synthetic spike trains are generated

from the intensity function estimated from the neurons and utilized as extra model inputs in an

attempt to decrease the variance in the kinematic predictions. The performance of the Monte

Carlo Sequential Estimation methodology augmented with this synthetic spike input provides

further improved reconstruction. The current methodology assumes a stationary tuning function

of neurons, which might not be true. The effect of tuning function non-stationarity was also

studied by testing the decoding performance on different segments of data. The preliminary results

on tracking the non-stationary tuning function by a dual Kalman structure indicate a promising

avenue for future work.









CHAPTER 1
INTRODUCTION

Description of Brain Machine Interfaces

Brain-Machine Interfaces (BMIs) exploit the spatial and temporal structure of neural

activity to directly control a prosthetic device. The early work in the 1980s by Schmidt [1980],

and Georgopoulos, Schwartz and colleagues [1986], first described the concepts, application and

design of BMI as an engineering interface to modulate the motor system by neural firing

patterns. Two decades later, several research groups have designed experimental paradigms to

implement the ideas for Brain Machine Interfaces [Wessberg et al., 2000; Serruya et al., 2002].

These are illustrated in Figure 1-1.

In this framework [Wessberg et al., 2000; Serruya et al., 2002], neuronal activity (local

field potentials and single unit activity) has been synchronously collected from microelectrode

arrays implanted into multiple cortical areas while animals and humans have performed 3-D or

2-D target-tracking tasks. Several signal-processing approaches have been applied to extract the

functional relationship between the neural recordings and the animal's kinematic trajectories

[Wessberg et al. 2000; Sanchez, et al., 2002b; Kim, et al., 2003; Wu, et al., 2006; Brockwell, et

al., 2004]. The models predict movements and control a prosthetic robot arm or computer to

implement them. Many decoding methodologies use binned spike trains to predict movement

based on linear or nonlinear optimal filters [Wessberg et al. 2000; Sanchez et al. 2002b; Kim et

al., 2003]. These methods avoid the need for explicit knowledge of the neurological dynamic

encoding properties, and standard linear or nonlinear regression is used to fit the relationship

directly into the decoding operation. Yet another methodology can be derived probabilistically

using a state model within a Bayesian formulation [Schwartz, et al. 2001; Wu et al. 2006;

Brockwell et al. 2004]. From a sequence of noisy observations of the neural activity, the









probabilistic approach analyzes and infers the kinematics as a state variable of the neural

dynamical system. The neural tuning property relates the measurement of the neural activity to

the animal's behaviors, and builds up the observation measurement model. Consequently, a

recursive algorithm based on all available statistical information can be used to construct the

posterior probability density function of each kinematic state given the neuron activity at each

time step from the prior density of the state. The prior density in turn becomes the posterior

density of previous time step updated with the discrepancy between an observation model and

the neuron firings. Movements can be recovered probabilistically from the multi-channel neural

recordings by estimating the expectation of the posterior density or by the maximum a posteriori estimate.
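In generic notation, with kinematic state x_k and spike observations ΔN_{1:k}, the recursion just described consists of a Chapman-Kolmogorov prediction step followed by a Bayes update (the symbols here are generic placeholders, not the dissertation's own notation):

```latex
p(x_k \mid \Delta N_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid \Delta N_{1:k-1})\, dx_{k-1}

p(x_k \mid \Delta N_{1:k}) \propto p(\Delta N_k \mid x_k)\, p(x_k \mid \Delta N_{1:k-1})
```

The state estimate at each step is then the expectation of the updated posterior, or its mode (the maximum a posteriori estimate).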

Review of the Approaches in Spike Domain

The mathematical model in Brain Machine Interfaces requires the application of signal

processing techniques to functionally approximate the relationship between neural activity and

kinematics, such as spike sorting and association analysis between neurons and neuron

encoding/decoding algorithms. Adaptive signal processing is a well-established engineering

domain to analyze the temporal evolution of system characteristics [Haykin, 2002]. Traditional

adaptive processing requires continuous measurement of signals using tools such as the Wiener

filter, least squares algorithm, and Kalman filter. Early BMI research frequently employed a

binning process to analyze and develop algorithms to obtain the neural firing rate as a continuous

signal. This binning process conceals the randomness of neural firing behaviors, and the binning

window size is always a concern. In Brain Machine Interfaces, neural activity and plasticity are

characterized by spike trains. The loss of time resolution for "true" neuron activities might

exclude information useful for movement generation. Thus an analysis of the spike domain is

necessary for this specific application of BMI.
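As a concrete illustration of the binning step discussed above (not code from the dissertation; the 100 ms window is an arbitrary example choice), the following sketch converts spike times into spike counts and firing rates per window:

```python
import numpy as np

def bin_spike_train(spike_times, duration, bin_width):
    """Convert spike times (in seconds) into spike counts per bin."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

# Example: five spikes binned with a 100 ms window over a 1 s recording
spikes = [0.012, 0.045, 0.350, 0.360, 0.720]
counts = bin_spike_train(spikes, duration=1.0, bin_width=0.1)
rates = counts / 0.1  # firing rate in spikes/s for each bin
```

Note how the two spikes at 12 ms and 45 ms collapse into a single count of 2: exactly the loss of spike timing resolution that motivates the point process treatment in this dissertation.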









Spike Sorting-Preprocessing Neural Activities

In BMI neurophysiologic recordings, extracellular neural action potentials are recorded

with multiple electrodes representing the simultaneous electrical activity of a neuronal

population. To identify the action potentials of each neuron, the multi-channel data processing

of the spike train data analysis or decoding in BMIs starts with a spike sorting step. Most

commonly, an action potential is detected by imposing a threshold on the amplitude of the

amplified signal, thereby generating a pulse every time an action potential occurs. However, this

method is subject to failure due to noise contamination and spike overlapping, and the results

may not contain a single threshold for all conditions of interest. Previous research introduced

many algorithms to analyze spike shape features and to perform spike sorting by classifying

multiple spike shapes at the same time [Lewicki, 1998]. Clustering provides a simple way to

organize spikes by their shape, but also has an unfortunate trade-off between false positives and

missed spikes. Clustering in Principal Component Analysis space avoids the noise problem and

separates the different spike shapes according to the primary, or more robust, result. Template-

based Bayesian clustering quantifies the certainty of the spike classification by computing the

likelihood of data given a particular class. Fee et al. [1996] developed an approach to choose the

number of classes for Bayesian clustering by guiding the histogram of the interspike intervals.

An optimal filter-based method based on the assumption of accurate estimation of the spike

shapes and noise spectrum [Gozani & Miller, 1994] was also proposed to discriminate the spikes

from each other and the background noise. These methods remain unable to cope with

overlapping spikes. Neural networks, however, showed improved performance by providing

more general decision boundaries [Chandra & Optican, 1997]. Multi-recording of neuron activity

resulted in the ability to discriminate overlapping spikes. Independent Component Analysis

(ICA) was successfully used for multi-channel spike sorting [Makeig et al., 1997; McKeown et









al., 1998]. ICA has a strong assumption that each channel should be regarded as one signal

source and that all sources are mixed linearly. Although a significant body of work has addressed

spike detection/sorting algorithms, the problem is far from solved. The major shortcomings are

(1) assumption of stationary spike shapes across the experiment, which disregards electrode drift;

(2) assumption of stationary background noise; (3) the necessity of proper spike alignment

techniques for overlapping action potentials. The accuracy of the spike detection/sorting

techniques directly affects the prediction results of BMIs, but to what level this occurs is

unknown. Sanchez [2005] showed that the results of linear models using unsorted spike data

differ little from those of sorted spikes in simple movement prediction, but it may affect more

complex movement prediction.
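The clustering-in-PCA-space idea mentioned above can be sketched on synthetic waveforms; the two template shapes, the noise level, and the 2-means initialization below are illustrative assumptions, not any of the cited sorters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic waveform snippets (32 samples each) from two hypothetical units
t = np.linspace(0, 1, 32)
unit_a = np.sin(2 * np.pi * t)
unit_b = 0.4 * np.sin(2 * np.pi * t) + 0.6 * np.cos(2 * np.pi * t)
snippets = np.vstack([unit_a + 0.05 * rng.standard_normal((50, 32)),
                      unit_b + 0.05 * rng.standard_normal((50, 32))])

# PCA: project snippets onto the top-2 principal components
centered = snippets - snippets.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T

# 2-means clustering in PC space (a few Lloyd iterations)
centers = scores[[0, -1]]  # initialize from two snippets at opposite ends
for _ in range(10):
    labels = np.argmin(((scores[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in range(2)])
```

Projecting to principal components before clustering discards much of the noise dimension, which is the robustness advantage the text attributes to this approach.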

Spike-Based Association Analysis

The most common methods for spike train analysis are based on histograms, which require

the assumption of stationary parameters. The association among multi-neural spike trains can be

analyzed with and without neural stimulation. The functional relationship between neural spikes

and local field potentials can also be analyzed based on pre-stimulus patterns.

Brody [1999] proposed the unnormalized cross-correlogram (cross-covariance) to measure

the pair-wise association between two binned spike rates over different time lags, but this

method lacks time resolution. Cross-intensity function [Brillinger, 1992], a similar concept,

measures the spike rate of one neuron when another neuron fires a spike, and it preserves the

temporal resolution.
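The unnormalized cross-correlogram of Brody [1999] can be sketched as follows for two binned spike-count series; the lag convention here (positive lag means the second train follows the first) and the simulated 2-bin delay are choices made for illustration:

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """Unnormalized cross-correlogram (cross-covariance) of two binned series."""
    x, y = x - x.mean(), y - y.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.sum(x[max(0, -l):len(x) - max(0, l)] *
                          y[max(0, l):len(y) - max(0, -l)]) for l in lags])
    return lags, cc

rng = np.random.default_rng(1)
x = rng.poisson(2.0, 500).astype(float)    # binned spike counts, neuron 1
y = np.roll(x, 2) + rng.poisson(0.5, 500)  # neuron 2 follows neuron 1 by 2 bins
lags, cc = cross_covariance(x, y, max_lag=5)
peak_lag = lags[np.argmax(cc)]
```

The peak of cc recovers the 2-bin delay, but any structure finer than the bin width is invisible, which is the loss of time resolution the text points out.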

To quantify the association among more than two neurons in an ensemble (i.e., the

presence of spatiotemporal patterns), two statistical approaches to parameterize these interactions

have been introduced: (1) coefficients of log-linear models, and (2) a Bayesian approach for inferring

the existence or absence of interactions, and an estimation of the strength of those interactions









[Martignon et al., 2000]. A data-mining algorithm, originally developed to analyze the

generation of interictal activity in EEG recordings [Bourien et al., 2005] was also applied to

automatically extract co-activated neurons. This method provided the statistical evidence for the

existence of neuron subsets based on the stationary characteristics of neural activities. The

automatic extraction of neuron subsets needs long data segments in order to be useful. An online

realization has yet to be developed.

Another technique for the association analysis between neurons, appropriate when a

stimulus is present, is the Joint-Peri-Stimulus-Time-Histogram (JPSTH) [Gerstein & Perkel,

1969], which extends the PSTH concept for a single neuron [Abeles, 1982]. The JPSTH is the

joint histogram between two spike trains, and describes the joint pdf of the synchrony when a

stimulus occurs. The computation is based on the null hypothesis that the spike trains are the

realization of independent Poisson-point processes, and as such are independent. The neuron

response to the stimulus is assumed statistically stationary.

The association analysis between spike firings and local field potentials (LFP) also has

been investigated in terms of stimulus. Researchers have described the temporal structure in

LFPs and spikes where negative deflections in LFPs were proposed to reflect excitatory, spike-

causing inputs to neurons near the electrode [Arieli et al., 1995]. The most appropriate feature

detection method explores the correlation between the amplitude modulated (AM) components

of the movement-evoked local field potentials and single-unit activities recorded as stimulus at

the same electrode across all movement trials [Wang et al., 2006a]. The correlation between

pairs of peri-event time histograms (PETH) and movement evoked local field potentials (mEP) at

the same electrode showed high correlation coefficients for some neurons, suggesting that the

extracellular dendritic potentials indicate the level of neuronal output. A critical demonstration of









this relationship was the process of averaging the LFP and single unit activity across the lever

press trials, thus reducing the noise contamination caused by the random realization of

unmodeled brain spontaneous activities. More work is needed toward reducing noise

contamination.

All the above histogram-based methods can be considered empirically as approximations

to the probabilistic density, and information theoretic measures can be introduced into each

method. The information theoretic calculation for spike trains uses milliseconds, the

minimum time scale determined to contain information [Borst & Theunissen, 1999], i.e., the

"limiting spike timing precision." Entropy was proposed to quantify the information carried by

the spike arrival times [Strong et al., 1998]. Mutual information can be used to measure the pair-

wise neural train association, the statistical significance conveyed by the neuron responding to

the stimulus [Nirenberg et al., 2001], and the evaluation of the independence and redundancy

from the nearby cortical neuron recordings [Reich et al., 2001]. The information theoretic

calculation can be performed directly on the neural activity, but the operation needs enough data

to ensure that the histogram-based analysis performs well. The mutual information summarizes

the relationship between multi spike trains and the neural response to a biological stimulus, but

in only a scalar quantity, which does not describe the complicated relationship as well as

modeling does.
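A plug-in (histogram-based) estimate of the mutual information between a binary spike indicator and a discretized stimulus can be sketched as below; as the text notes, such estimators need enough data, and this toy example ignores bias correction entirely:

```python
import numpy as np

def mutual_information(spike, stim_bin):
    """Plug-in mutual information (bits) between two discrete variables."""
    spike, stim_bin = np.asarray(spike), np.asarray(stim_bin)
    mi = 0.0
    for s in np.unique(spike):
        p_s = np.mean(spike == s)
        for b in np.unique(stim_bin):
            p_b = np.mean(stim_bin == b)
            p_sb = np.mean((spike == s) & (stim_bin == b))
            if p_sb > 0:
                mi += p_sb * np.log2(p_sb / (p_s * p_b))
    return mi

stim = np.repeat([0, 1], 500)                   # binned stimulus, two values
mi_dep = mutual_information(stim.copy(), stim)  # spiking follows stimulus exactly
rng = np.random.default_rng(2)
mi_ind = mutual_information(rng.integers(0, 2, 1000), stim)  # independent spiking
```

Perfect dependence yields the full 1 bit of stimulus entropy, while the independent case stays near zero; but as the text emphasizes, this single scalar cannot describe the shape of the relationship the way a model can.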

Spike-Based Modeling

In addition to determining stimulus response association through a statistical analysis of

the neural spike train, researchers also investigated parametric probability modeling using the

likelihood method to estimate point process properties. A good model is an optimal way to

theoretically predict and analyze the underlying dynamics of neural spike generation. A simple

inhomogeneous Poisson process has been used most frequently to model both the simulation and









quantification of the neural activity analysis of a single spike train [Tuckwell 1988; Rieke et al.,

1997; Gabbiani & Koch, 1998; & Reich et al., 1998]. This model is particularly appealing

because it can explicitly describe neuron spiking as a simple analytical Poisson process [Brown

et al., 1998; Zhang et al., 1998]. The inhomogeneous Poisson model cannot, however,

completely describe the neuron behavior with a multimodal interspike interval distribution

[Gabbiani & Koch, 1998; Shadlen & Newsome, 1998]. Non-Poisson spike train probabilistic

models have been studied under the assumption that a neuron fires probabilistically, but the

model depends on the experimental time and the elapsed time since the previous spike [Kass &

Ventura, 2001]. Additionally, dependencies between multi-spike trains were analyzed through

the pair-wise interactions among the ensemble of neurons, where the firing rate in the

inhomogeneous Poisson was modeled as a function of the inhibitory and excitatory interaction

history of nearby neurons [Okatan et al., 2005]. Truccolo et al. [2005] proposed a similar

analysis as a statistical framework, based on the point process likelihood function, to relate the

neuron spike probability to the spiking history, concurrent ensemble activity, and extrinsic

covariates such as stimuli or behavior. All of these parametric modeling methods provided a

coherent framework to understand neural behavior and the base to statistically apply

mathematical models to study the relationship between spike patterns of ensembles of neurons

and an external stimulus or biological response (the encoding), which characterizes the neural

spike activity as a function of the stimulus, and decoding, which infers the biological response

from the neural spikes.
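An inhomogeneous Poisson spike train of the kind used in these models can be simulated by thinning (Lewis-Shedler rejection sampling); the sinusoidal intensity below is an arbitrary example, not a model from the literature cited above:

```python
import numpy as np

def thinning_sample(rate_fn, rate_max, duration, rng):
    """Sample spike times from an inhomogeneous Poisson process by thinning."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)       # candidate from a rate_max process
        if t > duration:
            break
        if rng.uniform() < rate_fn(t) / rate_max:  # accept with prob lambda(t)/rate_max
            spikes.append(t)
    return np.array(spikes)

rng = np.random.default_rng(3)
rate = lambda t: 20.0 + 15.0 * np.sin(2 * np.pi * t)  # example intensity in spikes/s
spikes = thinning_sample(rate, rate_max=35.0, duration=100.0, rng=rng)
```

The expected spike count equals the integral of the intensity (here 2000 over 100 s), which makes thinning a convenient way to generate test data for the decoding experiments described later.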

Encoding analysis

The 'neural code' refers to how a neuron represents behavioral responses or how it responds

to a stimulus. The parameterization of a tuning function requires an understanding of three

interconnected aspects: 1) What is the behavior/stimulus? 2) How does the neuron encode it? 3)









What is the criterion for quantifying the quality of the response? The tuning curve was measured

initially as a cosine curve between the stimulus and the response [Georgopoulos et al., 1989]

using mainly static stimuli to discriminate between the stimuli based on neural responses.

For neurons located in the motor cortex, researchers first developed the static descriptions

of movement-related activity by applying electrical stimuli to motor areas to elicit muscle

contraction [Fritsch & Hitzig, 1870; Leyton & Sherrington, 1917; Schafer, 1900]. Later,

movement direction was correlated with cortical firing in a center-out task where the tuning

function was initially modelled as a cosine curve [Georgopoulos et al. 1982]. The direction of a

cell's peak discharge rate is called its preferred direction. To quantify the degree of tuning, the tuning depth has

been proposed as a metric and it is defined as the difference between the maximum and

minimum values in the firing rates, normalized by the standard deviation of the firing rate

[Carmena et al., 2003, Sanchez et al., 2003]. As a scalar, the tuning depth summarizes the

statistical information contained in the tuning curve to evaluate the neural representation, which

indicates how modulated the cell's firing rate is to the kinematic parameter of interest. However,

this metric has some shortcomings since it can exaggerate the value of tuning depth when the

neuron firing rate standard deviation is close to 0. Additionally, it depends on the binning

window size to calculate the firing rate of the neuron. The tuning depth also relates to the scale

of the behavior/stimulus and makes the analysis not comparable among neurons as we will see.

A more principled metric, allowing comparisons among neurons and among kinematic variables,

is necessary to mathematically evaluate the information encoded by neurons about the

kinematics variables. If this is achieved, the new tuning depth metric can be utilized to

distinguish the neuron's tuning ability in BMI.
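The tuning depth defined above can be computed as in the following sketch; the cosine tuning curve, eight directions, and the single firing-rate standard deviation are illustrative assumptions for the example:

```python
import numpy as np

def tuning_depth(mean_rates, rate_std):
    """(max - min) of directional mean firing rates, normalized by the rate std
    [Carmena et al., 2003]."""
    return (np.max(mean_rates) - np.min(mean_rates)) / rate_std

dirs = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # 8 movement directions
rates = 10.0 + 8.0 * np.cos(dirs - np.pi / 4)        # cosine tuning, preferred dir 45 deg
depth = tuning_depth(rates, rate_std=4.0)            # (18 - 2) / 4 = 4.0
```

The division makes the pathology noted in the text obvious: as rate_std approaches zero the metric diverges regardless of how weakly the cell is actually modulated.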









Besides the scalar description of tuning properties, different models are used to describe

the tuning properties of the neurons parameterized by a few parameters. However, there is no

systematic method to completely characterize how a specific stimulus parameter governs the

subsequent response of a given neuron. Linear decoding, proposed by researchers to model the

stimulus-response function, has been widely used [Moran & Schwartz, 1999]. The linear filter

takes into account the sensitivity of preferred direction, the position and speed of the movement

to represent the firing rate in cortical activity [Roitman et al., 2005].

However, linear encoding captures only a fraction of the overall information transmitted

because the neuron exhibits nonlinear behavior with respect to the input signal. Brown et al.

[2001] used a Gaussian tuning function for the hippocampal pyramidal neurons. Brockwell et al.

[2003] assumed an exponential tuning function for their motor cortical data. These nonlinear

mathematical models are not optimal for dealing with real data because the tuned cells could

have very different tuning properties. Based on the white noise analysis to characterize the neural

light response [Chichilnisky, 2001], Simoncelli and Paninski et al. [2004] proposed a cascading

linear-nonlinear-Poisson model to characterize the neural response with stochastic stimuli. The

spike-triggered average (STA) and the spike-triggered covariance (STC) provided the first linear

filter stage in a polynomial series expansion of the tuning function [Paninski, 2003]. This linear

filter geometrically directs the high dimensional stimulus to where the statistical moments of

spike-triggered ensemble differ most from the raw signals. The nonlinear transformation of the

second stage is estimated by an intuitive nonparametric binning technique [Chichilnisky, 2001]

as the fraction of the two smoothed histograms. This gives a conditional instantaneous firing rate

to the Poisson spike-generating model. The nonlinear stage is then followed by a Poisson

generator. This modeling method assumes that the raw stimulus distribution is spherically









symmetric for STA and Gaussian distributed for STC, and that the generation of spikes depends

only on the recent stimulus and is historically independent of previous spike times. Both STA

and STC fail when the mean or the variance of the spike-triggered ensemble does not differ from

the raw ensemble at the direction of the linear filter. For the information-theoretic metric, mutual

information was proposed to quantify the predictability of the spike [Paninski & Shoham et al.,

2004; Sharpee & Rust et al., 2002]. The multi-linear filters representing the trial directions were

found to carry the most information between spikes and stimuli.
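The STA stage of the cascade can be illustrated on simulated data; the filter k, the exponential nonlinearity, and the per-bin spike generation below are assumptions for the example, under which the STA is proportional to the true filter for a Gaussian white-noise stimulus:

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, n_lags):
    """Average the n_lags stimulus samples preceding each spike."""
    sta = np.zeros(n_lags)
    count = 0
    for t in np.flatnonzero(spikes):
        if t >= n_lags:
            sta += stimulus[t - n_lags:t]
            count += 1
    return sta / max(count, 1)

rng = np.random.default_rng(4)
stim = rng.standard_normal(20000)            # Gaussian white-noise stimulus
k = np.array([0.0, 0.2, 0.5, 1.0, 0.5])      # assumed ground-truth linear filter
drive = np.zeros(len(stim))
for t in range(5, len(stim)):
    drive[t] = k @ stim[t - 5:t]             # linear stage
rate = np.exp(drive - 2.0)                   # exponential nonlinearity
spikes = rng.uniform(size=len(stim)) < np.minimum(rate, 1.0)  # per-bin spiking
sta = spike_triggered_average(stim, spikes, n_lags=5)
```

The recovered sta closely tracks the shape of k, consistent with the text's description of the STA as the first, linear-filter stage of the tuning function.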

The encoding analysis provided a deeper understanding of how neuron spikes respond to a

stimulus. This important mathematical modeling holds promise toward providing analytical

solutions to the underlying mechanism of neuron receptive fields.

Decoding algorithms

In decoding, the biological response is estimated from the neural spike trains. The initial

method, a population vector algorithm, was proposed by Georgopoulos et al. [1986], who

studied the preferred direction of each cell as its tuning property. Using this method, the

movement direction is predicted by a weighted contribution of all cell preferred direction

vectors. The weights are represented as a function of a cell's binned firing rate. The population

vector algorithm demonstrated that effective decoding requires a pre-knowledge of the encoding

models. A co-adaptive movement prediction algorithm based on the population vector method

was developed to track changes in cell tuning properties during brain-controlled movement

[Taylor et al., 2002]. Initially random, the estimate of cell tuning properties is iteratively refined

as a subject attempts to make a series of brain-controlled movements.
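The population vector computation can be sketched as follows; the evenly spaced preferred directions and noiseless cosine tuning are idealizations chosen so that the decoded direction is exact:

```python
import numpy as np

def population_vector(preferred_dirs, rates, baseline):
    """Sum preferred-direction unit vectors weighted by baseline-subtracted rates."""
    w = rates - baseline
    return np.array([np.sum(w * np.cos(preferred_dirs)),
                     np.sum(w * np.sin(preferred_dirs))])

pd = np.linspace(0, 2 * np.pi, 100, endpoint=False)  # idealized preferred directions
true_dir = np.pi / 3
rates = 10.0 + 5.0 * np.cos(pd - true_dir)           # noiseless cosine tuning
v = population_vector(pd, rates, baseline=10.0)
decoded = np.arctan2(v[1], v[0])                     # recovers true_dir
```

As the text notes, this decoder presupposes an encoding model (each cell's preferred direction and baseline); with real, noisy, unevenly distributed cells the decoded angle is only approximate.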

Another decoding methodology uses binned spike trains to predict movement based on

linear or nonlinear optimal filters. This method avoids the neurological dynamic encoding model

of the neural receptive field, and standard linear or nonlinear regression is used to fit the









relationship directly into the decoding operation. The Wiener filter or time delay neural network

(TDNN) was designed to predict 3D hand position using neuronal binned spike rates embedded by a

10-tap delay line [Wessberg et al., 2000]. In addition to this forward model, a recursive

multilayer perceptron (RMLP) model was proposed by Sanchez et al. [2002b] and improved

with better performance using only relevant neuronal activities [Sanchez et al., 2002a].

Subsequently, Kim et al. [2003] proposed the development of switching multiple linear models

combined with a nonlinear network to increase prediction performance in food reaching. Their

regression model performed very well in movement prediction. Although it is difficult to derive

the neurological dynamic properties directly from such models, their weight coefficients offer

yet another viable way to analyze the active properties of neurons.

A bridge is needed to link the performance of the adaptive signal processing methods with

the knowledge from the receptive field neuron dynamics. This symbiosis will greatly improve

the present understanding of decoding algorithms.

The probabilistic method based on the Bayesian formulation estimates the biological

response from the ensemble spike trains. From a sequence of noisy observations of the neural

activity, the probabilistic approach analyzes and infers the response as a state variable of the

neural dynamical system. The neural tuning property relates the measurement of the noisy neural

activity to the stimuli, and builds up the observation measurement model. Probabilistic state

space formulation and information updating depend on the Bayesian approach of incorporating

information from measurements. A recursive algorithm based on all available statistical

information is used to construct the posterior probability density function of the biological

response for each time, and in principle yields the solution to the decoding problem. Movements









can be recovered probabilistically from the multi-channel neural recordings by estimating the

expectation of the posterior density or by the maximum a posteriori estimate.

As a special case, the Kalman filter was applied to BMI that embodied the concepts of

neural receptive field properties [Wu et al., 2006]. The Kalman filter assumes strongly that time-

series neural activities are generated by kinematic stimulus through a linear system, so the tuning

function is a linear filter only. Another strong assumption is the Gaussianity of the posterior

density of the kinematic stimulus given the neural spiking activities at every time step, which

reduces all the richness of the interactions to second order information (the mean and

covariance). These two assumptions may be too restrictive for BMI applications. The particle

filter algorithm was also investigated to recover movement velocities from continuous spike

binned data [Brockwell et al., 2004]. The particle filter can provide state estimation for a

nonlinear system where the tuning function is assumed to be an exponential operation on linear

filtered velocities [Schwartz, 1992].
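A generic bootstrap particle filter of this kind can be sketched on simulated data as follows; the exponential tuning exp(2 + βv), the slope values, the random-walk state model, and all constants are illustrative assumptions, and the posterior mean here is a plain weighted particle average rather than the dissertation's Parzen-based collapse:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated ground truth: a random-walk velocity driving 10 exponentially tuned neurons
T, dt, n_neur, n_part = 300, 0.01, 10, 500
beta = rng.uniform(-2.0, 2.0, n_neur)                     # assumed per-neuron tuning slopes
lam = lambda v: np.exp(2.0 + np.multiply.outer(v, beta))  # intensity exp(2 + beta*v), Hz
v_true = np.cumsum(0.1 * rng.standard_normal(T))
counts = rng.poisson(lam(v_true) * dt)                    # T x n_neur spike counts

# Bootstrap particle filter: propagate, weight by point-process likelihood, resample
particles = np.zeros(n_part)
v_hat = np.zeros(T)
for k in range(T):
    particles += 0.1 * rng.standard_normal(n_part)        # random-walk state model
    L = lam(particles) * dt                               # n_part x n_neur intensities
    logw = (counts[k] * np.log(L) - L).sum(axis=1)        # Poisson log-likelihood per particle
    w = np.exp(logw - logw.max())
    w /= w.sum()
    v_hat[k] = w @ particles                              # posterior mean estimate
    particles = particles[rng.choice(n_part, n_part, p=w)]
```

Because the posterior is represented by weighted samples rather than a mean and covariance, no Gaussian assumption is needed, which is the advantage over the Kalman-type filters discussed above.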

All of the above algorithms, when applied to spike rates, are coarse approaches that lose

spike timing resolution and may exclude rich neural dynamics. The primary reason for this

limitation is that the sequential state estimation is applied normally to continuous value

observations, and cannot be applied directly to discrete point processes. Indeed, when the

observation becomes the spike train point process, only the time instance of the spike event

matters without amplitude. Initially, Diggle, Liang and colleagues [1995] mentioned the

estimation from the point process observations without a specific algorithm. Chan and Ledolter

[1995] proposed a Monte Carlo Expectation-maximization (EM) algorithm using the Markov

Chain sampling technique to calculate the expectation in the E-step of the EM algorithm. This

method later became the theoretical basis for deriving an EM algorithm for a point process recursive









nonlinear filter [Smith & Brown, 2003]. The algorithm combined the inhomogeneous Poisson

model on point process with the fixed interval smoothing algorithm to maximize the expectation

of the complete data log likelihood. In this particular case, the observation process is a point

process from an exponential family and the natural parameter is modeled as a linear function of

the latent process.

A general point process adaptive filtering paradigm was recently proposed [Brown et al.

2001] to probabilistically reconstruct a freely running rat's position from the discrete observation

of the neural firing. This algorithm modeled the neural spike train as an inhomogeneous Poisson

process feeding a kinematic model through a nonlinear tuning function. This approach also

embodies the conceptual Bayesian filtering algorithm to predict the posterior density by a linear

state update equation and revise it with the next observation measurement. More properties of

this algorithm were discussed in Frank et al. [2002], Frank and Stanley et al. [2004], and Suzuki

and Brown [2005]. The point process filter analogue of the Kalman filter, recursive least squares

and the steepest descent algorithms were derived and compared to decode the tuning parameters

and state from the ensemble neural spiking activity [Eden et al., 2004]. In this case, the point

process analogue of the Kalman filter performs best because it adjusts the step size of the state update using the covariance information. However, the method incorrectly assumes that the posterior density of the state vector, given the discrete observation, is always Gaussian distributed. A Monte Carlo sequential estimation algorithm on

point process was proposed as a probabilistic approach to infer the kinematic information

directly from the neural spike train [Wang et al., 2006b]. The posterior density of the kinematic

stimulus, given the neural spike train, was estimated at each time step without the Gaussian









assumptions. The preliminary simulations showed a better velocity reconstruction from the

exponentially tuned neural spike train without imposing a Gaussian assumption.

Using any of these probabilistic approaches to derive kinematic information from neural activity for BMI requires prior knowledge of the neurons' receptive properties. In other words, the estimation of the tuning function between a kinematic stimulus and the neural response, together with good initialization of all the parameters in the algorithm, directly affects the prediction of the primate's movements in BMI. This is because all the probabilistic approaches are based on the Bayesian formulation, which constructs the posterior density at each time step from the prior density of the kinematic state, itself the posterior density of the previous time step. The population vector algorithm hints that an accurate decoding prediction needs the encoding of the neuron tuning property. For the Bayesian approach, knowledge of the prior density, including good initialization of all the parameters and the form of the tuning functions, is also a key step if we want to probabilistically infer an accurate kinematic estimate from the posterior densities.

Outline

We are interested in building an adaptive signal processing framework for Brain Machine Interfaces working directly in the spike domain. The model will include the stochastic timing information of neuron activities, which distinguishes it from conventional methods working on binned spike rates. The Bayesian approach converts the decoding of neural activity required in BMIs into a state estimation problem. The kinematics are described by a dynamic state model and inferred as a state from the multi-neuron spike train observations, which are connected to the state through the neuron tuning function. A good estimate of the state (decoding) depends on a well-educated guess of the tuning property of the neuron (encoding). The schematic is shown in Figure 1-2.









Previous tuning analysis has relied on windowed estimation, which maps kinematic information from a whole segment to a single spike; this is not appropriate when the decoding process tries to infer kinematics online from the spike train. Here we develop an instantaneous model of the tuning properties, which builds a one-to-one mapping from the kinematic state to the neuron spike trains. It is also interesting to compare the instantaneous estimator with the traditional windowed estimator in terms of encoding performance.

We will then implement the Bayesian algorithm to decode the kinematics from spike trains. The non-parametric estimation provides a nonlinear neuron tuning function with no constraints, which goes beyond the Gaussian assumption on the posterior density that is usually made in previous Bayesian approaches. We are interested in lifting this assumption by designing an algorithm based on Monte Carlo sequential estimation on point processes. In this algorithm, the full posterior density is estimated without Gaussian constraints in order to gain better performance on state estimation, which unfortunately comes at the price of higher computational complexity. The trade-off between performance and computational cost will be quantified.

In addition to our interest in the non-Gaussian assumption, we would also like to investigate the stochasticity and the non-stationarity of neuron behavior in terms of decoding performance. Due to experimental constraints, only a few neurons are recorded from the motor cortex. To study the effect of the stochasticity intrinsic in a single neuron's representation of a neural assembly in online modeling, several synthetic spike trains are generated from the intensity function estimated from the neurons and utilized as extra model inputs. The decoding performance is averaged across the realizations in the kinematics domain to reduce the variance relative to the original spike recordings as a single realization. Lastly, the non-stationarity of the neuron behaviors is studied through the decoding performance on different test data segments with a fixed tuning function. Preliminary results show that a dual Kalman filter approach is able to track the tuning function change in the test data set, which indicates that the non-stationarity of the neuron tuning could be overcome by a dual decoding structure.

The outline of the dissertation is the following. In Chapter 2, we review the traditional probabilistic approach for adaptive signal processing as a state estimation problem, followed by our newly proposed Monte Carlo sequential estimation for the point process optimum filtering algorithm. This methodology directly estimates the posterior density of the state given the observations. Sample states are generated at each time step to recursively evaluate the posterior density more accurately. The state estimation is obtained easily by collapse, for example, by smoothing the posterior density with Gaussian kernels to estimate its mean. When tested in a one-channel simulated neuron spike train decoding experiment, our algorithm better reconstructs the velocity compared with the point process adaptive filtering algorithm under the Gaussian assumption. In Chapter 3, we describe the experimental setups for Brain Machine Interfaces and state the differences between the simulation data and real BMI data. The neuron tuning properties are modeled to instantaneously encode the movement information of the experimental primate as the prior knowledge needed by Monte Carlo sequential estimation for BMI. The instantaneous model is also analyzed and compared in detail with the traditional windowed encoding methods. In Chapter 4, the decoding framework for Brain Machine Interfaces is presented directly in the spike domain, followed by kinematics reconstruction results and a performance analysis comparing it to the adaptive filtering algorithm in the spike domain with different encoding models. The results of synthetic averaging to reduce the variance of the kinematics prediction, and the efforts to reduce the computational complexity by selecting a neuron subset in the decoding process, are also presented in Chapter 4. Conclusions and future work, including preliminary results on tracking the non-stationary neuron tuning property by a dual Kalman filter, are described in Chapter 5.














Figure 1-1. Brain machine interface paradigm




Figure 1-2. Schematic of relationship between encoding and decoding processes for BMIs









CHAPTER 2
PROBABILISTIC APPROACH FOR POINT PROCESS

Sequential State Estimation Problem: Pros and Cons

In sequential state estimation, the system state changes over time while a sequence of noisy measurements is observed continuously on the system. The state vector contains all the relevant information to describe the system through a time-series model. Two models are required to analyze and infer the state of a dynamical system: the system model, which describes the evolution of the state with time, and the continuous observation measurement model, which relates the noisy measurements to the state. The probabilistic state space formulation and the updating of information are rooted in the Bayesian approach of incorporating information from measurements. A recursive algorithm based on all available statistical information is used to construct the posterior probability density function of the state for each observation and, in principle, yields the solution to the estimation problem. Adapting the filter is a two-stage process. The first stage, prediction, uses the system model to predict the posterior probability density of the state from one measurement to the next; the second stage, updating, revises the predicted posterior probability density based on the latest measurement of the observation. The Kalman filter exemplifies an analytical solution that embodies this conceptual filtering under the assumptions that the time series is created by a linear system and that the posterior density of the state, given the observation at every step, is Gaussian, hence parameterized only by its mean and covariance.
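The two-stage recursion is easy to state concretely for the linear-Gaussian case. The sketch below is a generic Kalman predict/update step for a continuous-valued observation, not the BMI decoder itself; the model matrices F, Q, H, R are hypothetical placeholders for the system and observation models:

```python
import numpy as np

def kalman_step(x, P, y, F, Q, H, R):
    """One prediction/updating cycle of the Kalman filter.

    x, P : posterior state mean and covariance from the previous step
    y    : new continuous-valued observation
    F, Q : state transition matrix and process-noise covariance (system model)
    H, R : observation matrix and measurement-noise covariance (observation model)
    """
    # Prediction stage: propagate the posterior through the system model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Updating stage: revise the prediction with the latest measurement
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_post = x_pred + K @ (y - H @ x_pred)
    P_post = (np.eye(len(x)) - K @ H) @ P_pred
    return x_post, P_post
```

Because both stages stay Gaussian, the whole posterior is carried by the pair (x, P), which is exactly the economy that the Gaussian assumption buys.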

Sequential state estimation can describe the decoding problem in Brain Machine Interfaces.

Information on the primate's movements can be regarded as the state, which changes over time

through a kinematic dynamic system model. The neuron spike trains functionally encode the

kinematic states, and this can be designed as a tuning function. This tuning function acts as the









observation model in the state sequential estimation problem. It probabilistically models the

randomness of the neuron behaviors and characterizes the nonlinear neuron firing properties with

the preferred kinematic directions, thereby describing the neuron receptive fields from the

neurophysiologic point of view. The parameters of the tuning function can also represent the

state changing slowly over the time, suggesting a possible investigation of the nonstationary

aspects of neuron tuning properties. The Brain Machine Interface then uses the observations of multi-channel neuron spike trains to infer the kinematics as the state. This is challenging in BMI because the channels of neuron spike trains form a multi-dimensional observation driven by a single state vector. A possible assumption is that all the neuron spike trains are generated independently, conditioned on the kinematic information, but this may not be true.

Another problem with this method is that the probabilistic approach is based on the Bayesian

formulation, which constructs the posterior density from the prior recursively. To develop a good

estimation of the states, the information describing how the system works must correspond with

the pre-knowledge of the kinematic dynamics system and the neuron tuning function.

Review of the Previous Probabilistic Approaches

In Chapter 1, we reviewed several probabilistic approaches to decode the neuron activities

that take place during a primate's movement. The probabilistic methods investigated and applied

to BMI by different research groups include the Kalman filter [Wu & Gao et al., 2006], and the

particle filter algorithm [Brockwell & Rojas et al., 2004]. Both of these algorithms employ

concepts of sequential state estimation. The usefulness of the Kalman filter is limited in that it

reduces all the richness of the interactions to second order information (mean and the covariance)

because it assumes the linear tuning property and the Gaussianity of the posterior density of the

movements given the neural spiking activities at every time step. Although the particle filter

provides state estimation for a nonlinear system, the tuning function was directly assumed to be









an exponential operation on linear filtered velocities [Schwartz, 1992]. Both of the algorithms

were applied to continuous spike binned data and cannot be directly adapted to discrete point

processes. A point process adaptive filtering algorithm was recently proposed by Brown et al.

[2001]. In their approach, discrete observations of the neural firing spikes were utilized to probabilistically reconstruct the position of a freely running rat, taken as the state. This approach also reflects the conceptual Bayesian filtering algorithm, predicting the posterior density through a linear state update equation and then revising it with the next observation measurement.

However, given the discrete observation, this method assumes that the posterior density of the

state vector is always Gaussian distributed, which may not be the case. We proposed a

probabilistic filtering algorithm to reconstruct the state from the discrete observation (the spiking event) by generating a sequential set of samples to estimate the distribution of the state posterior density without the Gaussian assumption. The posterior density is recursively propagated and revised by sequential spike observations over time. The state at each time is determined by the maximum a posteriori or the expectation of the posterior density, inferred by collapsing the mixture of Gaussian kernels that estimates the posterior density. The

algorithm will be described in the next section, followed by an illustration of algorithm

performance in a simulated neuron decoding example and a comparison to the probabilistic

velocity reconstruction with Gaussian assumption on posterior density.

Adaptive Algorithms for Point Processes

In this section, we review the design of adaptive filters for point processes under the

Gaussian assumption, and then introduce our method, a Monte Carlo sequential estimation, to

probabilistically reconstruct the state from discrete (spiking) observation events.









Adaptive Filtering for Point Processes with Gaussian Assumption

One can model a point process using a Bayesian approach to estimate the system state by

evaluating the posterior density of the state given the discrete observation [Eden & Frank et al.,

2004]. This framework provides a nonlinear time-series probabilistic model between the state

and the spiking event [Brown et al., 1996].

Given an observation interval (0, T], the number N(t) of events (spikes) can be modeled as a stochastic inhomogeneous Poisson process characterized by its conditional intensity function \lambda(t \mid x(t), \theta(t), H(t)) (i.e., the instantaneous rate of events), defined as

\lambda(t \mid x(t), \theta(t), H(t)) = \lim_{\Delta t \to 0} \frac{\Pr(N(t+\Delta t) - N(t) = 1 \mid x(t), \theta(t), H(t))}{\Delta t} \quad (2-1)

where x(t) is the system state, \theta(t) is the parameter of the adaptive filter, and H(t) is the history of all the states, parameters and discrete observations up to time t. The relationship between the single-parameter Poisson process \lambda, the state x(t), and the parameter \theta(t) is a nonlinear model represented by

\lambda(t \mid x(t), \theta(t)) = f(x(t), \theta(t)) \quad (2-2)

Using the nonlinear function f(\cdot), assumed to be known or specified according to the application, let us consider hereafter the parameter \theta(t) as part of the state vector x(t). Given a binary observation event \Delta N_k over the time interval (t_{k-1}, t_k], the posterior density of the whole state vector x(t) at time t_k can be represented by Bayes' rule as

p(x_k \mid \Delta N_k, H_k) = \frac{p(\Delta N_k \mid x_k, H_k)\, p(x_k \mid H_k)}{p(\Delta N_k \mid H_k)} \quad (2-3)

where p(\Delta N_k \mid x_k, H_k) is the probability of observing spikes in the interval (t_{k-1}, t_k], considering the Poisson process









\Pr(\Delta N_k \mid x_k, H_k) = (\lambda(t_k \mid x_k, H_k)\Delta t)^{\Delta N_k} \exp(-\lambda(t_k \mid x_k, H_k)\Delta t) \quad (2-4)
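For the 1 ms bins used later in this chapter, Equation 2-4 can be evaluated directly. A minimal sketch (the function name is ours), with the conditional intensity passed in as a precomputed value:

```python
import math

def spike_observation_prob(delta_n, lam, delta_t):
    """Pr(dN_k | x_k, H_k) from Equation 2-4: (lambda*dt)^dN * exp(-lambda*dt).

    For a small time step delta_t the bin holds at most one spike,
    so delta_n is 0 or 1.
    """
    return (lam * delta_t) ** delta_n * math.exp(-lam * delta_t)
```

With \lambda\Delta t much smaller than one, this behaves like a Bernoulli trial: the probabilities of zero and one spikes sum to nearly one.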

and p(x_k \mid H_k) is the one-step prediction density given by the Chapman-Kolmogorov equation as

p(x_k \mid H_k) = \int p(x_k \mid x_{k-1}, H_k)\, p(x_{k-1} \mid \Delta N_{k-1}, H_{k-1})\, dx_{k-1} \quad (2-5)

where the state x_k evolves according to the linear relation

x_k = F_k x_{k-1} + \eta_k \quad (2-6)

F_k establishes the dependence on the previous state and \eta_k is zero-mean white noise with covariance Q_k. Substituting Equations 2-4 and 2-5 into 2-3, the posterior density of the state p(x_k \mid \Delta N_k, H_k) can be recursively estimated from the previous one based on all the spike observations.

Assuming the posterior density given by Equation 2-3 and the noise term \eta_k in the state evolution Equation 2-6 are Gaussian distributed, the Chapman-Kolmogorov Equation 2-5 becomes a convolution of two Gaussians, and the estimation of the state at each time step has a closed-form expression given by (see [Eden et al., 2004] for details)

x_{k|k-1} = F_k x_{k-1|k-1} \quad (2-7a)

W_{k|k-1} = F_k W_{k-1|k-1} F_k^T + Q_k \quad (2-7b)

(W_{k|k})^{-1} = (W_{k|k-1})^{-1} + \left[ \left( \frac{\partial \log \lambda}{\partial x_k} \right)^T [\lambda \Delta t_k] \left( \frac{\partial \log \lambda}{\partial x_k} \right) - (\Delta N_k - \lambda \Delta t_k) \frac{\partial^2 \log \lambda}{\partial x_k \partial x_k^T} \right]_{x_{k|k-1}} \quad (2-7c)

x_{k|k} = x_{k|k-1} + W_{k|k} \left[ \left( \frac{\partial \log \lambda}{\partial x_k} \right)^T (\Delta N_k - \lambda \Delta t_k) \right]_{x_{k|k-1}} \quad (2-7d)

The Gaussian assumption was used initially because it allows one to solve Equation 2-5 analytically and therefore yields the closed-form solution of Equation 2-3 given by Equation 2-7.









Although the above set of equations may seem daunting, each can be interpreted quite

easily. First, Equation 2-7a establishes a prediction for the state based on the previous state.

Then, Equations 2-7b and 2-7c are used in Equation 2-7d to correct or refine the previous

estimate, after which the recurrent process is repeated.
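To make the recursion concrete, the update of Equations 2-7a-d can be sketched for a scalar state with an assumed exponential tuning \lambda = \exp(\mu + \beta v), for which \partial \log\lambda / \partial v = \beta and the second-derivative term in Equation 2-7c vanishes. A sketch under those assumptions, not a general implementation:

```python
import math

def pp_filter_step(v, W, dN, F, Q, mu, beta, dt):
    """One iteration of Equations 2-7a-d for a scalar state v, assuming the
    exponential tuning lambda = exp(mu + beta*v), so that dlog(lambda)/dv = beta
    and the second-derivative term in Equation 2-7c vanishes."""
    # Eq. 2-7a,b: one-step prediction of the state mean and variance
    v_pred = F * v
    W_pred = F * W * F + Q
    lam = math.exp(mu + beta * v_pred)
    # Eq. 2-7c: posterior variance under the Gaussian assumption
    W_post = 1.0 / (1.0 / W_pred + beta ** 2 * lam * dt)
    # Eq. 2-7d: correct the prediction with the innovation (dN - lambda*dt)
    v_post = v_pred + W_post * beta * (dN - lam * dt)
    return v_post, W_post
```

A spike (dN = 1) pushes the estimate toward higher intensity, while an empty bin nudges it down by the small amount \lambda \Delta t; the step size is scaled by the posterior variance, which is the adjustable-step-size property noted in Chapter 1.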

Monte Carlo Sequential Estimation for Point Processes

The Gaussian assumption applied to the posterior distribution in the algorithm just

described may not hold in general. Therefore, for discrete observations, a non-parametric approach is developed here that poses no constraints on the form of the posterior

density.

Suppose at time instant k the previous system state is x_{k-1}. Recall that because the parameter \theta was embedded in the state, we only need to estimate the state from the conditional intensity function of Equation 2-1, since the nonlinear relation f(\cdot) is assumed known. Random state samples are generated using Monte Carlo simulations [Carpenter & Clifford et al., 1999] in the neighborhood of the previous state according to Equation 2-6. Then, weighted Parzen windowing [Parzen, 1962] with a Gaussian kernel is used to estimate the posterior density. Due to the linearity of the integral in the Chapman-Kolmogorov equation and the weighted sum of Gaussians centered at the samples, we can still evaluate the integral directly from the samples. The process is repeated recursively for each time instant, propagating the estimate of the posterior density, and the state itself, based on the discrete events over time. Notice that due to the recursive approach, the algorithm depends not only on the previous observation but on the entire path of the spike observation events.

Let \{x_{0:k}^i, w_k^i\}_{i=1}^{N_s} denote a Random Measure [Arulampalam & Maskell et al., 2002] of the posterior density p(x_{0:k} \mid N_{1:k}), where \{x_{0:k}^i, i = 1, \ldots, N_s\} is the set of all state samples up to time k with associated normalized weights \{w_k^i, i = 1, \ldots, N_s\}, and N_s is the number of samples generated at each time index. Then, the posterior density at time k can be approximated by a weighted convolution of the samples with a Gaussian kernel as

p(x_{0:k} \mid N_{1:k}) \approx \sum_{i=1}^{N_s} w_k^i\, k(x_{0:k} - x_{0:k}^i, \sigma) \quad (2-8)

where N_{1:k} represents the spike observation events up to time k modeled by an inhomogeneous Poisson process as described in the previous section, and k(x - \bar{x}, \sigma) is the Gaussian kernel in terms of x with mean \bar{x} and covariance \sigma. By generating samples from a proposed density q(x_{0:k} \mid N_{1:k}) according to the principle of Importance Sampling [Bergman, 1999; Doucet, 1998], which usually assumes dependence on x_{k-1} and N_k only, the weights can be defined by

w_k^i \propto \frac{p(x_{0:k}^i \mid N_{1:k})}{q(x_{0:k}^i \mid N_{1:k})} \quad (2-9)

Here, we assume the importance density obeys the Markov property, such that

q(x_{0:k} \mid N_{1:k}) = q(x_k \mid x_{0:k-1}, N_{1:k})\, q(x_{0:k-1} \mid N_{1:k-1}) = q(x_k \mid x_{k-1}, \Delta N_k)\, q(x_{0:k-1} \mid N_{1:k-1}) \quad (2-10)

At each time iteration, the posterior density p(x_{0:k} \mid N_{1:k}) can be derived and approximated from the posterior density of the previous iteration as Equation 2-11:

p(x_{0:k} \mid N_{1:k}) = \frac{p(\Delta N_k \mid x_{0:k}, N_{1:k-1})\, p(x_{0:k} \mid N_{1:k-1})}{p(\Delta N_k \mid N_{1:k-1})} = \frac{p(\Delta N_k \mid x_k)\, p(x_k \mid x_{k-1})\, p(x_{0:k-1} \mid N_{1:k-1})}{p(\Delta N_k \mid N_{1:k-1})} \propto p(\Delta N_k \mid x_k)\, p(x_k \mid x_{k-1})\, p(x_{0:k-1} \mid N_{1:k-1}) \quad (2-11)

By substituting Equations 2-10 and 2-11 into Equation 2-9, the weight can be updated recursively as Equation 2-12:

w_k^i \propto \frac{p(\Delta N_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)\, p(x_{0:k-1}^i \mid N_{1:k-1})}{q(x_k^i \mid x_{k-1}^i, \Delta N_k)\, q(x_{0:k-1}^i \mid N_{1:k-1})} = w_{k-1}^i\, \frac{p(\Delta N_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)}{q(x_k^i \mid x_{k-1}^i, \Delta N_k)} \quad (2-12)

Usually the importance density q(x_k^i \mid x_{k-1}^i, \Delta N_k) is chosen to be the prior density p(x_k^i \mid x_{k-1}^i), requiring the generation of new samples from p(x_k^i \mid x_{k-1}^i) by Equation 2-6 as a prediction stage.

After the algorithm runs for a few iterations, a phenomenon called degeneracy may arise, where all but one sample has negligible weight [Doucet, 1998], implying that a large computational effort is spent updating samples that contribute almost nothing to the estimate of the posterior density. When significant degeneracy appears, resampling is applied to eliminate the samples with small weights and to concentrate on samples with large weights according to the samples' cdf. In our Monte Carlo sequential estimation of the point process, Sequential Importance Resampling [Gordon & Salmond et al., 1993] is applied at every time index, so that the samples are i.i.d. from the discrete uniform density with weights w_k^i = 1/N_s. The pseudo code of the scheme to resample \{x_k^i, w_k^i\}_{i=1}^{N_s} into \{x_k^{i*}, 1/N_s\}_{i=1}^{N_s} is the following [Arulampalam & Maskell et al., 2002].

* Initialize the cdf: c_1 = 0
* For i = 2 : N_s
  -- construct the cdf: c_i = c_{i-1} + w_k^i
* End For
* Start at the bottom of the cdf: i = 1
* Draw a starting point: u_1 ~ U[0, 1/N_s]
* For j = 1 : N_s
  -- move along the cdf: u_j = u_1 + (j - 1)/N_s
  -- While u_j > c_i
       i = i + 1
  -- End While
  -- Assign sample: x_k^{j*} = x_k^i
  -- Assign weight: w_k^j = 1/N_s
* End For
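The pseudo code above can be written compactly. A sketch using NumPy (the names are ours), where the vectorized `searchsorted` plays the role of the while loop that walks along the cdf:

```python
import numpy as np

def systematic_resample(samples, weights):
    """Sequential Importance Resampling: draw Ns new samples in proportion
    to the normalized weights and return uniform weights 1/Ns."""
    Ns = len(weights)
    w = np.asarray(weights, dtype=float)
    cdf = np.cumsum(w / w.sum())                             # construct the cdf
    u = np.random.uniform(0, 1.0 / Ns) + np.arange(Ns) / Ns  # u_j = u_1 + (j-1)/Ns
    idx = np.searchsorted(cdf, u)                            # "while u_j > c_i: i += 1"
    return np.asarray(samples)[idx], np.full(Ns, 1.0 / Ns)
```

Samples with large weights are duplicated and samples with negligible weights disappear, which is exactly the concentration effect the resampling step is meant to achieve.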

The weights then become proportional to the likelihood of the observation, given by

w_k^i \propto p(\Delta N_k \mid x_k^i) \quad (2-13)

where p(\Delta N_k \mid x_k^i) is defined by Equation 2-4. Using Equations 2-6 and 2-13 and the resampling step, the posterior density of the state x_k, given the whole path of the observed events up to time t_k, can be approximated as

p(x_k \mid N_{1:k}) \approx \sum_{i=1}^{N_s} p(\Delta N_k \mid x_k^i)\, k(x_k - x_k^i, \sigma) \quad (2-14)

Equation 2-14 shows that, given the observation, the posterior density of the current state is modified by the latest probabilistic measurement of the observed spike event p(\Delta N_k \mid x_k), which is the updating stage in adaptive filtering.
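One full prediction/updating iteration (Equations 2-6 and 2-13) can then be sketched as follows. This is a simplified scalar illustration that assumes an exponential tuning \lambda = \exp(\mu + \beta v) as the observation model and uses a plain weighted mean as the state estimate; resampling (described above) and the kernel collapse of Equations 2-15 and 2-16 are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_pointprocess_step(particles, dN, F, Q, mu, beta, dt):
    """One Monte Carlo sequential estimation iteration on a point process:
    predict through the state model (Eq. 2-6), reweight by the spike
    likelihood (Eq. 2-13), and estimate the state by a weighted mean."""
    # Prediction stage: propagate each particle through x_k = F x_{k-1} + noise
    particles = F * particles + rng.normal(0.0, np.sqrt(Q), size=len(particles))
    # Updating stage: weight each particle by p(dN_k | x_k^i) from Eq. 2-4
    lam = np.exp(mu + beta * particles)
    weights = (lam * dt) ** dN * np.exp(-lam * dt)
    weights /= weights.sum()
    # State estimate: expectation of the sampled posterior density
    x_hat = np.sum(weights * particles)
    return particles, weights, x_hat
```

When a spike arrives, particles at velocities with higher intensity gain weight and pull the estimate up; in an empty bin the opposite happens, mirroring the innovation term of the Gaussian-assumption filter.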

Without a closed form for the state estimation, we evaluate the posterior density of the state given the observed spike events, p(x_k \mid N_{1:k}), at every time step and apply two methods to obtain the state estimate. One method is Maximum A Posteriori (MAP), which picks the sample x_k^i with maximum posterior density. The second method uses the expectation of the posterior density as the state estimate. As we smooth the posterior density by convolving with a Gaussian kernel, we can easily obtain the expectation \tilde{x}_k and its error covariance V_k by collapse [Wu & Black et al., 2004]:

\tilde{x}_k = \sum_{i=1}^{N_s} p(\Delta N_k \mid x_k^i)\, x_k^i \quad (2-15)

V_k = \sum_{i=1}^{N_s} p(\Delta N_k \mid x_k^i) \left( \sigma^2 + (x_k^i - \tilde{x}_k)(x_k^i - \tilde{x}_k)^T \right) \quad (2-16)

From Equations 2-15 and 2-16, we can see that without complex computation we can

easily estimate the next state. Hence, the expectation by collapse is simple and elegant.
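The collapse of Equations 2-15 and 2-16 amounts to a weighted mean plus a mixture variance. A scalar sketch, assuming the likelihood weights have already been normalized to sum to one:

```python
import numpy as np

def collapse(particles, weights, kernel_var):
    """Collapse the Gaussian-kernel mixture (Eq. 2-14) into its mean and
    variance (Eqs. 2-15 and 2-16) for a scalar state; the weights are
    assumed normalized to sum to one."""
    w = np.asarray(weights, dtype=float)
    x = np.asarray(particles, dtype=float)
    x_mean = np.sum(w * x)                            # Eq. 2-15
    v = np.sum(w * (kernel_var + (x - x_mean) ** 2))  # Eq. 2-16
    return x_mean, v
```

The kernel variance adds to the spread of the particle locations, so the collapsed variance never underestimates the uncertainty encoded in the smoothed mixture.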

The major drawback of the algorithm is its computational complexity, because the quality of the solution requires many particles \{x_k^i, i = 1, \ldots, N_s\} to approximate the posterior density. Smoothing the particles with kernels as in Equation 2-14 alleviates the problem, in particular when collapsing is utilized, but the computation is still much higher than calculating the mean and covariance of the PDF under a Gaussian assumption.

We have to point out that both approaches assume we know the state model F_k in Equation 2-6 and the observation model f(\cdot) in Equation 2-2, which are actually unknown in real applications. The state model is normally assumed linear and its parameters are obtained

from the data using least squares. The knowledge of the observation model is very important for

decoding (deriving states from observations), because the probabilistic approach based on

Bayesian estimation constructs the posterior density of each state given the spike observation at

each time step from the prior density of the state. The prior density in turn is the posterior density

of the previous time step, updated with the discrepancy between the observation model and the spike

event. The observation model basically quantifies how each neuron encodes the kinematic









variables (encoding), and due to the variability of neural responses it should be carefully

estimated from a training set for the purpose of Monte Carlo decoding models.

Simulation of Monte Carlo Sequential Estimation on Neural Spike Train Decoding

Neurons dynamically change their responses to specific input stimuli patterns through

learning, which has been modeled with the help of receptive fields. Neural decoding can be used

to analyze receptive field plasticity and understand how the neurons learn and adapt by modeling

the tuning function of neuronal responses. In the rat hippocampus, for example, information

about spatial movement can be extracted from neural decoding, such as from the activity of

simultaneously recorded noisy place cells [Mehta & Quirk et al., 2000, O'Keefe & Dostrovsky,

1971] representing the spike-observed events.

In a conceptually simplified motor cortical neural model [Moran & Schwartz, 1999], the

one-dimensional velocity can be reconstructed from the neuron spiking events by the Monte

Carlo sequential estimation algorithm. This algorithm can provide a probabilistic approach to

infer the most probable velocity as one of the components of the state. This decoding simulation

updates the state estimation simultaneously and applies this estimation to reconstruct the signal,

which assumes interdependence between the encoding and decoding, so that the accuracy of the receptive field estimation and the accuracy of the signal reconstruction rely on each other.

Let us first explain how the simulated data was generated. The tuning function of the

receptive field that models the relation between the velocity and the firing rate is assumed

exponential and given by

\lambda(t_k) = \exp(\mu + \beta_k v_k) \quad (2-17)

where \exp(\mu) is the background firing rate without any movement and \beta_k is the modulation in firing rate due to the velocity v_k. In practice in the electrophysiology lab, this function is









unknown. Therefore, an educated guess needs to be made about the functional form, for which

the exponential function is widely utilized.

The desired velocity was generated as a frequency-modulated (chirp) triangle wave with added Gaussian noise (variance 2.5 \times 10^{-5}) at each 1 ms time step, as shown in Figure 2-1. The design of the desired signal enables us to check whether the algorithm can track the linear evolution and the changing frequency of the "movement". The background firing rate \exp(\mu) and the modulation parameter \beta_k are set to 1 and 3, respectively, for the whole simulation time of 60 s. A neuron spike is drawn as a Bernoulli random variable with probability \lambda(t_k)\Delta t within each 1 ms time window [Brown et al., 2002]. A realization of a neuron spike train is shown in Figure 2-2.
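The data-generation procedure above can be sketched end to end. The frequency ramp of the triangle wave below is an illustrative choice (the actual modulation schedule is not specified here); the tuning parameters follow the text, \exp(\mu) = 1 and \beta = 3:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                    # 1 ms time step
t = np.arange(0, 60, dt)      # 60 s of simulation

# Frequency-modulated (chirp) triangle wave plus Gaussian noise as the
# desired velocity; the frequency ramp is an illustrative assumption
phase = np.cumsum((0.1 + 0.01 * t) * dt)           # slowly increasing frequency
velocity = 2 * np.abs(2 * (phase % 1) - 1) - 1     # triangle wave in [-1, 1]
velocity += rng.normal(0, np.sqrt(2.5e-5), len(t))

# Exponential tuning (Eq. 2-17) with exp(mu) = 1 and beta = 3
mu, beta = 0.0, 3.0
lam = np.exp(mu + beta * velocity)

# Draw a spike in each 1 ms bin as a Bernoulli trial with probability lam*dt
spikes = (rng.uniform(size=len(t)) < lam * dt).astype(int)
```

Because \lambda \Delta t stays well below one for these parameters, the Bernoulli draw per bin is a faithful discretization of the inhomogeneous Poisson process.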

With the exponential tuning function operating on the velocity, we can see that when the velocity is negative there are few spikes, while when the velocity is positive many spikes appear. The problem is to obtain from this spike train the desired velocity of Figure 2-1, assuming the Poisson model of Equation 2-17 and one of the sequential estimation techniques discussed.

To implement the Monte Carlo sequential estimation for the point process, we regard both the modulation parameter \beta_k and the velocity v_k as the state x_k = [v_k\ \beta_k]^T. Here we used 100 samples to initialize the velocity v^i and the modulation parameter \beta^i, with a uniform and with a Gaussian distribution, respectively. Note that too many samples would increase the computational complexity dramatically, while an insufficient number of samples would result in a poor description of the non-Gaussian posterior density. The new samples are generated according to the linear state evolution Equation 2-6, where F_k is obtained from the data using least squares for v_k and set to 1 for









\beta_k (implicitly assuming that the modulation parameter does not change very fast). The i.i.d. noise for the velocity state in Equation 2-6 was drawn from the distribution of the error between the true velocity and the linear prediction by F_k. The i.i.d. noise for estimating the modulation parameter \beta_k is approximated by a zero-mean Gaussian distribution with variance Q_k (default 10^{-7}). The kernel size utilized in Equation 2-14 to estimate the maximum of the posterior density (through MAP) obeys Silverman's rule [Silverman, 1981]. Because the spike train is generated

according to the Poisson model, there is stochasticity involved. We therefore generate 10 realizations of the spike train from the same time series of the firing rate, obtained by applying the tuning function of Equation 2-17 to the desired velocity. The averaged performance, evaluated by the NMSE between the desired trajectory and the model output, is shown in Table 2-1 for different values of the covariance matrix of the state generation, Q_k. Notice that the noise variance should be small enough to track the constant \beta_k set in the data. In general, if Q_k is too large, the continuity constraint on the sequential sample generation has little effect. If it is too small, this constraint may become too restrictive and the reconstructed velocity may get stuck in the same position while the real velocity moves away by a distance much larger than Q_k allows.

In order to obtain realistic performance assessments of the different models (maximum a posteriori and collapse), the state estimates v_k, \beta_k for the duration of the trajectory are drawn 10 different times. The best velocity reconstruction is shown in Figure 2-3. The Normalized Mean Square Error (MSE normalized by the power of the desired signal) between the desired trajectory and the model output for adaptive filtering with the Gaussian assumption is 0.3254; the NMSE for sequential estimation by MAP is 0.2352 and by collapse is 0.2140.
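The figure of merit used throughout, the MSE normalized by the power of the desired signal, is straightforward to compute:

```python
import numpy as np

def nmse(desired, estimated):
    """Mean square error normalized by the power of the desired signal."""
    desired = np.asarray(desired, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return np.mean((desired - estimated) ** 2) / np.mean(desired ** 2)
```

A value of 0 means perfect reconstruction, while a value of 1 is what a trivial all-zero prediction would score.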









From Figure 2-3, we can see that compared with the desired velocity (dash-dotted red line),

all the methods obtain close estimation when there are many spikes (i.e., when the velocity is at

the positive peaks of the triangle wave). This is because the high likelihood of spikes

corresponds to the range of the exponential tuning function where the modulation of the high

firing probability is easily distinguished and the posterior density is close to the Gaussian

assumption. However, in the negative peaks of the desired velocity the sequential estimation

algorithm (using collapse for expectation or MAP) performs considerably better. This is

primarily because the modulation of the firing rate is nonlinearly compressed by the exponential

tuning function, leading to non-Gaussian posterior densities, and thus violating the Gaussian

assumption the adaptive filtering method relies on. Although there is nearly no neuronal

representation for negative velocities and therefore both algorithms are inferring the new velocity

solely on the previous state, the non-parametric estimation of the pdf in the sequential estimation

algorithm allows for more accurate inference. As an example in Figure 2-4A, the posterior

density at time 6.247s (when the desired velocity is close to the positive peak) is shown (dotted

pink line) to have a Gaussian-like shape, and all the methods provide similar estimations close to the true value

(red star). In Figure 2-4B, the posterior density at time 35.506s (when the desired velocity is

close to the negative peak) is shown (dotted pink line) to be non-symmetric with two ripples and is

obviously not Gaussian distributed. The adaptive filtering on point process under a Gaussian

assumption provides poor estimation (gray dotted line), not only because of its Gaussian

assumption but also because the algorithm propagates the poor estimation from previous time

resulting in an accumulation of errors. The velocity estimated by the sequential estimation with

collapse denoted by the blue circle is the closest to the desired velocity (red star). Notice also that

in all cases the tracking performance gets progressively worse as the frequency increases. This is









because the state model is fit once from the whole data set by a linear model, which tracks

the velocity state at the average frequency. If a time-variant state model were used on a segment-by-

segment basis, we could expect better reconstructions.

In summary, the Monte Carlo sequential estimation on point processes seems promising to

estimate the state from the discrete spiking events.

Interpretation

Point process adaptive filtering is a two-step Bayesian approach based on the Chapman-

Kolmogorov Equation to estimate parameters from discrete observed events. However, the

Gaussian assumption of posterior density of the state, upon observation, may not accurately

represent state reconstruction due to the less accurate evaluation of posterior density. We present

in this paper a Monte Carlo sequential estimation to modify the amplitude of the observed

discrete events by the probabilistic measurement, posterior density. A sequence of samples is

generated to estimate the posterior density more accurately. Through sequential estimation and

weighted Parzen windowing, we avoid the numerical computation of the integral in the C-K

Equation. By smoothing the posterior density with the Gaussian kernel from Parzen windowing,

we can collapse to easily derive the expectation of the posterior density, leading to a better result

of state estimate than noisy Maximum a posterior. The Monte Carlo estimation shows better

capability to probabilistically estimate the state because it better approximates the posterior

density than does the point process adaptive filtering algorithm with Gaussian assumption.
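The collapse step described above can be illustrated with a minimal sketch (function names and the grid-based MAP search are ours, assuming a one-dimensional state and Gaussian Parzen kernels): smoothing the weighted samples with a Gaussian kernel gives a mixture whose mean has a closed form, while the MAP estimate must search the smoothed density for its peak.

```python
import numpy as np

def collapse_estimate(samples, weights):
    """Expectation of the Parzen-smoothed posterior. With Gaussian
    kernels the mixture mean reduces to the weighted sample mean,
    so the 'collapse' has a closed form."""
    weights = weights / weights.sum()
    return np.sum(weights * samples)

def map_estimate(samples, weights, kernel_size, grid):
    """MAP estimate: evaluate the Gaussian-Parzen posterior on a grid
    and return the location of its maximum."""
    weights = weights / weights.sum()
    dens = np.sum(
        weights[None, :]
        * np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / kernel_size) ** 2),
        axis=1,
    )
    return grid[np.argmax(dens)]
```

With a Gaussian kernel the collapse (expectation) is simply the weighted sample mean, which is one reason it behaves more smoothly than the grid-searched MAP.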




















Figure 2-1. The desired velocity generated by triangle wave with Gaussian noise

Figure 2-2. The simulated neuron spike train generated by an exponential tuning function


Table 2-1. Comparison results (NMSE) of all algorithms with different Qk

Qk      Adaptive filtering of point process   Sequential estimation, collapse   Sequential estimation, MAP
10^-    0.4434                                0.3803                            0.3881
10^-6   0.3940                                0.3575                            0.3709
10^-    0.3583                                0.2956                            0.3252
































Figure 2-3. The velocity reconstruction by different algorithms




Figure 2-4. p(vk | ΔNk) at different times. A) At time 6.247s. B) At time 35.506s












Figure 2-4. Continued






CHAPTER 3
INFORMATION THEORETICAL ANALYSIS OF INSTANTANEOUS MOTOR CORTICAL
NEURON ENCODING

Experimental Setups

In Chapter 2, we presented a Monte Carlo sequential estimation algorithm to reconstruct

the continuous state variable directly from point process observations. In the one-neuron spike

train decoding simulation, this algorithm provided a better estimate of the state recursively

without Gaussian distribution. The Monte Carlo sequential estimation in spike domain is a

promising signal processing tool to decode the continuous kinematics variable directly from

neural spike trains in Brain Machine Interfaces. With this method, spike binning window size is

no longer a concern, as one can directly utilize the spike timing event. The online state

estimation is suitable for real-time BMIs decoding without the desired signal; however, both the

neural activity recording and desired trajectories are required to estimate the neuron tuning

function. The decoding results by Monte Carlo estimation could be different between realizations

because of the random manner in which samples are generated to construct the posterior density.

Data Recording

The Brain-Machine Interface paradigm was designed and implemented in Dr. Miguel

Nicolelis laboratory at Duke University. Chronic, neural ensemble recordings were collected

from the brain of an adult female Rhesus monkey named Aurora, and synchronized with task

behaviors.

Several micro-electrode arrays were chronically implanted in five of the monkey's cortical

neural structures, right dorsolateral premotor area (PMA), right primary motor cortex (MI), right

primary somatosensory cortex (S1), right supplementary motor area (SMA), and the left primary

motor cortex (MI). Each electrode array consisted of up to 128 microwires (30 to 50 μm in

diameter, spaced 300 μm apart), distributed in a 16 × 8 matrix. Each recording site occupied a









total area of 15.7 mm² (5.6 × 2.8 mm) and was capable of recording up to four single cells from

each microwire for a total of 512 neurons (4 × 128) [Sanchez, 2004].

After the surgical procedure, a multi-channel acquisition processor cluster (MAP, Plexon,

Dallas, TX) was used in the experiments to record the neuronal action potentials simultaneously.

Analog waveforms of the action potential were amplified and band pass filtered from 500 Hz to

5 kHz. The spikes of single neurons from each microwire were discriminated based on time-

amplitude discriminators and a principal component analysis (PCA) algorithm [Nicolelis et al.,

1997; Wessberg et al. 2000]. The firing times of each spike were stored. Table 3-1 shows the

assignment of the sorted neural activity to the electrodes for different motor cortical areas [Kim

2005].

The monkey performed a two-dimensional target-reaching task to move the cursor on a

computer screen by controlling a hand-held joystick to reach the target (Figure 3-1). The monkey

was rewarded when the cursor intersected the target. The corresponding position of the joystick

was recorded continuously for an initial 30-min period at a 50 Hz sampling rate, referred to as

the "pole control" period [Carmena & Lebedev et al. 2003].










Simulation vs. In Vivo Recordings

BMI data provides us with 185 neural spike train channels and 2-dimensional movement

trajectories for about 30 minutes. Compared to the one-neuron decoding simulation in Chapter 2,

there are substantial differences.

At first glance, it is striking that the time resolution of the neural spike train is about a

millisecond, while the movement trajectories have a sampling frequency of 50 Hz. The neural spike

trains allow us to more closely observe the true random neural behavior. However, the

millisecond scale also increases the computational complexity. We must bridge the disparity

between the microscopic neural spikes and the macroscopic kinematics.

The tuning function provides a basis on which to build an instantaneous functional

relationship. In the simulation, we simply assume that the tuning function characterizes the

exponentially increasing firing rate conditioned on the velocity. For the real BMI data, is this

tuning function still valid and cogent? As presented in Chapter 2, our Monte Carlo sequential

estimation algorithm works as a probabilistic approach directly in the spike domain. The major

assumption supporting the entire algorithm is that we have enough knowledge of both the system

model and the observation model. This assumption establishes a reliable base to propagate the

posterior density leading to the state estimation at each time iteration. How can we obtain the

knowledge? The work by Georgopoulos and Schwartz et al. [1986] provides some guidance. The

population coding presented in their paper analyzed the individual neural activities tuned broadly

to a particular direction. Based on the weighted contribution of individual neurons toward

the preferred direction across trials, the direction of movement was found to be uniquely predicted. The

principle behind this work is letting the data speak for itself. We gain insight into neural tuning

properties by analyzing the existing neuron and kinematics data. This analysis leads to better

kinematics decoding from neural activities in the future.









Another issue to resolve is dealing with multi-channel neural spike trains when there is only

one neural channel in the simulation. In the real BMI data, how can we account for the

association between channels? In Chapter 1, we reviewed the work done by many researchers in

this field with multiple outcomes. Most of the work focused on the exclusive relationship

between neural activities, such as the correlation between neurons characterized by the neural

firing, or between neuron microscopic spiking and field potentials. With regard to both external

kinematics and neural activities, neural spike trains between channels are usually assumed to be

conditionally independent given the kinematics. In other words, spike generation is determined once the

kinematics and parameters of the neuron tuning are known. We should emphasize that the

assumption of conditional independence does not conflict with the association analysis between

neurons. If the firing rates of two neurons are generated independently through two similar

tuning functions in a certain time period, similar firing patterns are expected during this time

period, and the analysis on the correlation between them is still valid.
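Under this conditional-independence assumption, the joint likelihood of one time step of multi-channel spike observations factorizes into a product over channels (a sum in the log domain). A hedged sketch follows; the function name, the exponential per-channel tuning model, and all parameter names are our illustration, not the dissertation's code:

```python
import numpy as np

def log_joint_likelihood(spikes, velocity, mus, betas, prefs, dt):
    """Log-likelihood of one time step of multi-channel binary spike
    observations given the kinematic state.  Channels are assumed
    conditionally independent given the kinematics, so the joint
    likelihood is a product (a sum in the log domain)."""
    total = 0.0
    for spk, mu, beta, pref in zip(spikes, mus, betas, prefs):
        lam = np.exp(mu + beta * np.dot(velocity, pref))  # exponential tuning
        p_spike = 1.0 - np.exp(-lam * dt)                 # P(spike in a bin of width dt)
        total += np.log(p_spike if spk else 1.0 - p_spike)
    return total
```

Because the log-likelihood is a sum over channels, each channel's evidence can be accumulated independently when updating the posterior.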

Review of Tuning Analysis

The probabilistic approach based on Bayesian estimation constructs the posterior density

of each kinematic state given the spike trains at each time step from the prior density of the state.

The prior density in turn is the posterior density of the previous time step, updated with the

discrepancy between an observation model and the spike train. The observation model linking

the measurement of the noisy neural activity to the kinematics implicitly utilizes the tuning

characteristics of each neuron. In our newly proposed Monte Carlo sequential estimation

algorithm operating directly on point processes [Wang et al., 2006b], the Bayesian approach

analyzes and infers the kinematics as a state variable of the neural dynamical system without the

constraints of linearity and Gaussianity. Accurate modelling of the neuron tuning properties in









the observation model is critical to decode the kinematics by expectation of the posterior density

or by maximum a posteriori.

The tuning, also called the encoding function, mathematically models how a neuron

represents behavioral consequences or how it responds to a stimulus. The parameterization of a

tuning function requires an understanding of three interconnected aspects: 1) What is the

behavior/stimulus? 2) How does the neuron encode it? 3) What is the criterion for quantifying

the quality of the response? For neurons located in the motor cortex, researchers first developed

the static descriptions of movement-related activity by applying electrical stimuli to motor areas

to elicit muscle contraction [Fritsch & Hitzig, 1870; Leyton & Sherrington, 1917; and Schafer

1900]. Later, movement direction was correlated with cortical firing in a center-out task where

the tuning function was initially modelled as a cosine curve [Georgopoulos et al., 1982]. The

direction at which a cell's discharge rate peaks is called the preferred direction. To quantify the degree of tuning, the

tuning depth has been proposed as a metric and it is defined as the difference between the

maximum and minimum values in the firing rates, normalized by the standard deviation of the

firing rate [Carmena et al., 2003; Sanchez et al., 2003]. As a scalar, the tuning depth summarizes

the statistical information contained in the tuning curve to evaluate the neural representation,

which indicates how modulated the cell's firing rate is to the kinematic parameter of interest.

However, this metric has some shortcomings since it can exaggerate the value of tuning depth

when the neuron firing rate standard deviation is close to 0. Additionally, it depends on the

binning window size to calculate the firing rate of the neuron. The tuning depth also relates to the

scale of the behavior/stimulus and makes the analysis not comparable among neurons as we will

see. A more principled metric, allowing comparisons among neurons and among kinematic

variables, is necessary to mathematically evaluate the information encoded by neurons about the









kinematics variables. If this is achieved, the new tuning depth metric can be utilized to

distinguish the neuron's tuning ability in BMI.

In addition to tuning depth, researchers have also proposed a variety of parametric models

to describe the motor representation neurons. Linear relationships from motor cortical discharge

rate to speed and direction have been constructed [Moran & Schwartz, 1999]. The linear filter

took into account the sensitivity of preferred direction, the position and speed of the movement

to represent the firing rate in cortical activity [Roitman et al., 2005]. However, linear encoding

captures only a fraction of the overall information transmitted because the neuron exhibits

nonlinear behavior with respect to the input signal. Brown et al. [2001] used a Gaussian tuning

function for the hippocampal pyramidal neurons. Brockwell et al. [2003] assumed an exponential

tuning function for their motor cortical data. These nonlinear mathematical models are not

optimal for dealing with real data because the tuned cells could have very different tuning

properties. Simoncelli and Paninski et al. [2004] further improved the linear idea and proposed a

Linear-Nonlinear-Poisson (LNP) model to cascade the linear stage with a nonlinear

transformation as the second stage, which gives a conditional instantaneous firing rate to the

Poisson spike generating model at the third stage.

In the LNP model, the position or velocity at all relevant times within a temporal window

was utilized to extract the information between neuronal activity and animal movement

trajectories. During a continuous target tracking task, Paninski et al. [2004b] studied the

temporal dynamics of Ml neurons for position and velocity of hand motion given the firing rate.

The linear filter in the LNP model averages the temporal position or velocity within the window

and so it smoothes the statistical curves on the stimulus distribution and provides the widely

known exponentially increasing nonlinearity that relates neuronal firing rate to the projected









kinematics. Unfortunately, the averaging builds up an N-to-1 temporal mapping between the

kinematic variables (position or velocities) and the neural spikes that negatively impacts our goal

of building sequential estimation algorithms. Indeed, sequential models require inferring the

kinematics from the current neuronal spike times. Therefore, an instantaneous one-

to-one functional tuning relationship between the kinematics and neural activity is needed to

decode the kinematics online and to avoid the error accumulation within the windowed

kinematic vector. Moreover, the analysis of the receptive fields of motor cortex neurons is

different from the stimulus-response analysis in sensory cortices, because there is always a time

delay between the initiation of the neuron spiking and the movement response. This delay must

be taken into consideration in BMI decoding algorithms. The estimation of instantaneous tuning

parameters is more difficult and more prone to errors; therefore, we will have to evaluate how

much of the nonlinearity still holds or changes compared to that of the temporal kinematic vectors.

In the literature, mutual information has been used to differentiate the raw stimulus

ensemble from the spike-triggered stimulus distribution [Simoncelli et al., 2004; Sharpee et al.,

2002], as well as to estimate the minimal number of delay samples in the temporal kinematics

needed to represent the information extracted by the full preferred trajectory of a given cell

[Paninski et al., 2004b]. In this chapter, we also apply an information theoretical analysis, but

on the instantaneous tuning properties of the motor cortical neurons. We propose using

mutual information as a tuning depth metric to analyze the information that neurons in different

cortical areas share with respect to the animal's position, velocity and acceleration. This criterion

is first tested in synthetic data, and then applied to motor cortex data. We elaborate how to build

our instantaneous tuning function of motor cortical neurons for BMIs. The information

theoretical analysis is applied to the projective nonlinear-Poisson encoding analysis to estimate









the causal time delay. The nonlinearity of the instantaneous tuning curves is compared to that

computed from windowed kinematics.

Visual Inspection of a Tuning Neuron

Neurophysiologic evidence suggests that neurons encode the direction of hand movements

with cosine shaped tuning curves [Georgopoulos et al., 1982]. For each neuron, the polar plot of

the neuron activity with regard to a kinematic vector, such as hand position, hand velocity, or

hand acceleration, is investigated to compute the kinematic direction as an angle between 0 and

360 degrees. 45 degree bins are chosen to coarsely classify all the directions into 8 bins. For each

direction, the average neuron firing rate obtained by binning defines the magnitude of the vector

in a polar plot. For a tuned neuron, the average firing rate in each direction is expected to be

quite different. The preferred direction is computed using circular statistics [Jammalamadaka &

SenGupta, 1999] as

circular mean = arg(Σθ rθ e^(iθ))    (3-1)

where rθ is the neuron's average firing rate for angle θ and the sum covers the whole range of angles.
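Equation 3-1 can be computed directly with complex arithmetic. A minimal sketch (the function name and the example rates are ours):

```python
import numpy as np

def preferred_direction(rates, angles):
    """Circular mean of Equation 3-1: the angle of the firing-rate-
    weighted sum of unit vectors, arg(sum_theta r_theta * e^(i*theta))."""
    z = np.sum(np.asarray(rates) * np.exp(1j * np.asarray(angles)))
    return np.angle(z)

# 8 directional bins of 45 degrees, as in the polar-plot analysis
angles = np.deg2rad(np.arange(0, 360, 45))
rates = np.array([1, 2, 8, 2, 1, 1, 1, 1])   # firing peaks toward 90 degrees
pref_deg = np.rad2deg(preferred_direction(rates, angles))   # close to 90
```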

Figure 3-2 shows the polar plot of neuron 72. The direction of the vector on the polar plot

indicates the direction of velocities, and the magnitude of the vector is the average firing rate,

marked as a blue circle, for each direction. The computed circular mean, estimated as the firing

rate weighted direction, is shown as a solid red line on the polar plot. This indicates clearly that

neuron 72 fired most frequently toward the preferred direction.

Metric for Tuning

A metric is necessary to evaluate the neural tuning. A comparative analysis between the

neural firing and the kinematics based on the metric could provide a better understanding of the

neuron receptive field properties. A metric would also present a way to select the tuning neuron









subset that contributes most to movement generation, potentially reducing the decoding

complexity. In this section, we review the previous tuning metric and then compare it to our

newly proposed tuning metric.

Tuning Depth

The metric for evaluating the tuning property of a cell is the tuning depth of the cell's

tuning curve. This quantity is defined as the difference between the maximum and minimum

values in the cellular tuning normalized by the standard deviation of the firing rate [Carmena et

al., 2003; Sanchez, 2004]. The tuning depth is normalized between 0 and 1 through all the

channels, which loses the scale for comparisons among different neurons.

tuning depth = (max(rθ) − min(rθ)) / std(firing rate)    (3-2)

The normalization in Equation 3-2, used to equalize the firing of different cells, can wrongly

evaluate a shallowly tuned neuron as a deeply tuned one when it fires with small variance.

The normalization inaccurately exaggerates the tuning depth when the standard deviation is close

to 0. In fact, the tuning metric to evaluate neuronal participation in kinematics should depend not

only upon the mean firing rate in a certain direction, but also on the distribution of the neural

spike patterns. Normalizing by the firing rate alone may not be the best way to evaluate neuron

tuning.
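A minimal sketch of Equation 3-2 (names ours) makes the weakness concrete: because the numerator and the denominator scale together with the firing rate, a strongly modulated neuron and a nearly flat neuron with tiny variance can receive identical tuning depths.

```python
import numpy as np

def tuning_depth(rates_per_angle):
    """Traditional tuning depth (Equation 3-2): range of the mean firing
    rate over direction bins, normalized by the standard deviation of
    the firing rate."""
    means = np.array([np.mean(r) for r in rates_per_angle])
    all_rates = np.concatenate([np.asarray(r, float) for r in rates_per_angle])
    return (means.max() - means.min()) / np.std(all_rates)

strongly_tuned = tuning_depth([[0, 0], [0, 0], [10, 10], [0, 0]])
nearly_flat = tuning_depth([[1, 1], [1, 1], [1.02, 1.02], [1, 1]])
# Both neurons receive the same tuning depth, because numerator and
# denominator scale together: the metric loses the modulation scale.
```

The second neuron barely modulates its rate at all, yet scores identically to the first, illustrating the normalization problem discussed above.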

A counterexample using tuning depth as the metric is shown in Figure 3-3. Neuron 72 is

plotted on the left, and neuron 80 is on the right. Neuron 72 fires less in directions other than the

preferred one, while neuron 80 barely fires in any direction except the preferred one.

By visually inspecting the plots, we can infer that neuron 80 is more "tuned" than neuron 72.

However, by the tuning depth metric, neuron 80 was assigned a smaller tuning depth, 0.93, than

neuron 72's tuning depth of 1. This may be due to the normalization by the standard deviation of









the firing rate, which inaccurately exaggerates the tuning depth for some neurons with stable

activities (standard deviation close to 0). In fact, the tuning metric to evaluate differences

between neuron reactions to kinematics depends not only upon the mean firing rate in a certain

direction, but also on the distribution of the neural spike patterns. Normalizing by only the firing

rate does not appear to be a very cogent or effective way to evaluate neuron tuning.

Information Theoretic Tuning Metric

The traditional tuning curves do not intrinsically allow us to measure information content.

We have used indirect observational methods such as tuning depth but they are not optimal. An

information theoretic tuning depth as a metric for evaluating neuron instantaneous receptive

properties is based on information theory and would capture much more of the neuronal response

[Paninski et al., 2004b; Wang et al., 2007b]. We define a tuned cell as one that conveys more

information between the stimulus direction angle and its spiking output. If a cell is tuned to a

certain angle, the well-established concept of mutual information [Reza, 1994] can

mathematically account for an information theoretic metric between the neural spikes and

direction angles, which is given by


I(spk; θ) = Σθ p(θ) Σ_spk=0,1 p(spk|θ) log2( p(spk|θ) / p(spk) )    (3-3a)

          = −Σ_spk=0,1 p(spk) log2 p(spk) + Σθ p(θ) Σ_spk=0,1 p(spk|θ) log2 p(spk|θ)    (3-3b)

where p(θ) is the probability density of all the direction angles, which can be easily estimated

by a Parzen window [Parzen 1962]. The direction angles of the kinematic vectors are evaluated

between −π and π. p(spk) can be calculated simply as the percentage of the spike count over

the entire spike train. p(spk|θ) is the conditional probability density of the spike given the

direction angle.









For each neuron, the conditional probability density p(spk|θ) was estimated directly

from the data by an intuitive nonparametric technique [Chichilnisky 2001, Simoncelli et al.

2004], as the ratio of the two kernel-smoothed histograms of the marginal p(θ) and the joint

distribution p(spk = 1, θ). The histogram of the spike-triggered angle is smoothed by a Gaussian

kernel according to Silverman's rule [Silverman, 1981] and normalized to approximate the joint

probability p(spk = 1, θ), depicted as the solid red line in the upper plot of Figure 3-4. In other

words, the direction angle is counted in the corresponding direction

angle bin of the histogram only when there is a spike. Then the conditional probability density p(spk = 1|θ),

depicted as the solid blue line in the bottom plot of Figure 3-4, is approximated by dividing the kernel-

smoothed histogram of p(spk = 1, θ) by the kernel-smoothed histogram of θ (blue dotted line in

the upper plot of Figure 3-4), which is in fact Bayes' rule,


p(spk = 1|θ) = p(spk = 1, θ) / p(θ)    (3-4)

where p(spk = 0|θ) = 1 − p(spk = 1|θ). When p(θ) is 0, p(spk = 1|θ) is set to 0. Note that

because p(spk, θ) is never greater than p(θ), this ratio does not share the same

problem as Equation 3-2.
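The ratio-of-histograms estimate of Equations 3-3 and 3-4 can be sketched as follows (plain histogram bins stand in here for the kernel-smoothed histograms with Silverman's rule used in the dissertation; all names are ours):

```python
import numpy as np

def tuning_information(spikes, angles, n_bins=36):
    """Information-theoretic tuning depth (Equation 3-3): mutual
    information I(spk; theta) between the binary spike observation and
    the movement direction angle.  p(spk=1|theta) is obtained as
    p(spk=1, theta) / p(theta), as in Equation 3-4, and is set to 0
    where p(theta) is 0."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    counts, _ = np.histogram(angles, bins=edges)
    p_theta = counts / counts.sum()
    joint_counts, _ = np.histogram(angles[spikes == 1], bins=edges)
    joint = joint_counts / counts.sum()             # approx p(spk=1, theta)
    p1_given = np.divide(joint, p_theta,
                         out=np.zeros_like(joint), where=p_theta > 0)
    p_spk1 = spikes.mean()
    info = 0.0
    for pt, p1 in zip(p_theta, p1_given):
        for p_cond, p_marg in ((p1, p_spk1), (1.0 - p1, 1.0 - p_spk1)):
            if pt > 0 and p_cond > 0 and p_marg > 0:
                info += pt * p_cond * np.log2(p_cond / p_marg)
    return info

rng = np.random.default_rng(1)
angles = rng.uniform(-np.pi, np.pi, 20000)
# A cell firing mostly near angle 0 vs. a cell firing at random
tuned = ((np.abs(angles) < 0.5) & (rng.random(20000) < 0.8)).astype(int)
untuned = (rng.random(20000) < 0.1).astype(int)
```

A sharply tuned cell yields substantially more bits than an untuned one, whose mutual information stays near zero.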

The traditional computation of rθ in the tuning depth, which is the average firing rate for a

certain angle θ, is actually a rough approximation of Equation 3-3 because

r(θ) = Σ_i=1..M(θ) λi(θ) / M(θ) ≈ #spike(θ) / #(θ) = p(spk, θ) / p(θ) = p(spk|θ)    (3-5)

where M(θ) is the total number of samples at angle θ in the whole data set, equal to #(θ), and

λi(θ) is the firing rate corresponding to sample i of angle θ. #spike(θ) is the total number of









spike counts when the movement angle is θ. The conditional probability density p(spk|θ) can

be regarded as the non-linear functional relationship between instantaneous neuron firing

probability and movement direction. We can see that the traditional tuning depth analysis

actually works only with the difference between the maximum and minimum of the nonlinear

tuning curve, scaled by the binning window. During the experiment the monkey very likely will

not explore all the possible angles equally, so it will produce different prior distributions p(θ).

A uniformly distributed p(θ) provides the ideal estimation for tuning curves. When there is

insufficient data to estimate the accurate shape of p(spk|θ), the traditional tuning depth will

certainly be biased. In the experiment, there is no guarantee of data sufficiency. Its effect

will be tested on synthetic data. The normalization by the standard deviation of the firing rate in

Equation 3-2 raises the concern of binning window size as well. The information theoretical

tuning depth works directly on the spike train. It takes into account not only the spike nature of

the data, as seen from the first term in Equation 3-3b, but also every point of the

nonlinearity p(spk|θ) and the prior distribution p(θ), as shown in the second

term in Equation 3-3b.

Simulated Neural Recordings

We first test our information theoretical criterion on synthetic data using a single random

realization of the spike train. Three sets of 2-dimensional movement kinematics are generated.

The magnitude and the direction of the first dataset are uniformly distributed within the ranges

[0, 1] and [−π, π] respectively. The second dataset has magnitude uniformly distributed while the

direction is Gaussian distributed, centered at 2π/3 with standard deviation 0.1π. The third data

set has Gaussian distributed magnitude centered at 0.7 with standard deviation 0.1, and Gaussian

distributed direction centered at 2π/3 with standard deviation 0.1π. The velocity train is passed

through a LNP model with the assumed nonlinear tuning function in Equation 3-6.

λt = exp(μ + β vt · Dprefer)    (3-6)

where λt is the instantaneous firing probability, μ is the background firing rate, and β represents

the modulation factor toward a certain preferred direction, which is represented by the unit vector

Dprefer. The spike train is generated by an inhomogeneous Poisson spike generator once we

have the knowledge of λt.
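Equation 3-6 followed by an inhomogeneous Poisson generator can be sketched as below (function and parameter names are ours; spiking is approximated per small time bin by a Bernoulli draw with probability 1 − exp(−λ·dt)):

```python
import numpy as np

def lnp_spikes(velocities, mu, beta, pref_angle, dt, rng):
    """LNP spike generation: the exponential tuning of Equation 3-6
    gives the conditional rate, then an inhomogeneous Poisson process
    is approximated by one Bernoulli draw per small bin of width dt."""
    d_pref = np.array([np.cos(pref_angle), np.sin(pref_angle)])  # unit vector
    lam = np.exp(mu + beta * (velocities @ d_pref))              # conditional rate
    return (rng.random(len(lam)) < 1.0 - np.exp(-lam * dt)).astype(int)

rng = np.random.default_rng(0)
v_along = np.tile([1.0, 0.0], (5000, 1))     # movement along preferred direction
v_against = np.tile([-1.0, 0.0], (5000, 1))  # movement against it
n_along = lnp_spikes(v_along, 0.0, 2.0, 0.0, 0.01, rng).sum()
n_against = lnp_spikes(v_against, 0.0, 2.0, 0.0, 0.01, rng).sum()
# Firing is strongly modulated toward the preferred direction.
```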

We generate each velocity dataset with a 100 Hz sampling frequency and 100 sec duration

(10000 samples in total) or 10 sec duration (1000 samples in total) to test the reliability of the

tuning criterion when there are fewer data. The background firing rate μ is set to 0. The preferred

direction is set as π/3. We implemented 10 synthetic neurons distinguished by their modulation

factor β varying from 1 to 10, which implies a monotonically increasing tuning. The first

uniformly distributed data set is supposed to give full perspective of the tuning curve, since it

explores all possible direction angles. The Gaussian distributed direction in the second data set

favors samples in a certain direction. It won't change the information about the tuning curves in

terms of direction angle when compared to the first dataset. The third data set has Gaussian

distributed magnitude centered at 0.7, which means that for a given direction angle the

instantaneous firing probability is higher than for the uniformly distributed magnitude with mean at

0.5. Since randomness is involved in the generation of the velocity and spike trains, we will

evaluate the tuning depth criterion for 100 Monte Carlo trials.

Figure 3-5 shows the average tuning information with standard deviation across 100 Monte

Carlo trials evaluated for 10 neurons with 100 sec duration. The dotted line group is the tuning









information estimated by the traditional tuning depth for all 3 datasets. In order to get a statistical

evaluation across Monte Carlo runs, the traditional tuning depth was not normalized to [0, 1]

for each realization as is normally done for real data. The solid dot group is the tuning information

estimated by the information theoretical analysis for all 3 datasets. Both groups show a higher

information content for each neuron from dataset 3 than from the other 2 datasets, as

expected. However, the 2 lines evaluated from dataset 1 and dataset 2 are grouped much closer,

which indicates that the information theoretical analysis is less biased by the prior distribution

than the traditional tuning depth. Since having more samples at a certain direction angle should not

affect the information content, the information theoretical analysis provides the more sensible estimate.

The tuning criterion is expected to steadily represent the tuning information amount across

different Monte Carlo trials. However, for each neuron directly comparing the standard deviation

through Monte Carlo trials between 2 methods is not fair, since their scales are quite different.

We use the correlation coefficient to measure the similarity of the tuning information curve along the 10

neurons between each trial and the average performance. The statistical similarity results through

100 trials for the 3 datasets evaluated by the 2 methods with both durations are shown in Table 3-2. For

each data set, a pair-wise Student's t-test was performed to see if the results are statistically different

from the traditional tuning depth. The test is performed against the alternative specified by the

left-tail test, CC by tuning depth < CC by information theoretical analysis, rejecting the null

hypothesis at the α = 0.01 significance level. Under the null hypothesis, the probability of observing

a value as extreme or more extreme than the test statistic, as indicated by the p-value, is also shown
in Table 3-2.
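The paired left-tail test above amounts to a short computation; the following Python fragment is an illustrative sketch with synthetic CC values (not the actual simulation data), computing the paired t statistic by hand:

```python
import numpy as np

def paired_left_tail_t(cc_trad, cc_info):
    """Paired t statistic for H0: mean(cc_trad - cc_info) = 0 against the
    left-tail alternative that the traditional CC is smaller."""
    d = np.asarray(cc_trad) - np.asarray(cc_info)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# synthetic per-trial similarity scores for 100 Monte Carlo trials
rng = np.random.default_rng(0)
cc_trad = 0.97 + 0.02 * rng.random(100)    # traditional tuning depth
cc_info = 0.996 + 0.002 * rng.random(100)  # information theoretical analysis
t_stat = paired_left_tail_t(cc_trad, cc_info)
# reject H0 when t_stat falls below the 1% left-tail critical value
# (about -2.36 for 99 degrees of freedom)
```

With the clearly separated synthetic scores above, the statistic is strongly negative, mirroring the decisive rejections reported in Table 3-2.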









For each dataset, the tuning information criterion based on the information theoretical analysis shows a steadier information representation, with higher correlation and smaller standard deviation in terms of similarity to the average tuning information. All the t-tests confirm the statistical performance improvement. In real data analysis, there is no guarantee that we always have sufficient data to estimate the tuning abilities. Note that with the shorter duration (1000 samples), the information theoretical criterion performs better than the traditional one.

To distinguish the 10 neurons, we expect the criterion to accurately rank the neurons monotonically with the modulation factor from 1 to 10, even for a single realization of the spike train. Across the 100 Monte Carlo trials, the monotonicity of the tuning depth along the 10 neurons for the 3 datasets by both methods for the two durations is shown in Table 3-3. For example, among 100 Monte Carlo trials on the 1000 sample simulation, only 7 trials show monotonicity by the traditional tuning depth, while 62 trials show monotonicity by the information theoretical analysis.

Note that the traditional tuning depth shows much poorer monotonicity for all the datasets compared to the information theoretical analysis; it even fails the monotonicity test in dataset 3. This is because the normalization term in the traditional tuning depth (the standard deviation of the firing rate) increases exponentially when both the modulation factor and the mean speed increase. When there is enough data, all the datasets show 100% monotonicity of the tuning information across the 10 neurons evaluated by the information theoretical analysis. Even with insufficient data, the information theoretic tuning again shows a much greater monotonicity percentage than the traditional tuning depth. Thus the information theoretical tuning depth is more reliable for ranking neurons.
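The per-trial monotonicity count behind Table 3-3 reduces to a simple check; a minimal sketch (the trial values below are synthetic, not the simulation data):

```python
import numpy as np

def is_monotonic(vals):
    """True when the tuning values increase strictly along the neuron index."""
    vals = np.asarray(vals)
    return bool(np.all(np.diff(vals) > 0))

# fraction of Monte Carlo trials whose 10-neuron tuning curve is monotonic;
# cumulative sums of positive increments are monotonic by construction
rng = np.random.default_rng(3)
trials = np.cumsum(np.abs(rng.standard_normal((100, 10))) + 0.01, axis=1)
pct_monotonic = 100.0 * np.mean([is_monotonic(t) for t in trials])
```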









In Vivo Neural Recordings

Having tested the reliability of the information theoretical analysis of the tuning information on synthetic data, we now apply this criterion to our BMI data, where the neural activity is processed as binary spike trains sampled at 100 Hz. All the kinematic variables, hand position, velocity and acceleration, are upsampled to be synchronized with the neural spike trains. The traditional tuning depth for all 185 neurons is computed from each of the kinematic variables and normalized into [0, 1] as shown in Figure 3-6. The top plot is the tuning depth computed from position, the middle from velocity, and the bottom from acceleration. The cortical areas where the micro-arrays were placed are also marked in the figure. We can see clearly that most tuned neurons are in the primary motor cortex regardless of which kinematic vectors are used to calculate the tuning depth.

Figure 3-7A shows the information theoretic depth calculated from all 3 kinematic directions for all the neurons. Compared to Figure 3-6, in which the tuning depths are usually normalized to [0, 1] for all kinematics, the mutual information shows clearly that the velocities (the middle plot) convey relatively more tuning information than position or acceleration, as reported in the literature [Paninski et al., 2004a]. Since mutual information is a distance (it is self-normalized), it allows the relative assessment of tuning across different kinematics. For example, we found that neuron 121 is tuned more to position, while neuron 149 is tuned more to velocity. In Figure 3-7A, with the exception of the M1 cortical area, the neuronal information theoretic tuning depths seem almost flat, which could be erroneously interpreted as meaning that these neurons have little or no tuning. Actually, the mutual information is a nonlinear measure, emphasizing the large distances. Due to the large dynamic range of the mutual information, it is preferable to display the results in logarithmic scale. The difference between neurons in the other cortical areas is much more clearly depicted in Figure 3-7B.









Information Theoretical Neural Encoding

This section implements an information theoretical methodology to address instantaneous

neuronal encoding properties. The analysis is based on a statistical procedure for quantifying

how neuronal spike trains directly encode arm kinematics. All of the evaluation is performed

directly with the neural spike times, which preserves the fine time structure of the representation

without determining a rate code and its associated window size commonly chosen by the

experimenter.

Instantaneous Tuning Function in Motor Cortex

The literature contains many different types of tuning functions (i.e., linear, exponential,

Gaussian) [Moran & Schwartz, 1999; Eden & Frank et al., 2004]. These nonlinear mathematical

models are not optimal for dealing with the real data because each neuron very likely has

different tuning properties [Wise et al., 1998]. The accuracy of the tuning function estimation

will directly affect the Bayesian decoding approach and, therefore, the results of the kinematic

estimation in BMIs.

The spike-triggered average (STA) is one of the most commonly used white noise analyses [deBoer & Kuyper, 1968; Marmarelis & Naka, 1972; Chichilnisky, 2001], applicable when the data is uncorrelated. It has been applied, for instance, in the study of auditory neurons [Eggermont et al., 1983], retinal ganglion cells [Sakai & Naka, 1987; Meister et al., 1994], lateral geniculate neurons [Reid & Alonso, 1995], and simple cells in primary visual cortex (V1) [Jones & Palmer, 1987; McLean & Palmer, 1989; DeAngelis et al., 1993]. STA provides an estimate of the first linear term in a polynomial series expansion of the system response function, under the assumptions that the raw stimulus distribution is spherically or elliptically symmetric (a whitening operation is then necessary) and that the raw stimuli and the spike-triggered stimuli have different means. If the system is truly linear, STA provides a complete









characterization. This linear approximation was improved by Simoncelli, Paninski and colleagues [Simoncelli et al., 2004]. By parametric model identification, the nonlinear relationship between the neural spikes and the stimuli is estimated directly from the data, which is more reliable than simply assuming linear or Gaussian dependence. In our sequential estimation BMI studies [Wang et al., 2007b], it provides a very practical way to acquire the prior knowledge (the tuning function) for decoding purposes.

This technique estimates the tuning function by a Linear-Nonlinear-Poisson (LNP) model

[Simoncelli et al., 2004], which is composed of a linear filter followed by a static nonlinearity

then followed by a Poisson model, as shown in Figure 3-8.

The linear filter projects the multi-dimensional kinematic vector onto its weight vector k (representing a direction in space), which produces a scalar value that is converted by a nonlinear function f and applied to the Poisson spike-generating model as the instantaneous conditional firing probability p(spike | k·x) for that particular direction in the high dimensional space. In our work the optimal linear filter actually projects the multi-dimensional kinematic vector x, built from the position, velocity and acceleration in x and y, along the direction where they differ the most from the spike-triggered kinematic vectors. This projection could represent the transformation between kinematics and muscle activation [Todorov, 2000]. The nonlinear function represents the neuron's nonlinear response, which accounts for all of the processing of the spinal cord and deep brain structures to condition the signal for activation operations [Todorov, 2000]. The Poisson model, which encodes the randomness of neural behavior, generates spike trains with an instantaneous firing probability defined by the nonlinear output. This modeling approach assumes that the generation of spikes depends only on the recent stimulus and is historically independent of previous spike times.
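The three LNP stages can be simulated in a few lines. The sketch below is a rough illustration, not the dissertation's implementation: the nonlinearity f and the filter k are arbitrary stand-ins, and a discrete-time Bernoulli draw approximates the Poisson generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def lnp_spikes(x, k, f):
    """LNP forward model: project the kinematics on k, pass the scalar
    through the static nonlinearity f, then draw Bernoulli spikes (a
    discrete-time approximation of the inhomogeneous Poisson generator)."""
    drive = x @ k                        # linear stage: scalar per time bin
    p_spike = f(drive)                   # nonlinear stage: firing probability
    return (rng.uniform(size=p_spike.shape) < p_spike).astype(int)

# stand-in exponential nonlinearity, clipped to a valid probability
f = lambda u: np.clip(0.05 * np.exp(u), 0.0, 1.0)
x = rng.standard_normal((1000, 6))       # rows ~ [px vx ax py vy ay]
k = np.array([0.1, 0.8, 0.1, 0.0, 0.4, 0.0])  # hypothetical preferred direction
spikes = lnp_spikes(x, k, f)             # binary spike train, one bin per row
```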









Previous work [Paninski et al., 2004a, Paninski et al., 2004b] utilized a window in time

approach to build a smoother statistical tuning function from temporal kinematics to

instantaneous neural firing rate. In the encoding stage, the kinematic variable within a window

that embeds temporal information before and after the current neuron firing time is used as a

high dimensional input vector. The linear-nonlinear stage of the LNP model generates a one-

dimensional output as the estimated firing rate for the Poisson stage. However, the sequential

estimation model of our BMI requires just the opposite (i.e., we need to predict a sequence of kinematics from the current neural activity), especially for the neurons in M1. When we infer the kinematics during a certain window length with respect to a particular spike, the state estimation error can accumulate easily as the estimation is recursively propagated into the next time iteration to build the vector during the window. Thus, a one-to-one mapping between the instantaneous kinematics and the neural activities is of paramount importance for online decoding. The other issue is to estimate appropriately the optimal delay in the instantaneous functional mapping. Due to the smaller amount of data, instantaneous decoding is expected to be noisier (fewer data to identify the transfer function), but there are also

possible advantages. Compared to the windowed method of Paninski et al. [2004b],

instantaneous estimation works directly in the dynamic range of the kinematic signals instead of

being affected by all the temporal information embedded within the window. To deal with the

sensitivity issue for neural tuning identification, the method works with the full kinematic vector

containing the instantaneous position, velocity and acceleration to include the information that

each kinematic variable conveys for tuning, which ultimately is what is needed in BMI decoding.

Estimation of the instantaneous encoding depends upon the ability to estimate the

appropriate time delay between motor cortical neuron activity and kinematics [Wu et al., 2006].









Due to the propagation effects of signals in the motor and peripheral nervous system and to

preserve causality, the intended movement is executed after the motor cortical neuron fires

(Figure 3-9).

In the temporal kinematic encoding by LNP models, a window that usually spans 300 msec before and 500 msec after the current neural firing [Paninski et al., 2004b] is used to construct the high dimensional kinematic vector. Although the causal time delay is already taken into account, the temporal kinematic information before the neuron fires actually has no causal relation with respect to the current spike. For the instantaneous kinematic encoding model, the optimum time delay has to be estimated to draw as much information as possible. The instantaneous motor cortical neural activity can be modelled as

    λ_t = f(k · x_{t+lag})                                        (3-7)

    spike_t = Poisson(λ_t)                                        (3-8)

where x_{t+lag} is the instantaneous kinematic vector, defined as [p_x v_x a_x p_y v_y a_y]^T with 2-dimensional information of position, velocity and acceleration at the causal time delay. k is a linear filter, representing the preferred instantaneous direction in the high-dimensional kinematic space. The weight estimation of the linear filter is based on the standard technique of spike-triggered regression.

    k = (E[x_{t+lag} x_{t+lag}^T] + αI)^{-1} E_{x_{t+lag}|spike_t}[x_{t+lag}]      (3-9)

Equation 3-9 represents the least squares solution for the linear adaptive filter, where E[x_{t+lag} x_{t+lag}^T] gives the autocorrelation matrix R of the input vector considering the causal time delay. α is a regularization factor, which avoids ill-conditioning in the inverse. In the experiment, α is chosen to maximize the linear filter performance. From a statistical perspective, E_{x_{t+lag}|spike_t}[x_{t+lag}] mimics the role of the cross-correlation vector P between the input and the binary spike train considering a causal time delay. Therefore, Equation 3-9 reduces to a conditional expectation of the binary spike train (i.e., this linear filter gives the spike-triggered average instantaneous kinematic vector E_{x_{t+lag}|spike_t}[x_{t+lag}] scaled by the decorrelated kinematic data (E[x_{t+lag} x_{t+lag}^T] + αI)^{-1}).
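A minimal numerical sketch of Equation 3-9 follows; it is illustrative only (the synthetic data, the noise level, and the regularization value are made up), but the estimator is the regularized spike-triggered regression named above:

```python
import numpy as np

def spike_triggered_filter(x, spikes, alpha=1e-7):
    """Regularized least-squares filter of Eq. 3-9:
    k = (E[x x^T] + alpha*I)^(-1) * E[x | spike]."""
    R = (x.T @ x) / len(x)                     # autocorrelation matrix R
    sta = x[spikes == 1].mean(axis=0)          # spike-triggered average
    return np.linalg.solve(R + alpha * np.eye(x.shape[1]), sta)

# sanity check on synthetic data with a known preferred direction
rng = np.random.default_rng(1)
x = rng.standard_normal((5000, 6))             # lagged kinematic vectors
k_true = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
spikes = (x @ k_true + 0.5 * rng.standard_normal(5000) > 1.0).astype(int)
k_est = spike_triggered_filter(x, spikes)      # points along k_true
```

Because the synthetic inputs are white, R is close to the identity and the recovered filter aligns with the direction that drove the spikes.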

λ_t is the instantaneous firing rate of an inhomogeneous Poisson spike generator. For the time interval selected for the spike analysis (i.e., the time interval for which a Poisson assumption is valid in the collected data, which has to be experimentally determined), a number is randomly drawn from a normalized uniform distribution (i.e., 0 to 1) and compared with the instantaneous conditional firing probability. If the number is smaller than the probability, then a spike is generated in this time interval. This modelling approach is therefore intrinsically stochastic, which carries implications (large variance) for on-line real time implementations.

f is the nonlinear function estimated by an intuitive nonparametric technique [Chichilnisky 2001; Simoncelli et al., 2004] as the conditional probability density p(spk | k·x) directly from the data. It is the ratio of two kernel-smoothed histograms: the marginal p(k·x) and the joint distribution p(spk, k·x). This is the same procedure described in Figure 3-4; the only difference is that the joint and marginal pdfs are plotted in terms of the filtered kinematics k·x. The histogram of the spike-triggered angle is smoothed by a Gaussian kernel according to Silverman's rule [Silverman, 1981] and normalized to approximate the joint probability p(spk, k·x), depicted as the solid red line in the upper plot of Figure 3-10. In other words, a direction angle is counted in the corresponding direction angle bin of the histogram only when there is a spike. Then the conditional probability density p(spk | k·x), depicted as the line in the bottom plot of Figure 3-10, is obtained by dividing the kernel-smoothed histogram of p(spk, k·x) by the kernel-smoothed histogram of k·x (dotted line in the upper plot of Figure 3-10), which in fact implements Bayes rule,

    p(spk = 1 | k·x) = p(spk = 1, k·x) / p(k·x)                      (3-10)

where p(spk = 0 | k·x) = 1 − p(spk = 1 | k·x). When p(k·x) is 0, p(spk = 1 | k·x) is set to 0.
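The histogram-ratio estimate of Equation 3-10 can be sketched as follows. This is a simplified illustration: the fixed kernel width stands in for Silverman's rule, and the synthetic spike probability is invented for the sanity check:

```python
import numpy as np

def estimate_nonlinearity(proj, spikes, n_bins=30, sigma=1.5):
    """Estimate p(spk=1 | k.x) as the ratio of the kernel-smoothed
    spike-triggered histogram to the kernel-smoothed marginal histogram
    of the projected kinematics (Eq. 3-10)."""
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    h_all, _ = np.histogram(proj, bins=edges)                # ~ p(k.x)
    h_spk, _ = np.histogram(proj[spikes == 1], bins=edges)   # ~ p(spk=1, k.x)
    t = np.arange(-3, 4)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    g /= g.sum()                                             # Gaussian kernel
    h_all = np.convolve(h_all, g, mode="same")
    h_spk = np.convolve(h_spk, g, mode="same")
    cond = np.where(h_all > 0, h_spk / np.maximum(h_all, 1e-12), 0.0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, cond

# synthetic check: firing probability rises linearly with the projection
rng = np.random.default_rng(2)
proj = rng.uniform(-1.0, 1.0, 20000)
spikes = (rng.uniform(size=proj.size) < 0.05 + 0.2 * (proj + 1.0)).astype(int)
centers, cond = estimate_nonlinearity(proj, spikes)
```

Smoothing both histograms with the same kernel before dividing keeps the ratio a valid probability and tames the tails, where (as discussed below) the raw counts are sparsest.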

The peak in the conditional probability of Figure 3-10 is associated with the maximal firing probability, which is linked with specific values of the kinematic variables and produces an increase in the firing rate of the neuron. Likewise, the region of low probability shows a deviation from the spontaneous firing rate of the neuron. These two portions of the curve (the most difficult to estimate well because they are at the tails of the distribution) are responsible for the modulation that is seen in the rasters of the spike train data when observed along with the kinematic variables, and they are fundamental for BMI decoding performance.

Information Theoretic Delay Estimation

The causal time delay can also be estimated by information theoretical analysis. Here, we are interested in the optimum time lag, which extracts the most instantaneous kinematic information corresponding to the neural spike event. The well-established concept of mutual information [Reza, 1994] as a metric for evaluating a neuron's instantaneous receptive properties is based on information theory and captures much more of the neuronal response [Paninski et al., 2004b; Wang et al., 2007b]. We define a tuned cell as a cell that conveys more information between the linear filtered kinematics and its spiking output. If a neuron is tuned to a preferred direction in high-dimensional space, the mutual information between the spike and the delayed









linear filtered kinematic vector is first computed simply as a function of the time lag after a spike, as in Equation 3-11:

    I_{spk, k·x}(lag) = ∫ p(k·x(lag)) Σ_{spk=0,1} p(spk | k·x(lag)) log2 [ p(spk | k·x(lag)) / p(spk) ] d(k·x)      (3-11)

where p(k·x(lag)) is the probability density of the linear filtered kinematics as a function of the time lag, which can easily be estimated by a Parzen window [Parzen, 1962]. p(spk) can be calculated simply as the percentage of spike counts over the entire spike train. p(spk | k·x) is exactly the nonlinear function f in the LNP model.
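Once the densities are discretized, Equation 3-11 reduces to a short computation. The sketch below assumes p(k·x) has already been binned (e.g., from a Parzen estimate) and that f supplies p(spk = 1 | k·x); the two-bin example values are invented:

```python
import numpy as np

def mutual_information_bits(p_x, p_spike_given_x):
    """Eq. 3-11 over binned values of k.x: sum_x p(x) * sum_{spk=0,1}
    p(spk|x) * log2(p(spk|x) / p(spk)), with p(spk) = sum_x p(x) p(spk|x)."""
    p_x = np.asarray(p_x, dtype=float)
    p1 = np.asarray(p_spike_given_x, dtype=float)
    p_spk = float(np.sum(p_x * p1))                  # marginal firing prob.
    mi = 0.0
    for cond, marg in ((p1, p_spk), (1.0 - p1, 1.0 - p_spk)):
        term = np.where(cond > 0,
                        cond * np.log2(np.maximum(cond, 1e-300) / marg),
                        0.0)
        mi += float(np.sum(p_x * term))
    return mi

# a tuned cell (p(spk|x) varies with x) versus an untuned one
mi_tuned = mutual_information_bits([0.5, 0.5], [0.1, 0.9])
mi_flat = mutual_information_bits([0.5, 0.5], [0.4, 0.4])
# delay estimation: re-estimate p_x and p(spk|x) at each candidate lag,
# evaluate this quantity, and keep the lag with the largest value
```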

The time delay with the highest mutual information is assigned as the optimum time lag for each neuron. The kinematics at the optimum time lag carry maximal causal information about the neural spike. In the encoding stage, the 6-dimensional kinematic vectors are first synchronized at the optimum delay for each neuron, then input to the LNP tuning model to generate the estimated firing rates according to Equation 3-7. To test the encoding ability of the instantaneous tuning model, the neuron's firing rate is obtained by smoothing the real spike train with a Gaussian kernel. The correlation coefficient is then calculated between the two firing rates to measure the quality of encoding.
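This evaluation step can be sketched in a few lines; the kernel width in bins below is arbitrary, and the two-spike train is a toy example rather than recorded data:

```python
import numpy as np

def smoothed_rate(spikes, sigma_bins=5):
    """Smooth a binary spike train with a normalized Gaussian kernel to
    obtain a continuous firing-rate estimate."""
    t = np.arange(-3 * sigma_bins, 3 * sigma_bins + 1)
    g = np.exp(-0.5 * (t / sigma_bins) ** 2)
    g /= g.sum()
    return np.convolve(spikes, g, mode="same")

def encoding_cc(rate_a, rate_b):
    """Correlation coefficient used as the quality-of-encoding measure."""
    return np.corrcoef(rate_a, rate_b)[0, 1]

spikes = np.zeros(200)
spikes[50] = spikes[120] = 1.0
rate = smoothed_rate(spikes)             # two smooth bumps, unit mass each
```

Because the kernel is normalized, smoothing preserves the total spike count for spikes away from the edges, so the rate estimate stays comparable across kernel sizes.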

As mentioned in the previous section, the windowed kinematic vector is usually chosen as 300 msec before and 500 msec after the current neural spike, which already takes into account the causal time delay of the motor cortical neurons. We selected a possible delay range from 0 to 500 ms after a neuron spikes to estimate the optimum time delay for our instantaneous tuning function. The regularization factor α in the spike-triggered average stage is experimentally set to 10^-7, and the kernel size used to smooth the histogram of the probability density is set according to Silverman's rule [Silverman, 1981]. For all 185 neurons, the mutual information as a function of









time delay was obtained from 10,000 continuous samples (100 seconds) during movement. The time delay with the highest mutual information was assigned as the best time lag for each neuron. Since neurons in M1 show more tuning information than other cortical areas, here we study the 5 neurons that show the highest tuning: neurons 72, 77, 80, 99, and 108. Figure 3-11 shows the mutual information as a function of time delay after spike occurrence. The best time lags are marked by a cross on each curve, and are 110 ms, 170 ms, 170 ms, 130 ms and 250 ms respectively. It is interesting to observe that not all the neurons have the same time delay, although all of these neurons are in M1. In the subsequent analysis, a different time delay is used for each neuron. The average best time delay over all 185 neurons was 220.108 ms, which is close to the results reported in the literature [Wu et al., 2006].
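Selecting the best lag is then a one-line argmax over the candidate delays; in this sketch the mutual-information curve is a made-up unimodal stand-in for the per-lag values of Equation 3-11:

```python
import numpy as np

lags_ms = np.arange(0, 501, 10)          # candidate causal delays, 0-500 ms
# mi_per_lag would come from Eq. 3-11 evaluated at each lag; here it is
# a synthetic unimodal curve peaking near 170 ms
mi_per_lag = np.exp(-0.5 * ((lags_ms - 170) / 80.0) ** 2)
best_lag = int(lags_ms[int(np.argmax(mi_per_lag))])  # optimum time lag
```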

Instantaneous vs. Windowed Tuning Curves

The windowed encoding approach yields a widely accepted exponentially increasing nonlinear function f after linear projection [Paninski et al., 2004a, Paninski et al., 2004b]. However, for BMIs we are proposing an instantaneous and global (i.e., across kinematic variables) tuning estimation, therefore it is important to compare and evaluate the two tuning methodologies. For each neuron, we chose 7 different window sizes to filter the kinematic vector [p_x v_x a_x p_y v_y a_y]^T and calculated the nonlinearity using the methods described in Figure 3-10. The biggest window size is 300 ms before and 500 ms after the current neural spike, denoted [-300, 500], which has been used in [Paninski et al., 2004b] for motor cortical neuron tuning analysis. Each window then shrinks by 50 ms at the left and right extremes, giving [-250, 450], [-200, 400], ... down to the smallest window [0, 200] ms. Figure 3-12 shows the nonlinearity of the 4 M1 neurons estimated by windowed kinematics with the 7 different window sizes, each plotted in









different colors. The instantaneous nonlinear tuning with optimum delay is emphasized in a thick

red line.

As we can observe from the figures, the tuning curves vary with different window sizes,

particularly in the high tuning region. However, the middle part of the nonlinearity is very stable

across all the window sizes, including the instantaneous estimation. Compared to the windowed

tuning, the instantaneous model produces a smaller dynamic range of projected values (x-axis)

because it works directly in the dynamic range of the kinematics without involving time-embedded information. We chose the Correlation Coefficient (CC) as the criterion to evaluate

the similarity between the nonlinear tuning curves estimated from each windowed kinematics

and the instantaneous one within the range specified by the instantaneous model. Seven

histograms of correlation coefficients are shown in Figure 3-13, where the y-axis shows the

percentage of neurons (out of 185) with a given CC. We can see that 98.92% of neurons have

instantaneous tuning curves with a similarity over 0.9 compared to the one by window size [-

300, 500] ms. More than half (58.38%) of the neurons have a similarity over 0.9 for the [-50,

250] ms window. However, less than half (41.62%) of the neurons have a similarity over 0.9 for

the [0, 200] ms window because this window is not big enough to include the optimum causal

delay, which is on average 220 ms. Since the summation for the same window size (color bar) is

100%, the similarity of the less similar neurons (CC<0.9) is distributed across other CC bins.

Also notice that, among the windowed methods, the one with the smallest window that still includes the optimum time delay is the closest to the instantaneous estimated tuning. The similarity between the windowed and instantaneous methods is rather surprising, and builds confidence that, in spite of its computational simplicity, the instantaneous method appropriately quantifies neural tuning properties.









One possible reason for the differences at both extremes of the tuning curves is insufficient data to provide an accurate estimation there, in particular because of the

division in Equation 3-10. Recall that this is actually the important part of the tuning curve for

BMIs because it is in this portion that the neuron firing shows modulation with the kinematic

variable. In particular, neurons 80 and 99 (as many others) show a large mismatch at the high

firing rate level (right end of the curve). Both neurons demonstrate a lower firing probability in

the instantaneous curve compared to the windowed curves. Neuron 80 also shows a saddle-like

behavior very different from the exponential increase. Therefore these behaviors need to be

further investigated.

Instantaneous vs. Windowed Encoding

Since the ultimate goal of the tuning analysis is to transform spike timing information into the kinematic space, we compare both tuning methods on our experimental data set. We select neuron 80 and neuron 99 to compare the encoding ability of the windowed and the instantaneous tuning models with the real kinematic signals (Figure 3-14A and 3-14B). From

previous studies, these 2 neurons are known to be among the most sensitive neurons for BMI

modeling [Sanchez et al., 2003; Wang et al., 2007b], and they are also amongst the ones that

show the larger mismatch at the high firing probability range (right extreme end of Figure 3-12).

In each plot, the pink bars in the first and second rows represent the neural spike train. The

red dashed line superimposed on the spike train is the firing rate estimation by kernel smoothing. In

the top panel, the blue solid line superimposed on the spike train is the estimated firing rate by

instantaneous tuning, while in the second panel, the green solid line superimposed on the spike

train is the estimated firing rate by windowed tuning with 300 ms before and 500 ms after the

current neuron firing. To check the animal's behavior simultaneously with the neural activity, the









third and fourth panels show the re-scaled 2D position and velocity (blue for x, green for y) after

synchronization at the optimum delay.

We can clearly observe that, for both neurons (Figure 3-14A and Figure 3-14B), the

instantaneous model gives a smoother estimated firing rate than the noisy estimation by the

windowed model. We found that the linear filter outputs in the windowed model are very noisy,

because it is a projection of the high dimensional time-embedded kinematic vector, which

increases the range of the independent variable and so creates larger variability in the spike rates.

Moreover, the over-estimation at high firing rate of the nonlinearity curve leads to the extraneous

large peaks on the green line. As can be expected, since the tuning is higher, there will be more

spikes and so the intensity function estimation is very high and noisier as seen in the green curve.

It is also very interesting to notice that after causal alignment both neurons demonstrate

clear time alignment (negative correlation) between the hand velocity trajectory and the peaks of

firings, which reinforces the evidence for neural kinematic encoding.

To quantify the encoding comparisons, we computed the correlation coefficient between the neuron's firing rate and the estimated firing rates from the windowed and instantaneous models. The kernel size used to smooth the spike train enables the estimation of the CC, but it also affects the results of the similarity measure. Figure 3-15A and B show results comparing the CC for the same 2

neurons vs. different kernel sizes. Correlation coefficients for the instantaneous model are always

greater than the ones by windowed model across kernel sizes. Here we choose to display the

kernel size that maximizes the similarity. For neuron 99, the correlation coefficient between the

instantaneous model and the firing rate is 0.6049, which is greater than 0.4964 for the windowed

model. For neuron 80, the correlation coefficient between the estimated firing rate with the

instantaneous model and the firing rate from real spike train is 0.6393, which is greater than









0.5841 given by the windowed model. Therefore, the instantaneous model shows better encoding

ability.

Discussion

The traditional criterion of estimating tuning depth from windows of data does not seem

the most appropriate in the design of BMIs using sequential estimation algorithms on spike

trains. Here we present instead an information theoretical tuning analysis of instantaneous neural

encoding properties that relate the instantaneous value of the kinematic vector to neural spiking.

The proposed methodology is still based on the LNP model, and an information theoretic

formulation provides a more detailed perspective when compared with the conventional tuning

curve because it statistically quantifies the amount of information between the kinematic vectors and the spikes they trigger. As a direct consequence, it can estimate the optimum time delay

between motor cortex neurons and behavior caused by the propagation effects of signals in the

motor and peripheral nervous system.

The similarities and differences between the windowed and instantaneously evaluated

tuning functions were also analyzed. We conclude that the instantaneous tuning curves for most

of the neurons show over 0.9 correlation coefficients in the central region of the tuning curve,

which unfortunately is not the most important for BMI studies. There are marked differences in

the high tuning region of the curves, both in the dynamic range and in the estimated value. The

windowed model works on a time-embedded vector, which spreads the linear output k·x over a wider range. Since the pdf integral is always 1, the windowed model flattens the marginal distribution p(k·x). In the time segments when the neuron keeps firing, the overlapping windows make the linear filter output k·x change slowly. This results in more spike-triggered samples in a small neighborhood of k·x. Therefore, the estimate of the joint distribution









p(spk, k·x) becomes higher. Both consequences contribute to the overestimation of tuning at the high firing rate part of the windowed nonlinear curve.

The instantaneous model works directly in the dynamic range of the kinematics that is

sensitive only to the corresponding neuron spike timings. It estimates more accurately the firing

probability without distortions from the temporal neighborhood information. However, we create

a vector with all of the kinematics (position, velocity, acceleration) to estimate the tuning from the data better (i.e., with more sensitivity). This has the potential to mix tuning

information for the different kinematics variables and different directions if they are not exactly

the same. When the different kinematic variables display different sensitivities in the input space,

after projection along the filter direction they will peak at different values of k·x in the

nonlinear curve, which then results in the saddle-like feature observed in Figure 3-12. The other

potential shortcoming is that less data is used, so the variability may be higher. However, at this

time one still does not know which tuning curve provides a better estimate for the instantaneous

tuning model required in the encoding and decoding stages of BMIs. Ultimately, the

instantaneous model can produce equivalent or better encoding results compared to existing

techniques. This outcome builds confidence to directly implement the instantaneous tuning

function into the future online decoding work for Brain-Machine Interfaces.










Table 3-1. Assignment of the sorted neural activity to the electrodes
  Monkey                  Right PMA    Right M1      Right S1       Right SMA      Left M1
  Aurora (left handed)    1-66 (66)    67-123 (57)   124-161 (38)   162-180 (19)   181-185 (5)


Figure 3-1. The BMI experiment: a 2D target reaching task. The monkey moves a cursor (yellow circle) to a randomly placed target (green circle), and is rewarded if the cursor intersects the target.



Figure 3-2. Tuning plot for neuron 72



Figure 3-3. A counterexample of neuron tuning evaluated by tuning depth. The left plot is the tuning plot of neuron 72 with tuning depth 1. The right plot is for neuron 80 with tuning depth 0.93.


Figure 3-4. The conditional probability density estimation. The upper plot shows the marginal probability p(θ) and the joint probability p(spk, θ); the lower plot shows the conditional probability p(spk|θ).











Figure 3-5. The average tuning information across Monte Carlo trials for the 10 neurons, by the two evaluation methods. Upper panel: datasets 1-3 evaluated by the traditional tuning depth (dotted lines). Lower panel: datasets 1-3 evaluated by the information theoretical analysis (solid lines).



Table 3-2. The statistical similarity results comparison
  Sample#  Method                            Dataset 1         Dataset 2         Dataset 3
  10^3     Traditional tuning depth          0.9705 ± 0.0186   0.9775 ± 0.0133   0.9911 ± 0.058
           Information theoretical analysis  0.9960 ± 0.0024   0.9964 ± 0.0021   0.9988 ± 0.0008
           t-test (p value)                  1 (9.52x10^-26)   1 (1.37x10^-26)   1 (5.68x10^-24)
  10^4     Traditional tuning depth          0.9976 ± 0.0013   0.9977 ± 0.0014   0.9991 ± 0.0005
           Information theoretical analysis  0.9997 ± 0.0002   0.9996 ± 0.0002   0.9999 ± 0.0001
           t-test (p value)                  1 (1.60x10^-26)   1 (4.57x10^-25)   1 (6.00x10^-9)

Table 3-3. The percentage of Monte Carlo trials showing monotonically increasing tuning information
  Sample#  Method                            Dataset 1   Dataset 2   Dataset 3
  10^3     Traditional tuning depth          7%          3%          0%
           Information theoretical analysis  62%         57%         76%
  10^4     Traditional tuning depth          76%         84%         0%
           Information theoretical analysis  100%        100%        100%















Figure 3-6. Traditional tuning depth for all the neurons computed from three kinematics



Figure 3-7. Information theoretic tuning depth for all the neurons computed from 3 kinematics

plotted individually. A) In regular scale. B) In logarithmic scale








Kinematics → Linear → Nonlinear f → Poisson → Spikes




Figure 3-8. Block diagram of Linear-Nonlinear-Poisson model




Figure 3-9. Sketch map of the time delay between neuron spike train (bottom plot) and the
kinematics response (upper plot)





Figure 3-10. The conditional probability density estimation




Figure 3-11. Mutual information as a function of time delay for 5 neurons




Figure 3-12. Nonlinearity estimation for neurons. A) Neuron 80. B) Neuron 72. C) Neuron 99.
D) Neuron 108




Figure 3-12. Continued













Figure 3-13. Correlation coefficient between the nonlinearity calculated from windowed
kinematics and the instantaneous kinematics with optimum delay













































Figure 3-14. Comparison of encoding results by instantaneous modeling and windowed
modeling. A) Neuron 99. B) Neuron 80















Figure 3-15. Comparison of encoding similarity by instantaneous modeling and windowed

modeling across kernel size. A) Neuron 99. B) Neuron 80











CHAPTER 4
BRAIN MACHINE INTERFACES DECODING IN SPIKE DOMAIN

The Monte Carlo Sequential Estimation Framework for BMI Decoding

We have thus far presented background on the difference between simulation and BMI real

data, and have elaborated on the Monte Carlo sequential estimation algorithm. Based on this

information, we now present a systematic framework for BMI decoding using a probabilistic

approach.

The decoding of Brain Machine Interfaces is intended to infer the primate's movement

from the multi-channel neuron spike trains. The spike times from multiple neurons are the multi-

channel point process observations. The kinematics is the state that needs to be derived from the

point process observation through the tuning function by our Monte Carlo sequential estimation

algorithm. Figure 4-1 provides a schematic of the basic process.

The decoding schematic for BMIs is shown in Figure 4-1 as the right to left arrow. The

signal processing begins by first translating the neuron spike times collected from the real data

into a sequence of 1 (there is a spike) and 0 (no spike). The time interval should be chosen

small enough to guarantee the Poisson hypothesis (i.e., only a few intervals have more than one spike). If

the interval is too small, however, the computational complexity is increased without any

significant improvement in performance. One must also be careful when selecting the kinematic

state (position, velocity, or acceleration) for the decoding model since the actual neuron

encoding is unknown. The analysis presented here will consider a vector state with all three

kinematic variables. The velocity is estimated as the difference between the current and previous

recorded positions, and the acceleration is estimated by first differences from the velocity. For

fine timing resolution, all of the kinematics are interpolated and time synchronized with the

neural spike trains.
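As a concrete sketch of this preprocessing, the binning of spike times into a 0/1 point process and the first-difference estimates of velocity and acceleration might look as follows; the 10 ms bin width comes from the text, while the function names and toy data are illustrative assumptions:

```python
import numpy as np

def bin_spikes(spike_times, duration, bin_width=0.01):
    """Convert spike times (in seconds) into a 0/1 point-process sequence.

    A bin gets 1 if it contains one or more spikes. The bin width should be
    small enough that multi-spike bins are rare (Poisson hypothesis).
    """
    n_bins = int(np.ceil(duration / bin_width))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
    return (counts > 0).astype(int)

def derive_kinematics(position, dt=0.01):
    """Estimate velocity and acceleration by first differences of position."""
    velocity = np.diff(position, prepend=position[0]) / dt
    acceleration = np.diff(velocity, prepend=velocity[0]) / dt
    return velocity, acceleration
```

In a real recording the positions would also be interpolated onto the same time grid before differencing, so that kinematics and spike trains stay time synchronized.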









It is interesting to note that in black box modeling, the motor BMI is posed as a decoding

problem (i.e., a transformation from motor neurons to behavior). However, when we use the

Bayesian sequential estimation, decoding is insufficient to solve the modeling problem. In order

to implement decoding it is important to also model how each neuron encodes movement, which

is exactly the observation model f(·) in the tuning analysis in Chapter 3. Therefore, one sees that

generative models do in fact require more information about the task and are therefore an

opportunity to investigate further neural functionality. Here we use the instantaneous motor

cortical neural activity modeled in Chapter 3 as

λt = f(k · xt+lag) (4-1)

spiket ~ Poisson(λt) (4-2)


where, as before, xt+lag is the instantaneous kinematics vector defined as


[px vx ax py vy ay 1]'t+lag with 2-dimensional information of position, velocity, acceleration

and bias with a causal time delay depending on the data. For BMI, the kinematic vector in the Linear-

Nonlinear-Poisson model must be read from the experiment for every spike occurrence since the

task is dynamic, taking into consideration the causal delay between neural firings and kinematic

outputs [Wang et al., 2007b]. The linear filter projects the kinematics vector x into its weight

vector k (representing a preferred direction in space), which produces a scalar value that is

converted by a nonlinear function f and applied to the Poisson model as the instantaneous

conditional firing probability λt for that particular direction in space, p(spike | k · x). The filter

weights are obtained optimally by least squares as k = (E[xt+lag xt+lag'] + aI)^-1 E[xt+lag | spiket],

where E[xt+lag | spiket] is the conditional expectation of the kinematic data given the spikes. The









parameter a is a regularization parameter to properly condition the inverse. The optimal linear

filter actually projects the multi-dimensional kinematic vectors along the direction where they

differ the most from the spike triggered kinematic vectors.

The nonlinear encoding function f for each neuron was estimated using an intuitive

nonparametric technique [Chichilnisky, 2001; Simoncelli et al., 2004]. Given the linear filter

vector k, we drew the histogram of all the kinematics vectors filtered by k, and smoothed the

histogram by convolving with a Gaussian kernel. The same procedure was repeated to draw the

smoothed histogram for the outputs of the spike-triggered velocity vectors filtered by k. The

nonlinear function f, which gives the conditional instantaneous firing rate to the Poisson spike-

generating model, was then estimated as the ratio of the two smoothed histograms. Since f is

estimated from real data by the nonparametric technique, it provides more accurate nonlinear

properties than just assuming the exponential or Gaussian function. In practice, it can be

implemented as a look up table, for its evaluation in testing as

p(spike | k · xtest) = Σt κ(k · xtest − k · xspike,training) / Σt κ(k · xtest − k · xtraining) (4-3)


where κ is the Gaussian kernel, xtest is a possible sample we generate at time t in the test data,

xtraining is one sample of the velocity vector in the training data, and xspike,training is the corresponding

spike-triggered sample. In our calculation, we approximate the nonlinearity for each neuron by a

2-layer MLP with 10 hidden logsig PEs trained by Levenberg-Marquardt back-propagation.
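A minimal sketch of the ratio-of-smoothed-histograms estimate of f (Equation 4-3) is given below; a Gaussian kernel plays the role of κ, and the kernel size and sample data are illustrative assumptions rather than the dissertation's actual values:

```python
import numpy as np

def gaussian_kernel(u, sigma):
    return np.exp(-0.5 * (u / sigma) ** 2)

def nonlinearity(x_test, x_training, x_spike_training, sigma=0.02):
    """Estimate p(spike | k.x_test) as the ratio of the kernel-smoothed
    histogram of spike-triggered projections to that of all projections.

    All inputs are projections of kinematic vectors onto the linear filter k.
    """
    num = gaussian_kernel(x_test - np.asarray(x_spike_training), sigma).sum()
    den = gaussian_kernel(x_test - np.asarray(x_training), sigma).sum()
    return num / den if den > 0 else 0.0
```

Because the spike-triggered samples are a subset of all training samples, the estimate stays in [0, 1]; evaluating it on a grid of projections gives the lookup table mentioned above.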

The causal time delay is obtained by maximizing the mutual information as a function of

time lag for each neuron from 10000 continuous samples of the kinematic variables [Wang et al.,

2007b], as we described in Chapter 3. Here we further assume that the firing rates of all the









neuron channels are conditionally independent when implementing the whole Monte Carlo sequential

estimation (SE) algorithm with the encoding and decoding process on BMI.

First, the neuron activity data and kinematics are preprocessed. The only information we

stored on the neural activities is the spiking time. In our preprocessing, we check every time

interval with a spiking time, and assign 1 if there is a spike; otherwise, we assign 0. The interval

should be small enough so that only a few intervals have more than one spike. In this case, we

will still assign 1. The multi-channel spike trains are generated as our point process observations.

We identify the kinematic variable as the state we are interested in reconstructing, or the one that

carries the most information as determined by the information theoretical tuning depth. This

variable could be a kinematic vector during a window, which contains both spatial and temporal

information. It could also be an instantaneous kinematic variable resulting from a spike with

some time delays specific to the motor cortex. The velocity is derived as the difference between

the current and previous recorded positions, and the acceleration is derived the same way from the

velocity. All the kinematics are interpolated to be synchronized with neural spike trains.

Secondly, the kinematics dynamic system model Fk, as stated in Equation 2-6 in Chapter

2, and the tuning function between the neural spike train and the primate's kinematics are

estimated from the existing (training) data. The system model is used to linearly predict the next

kinematic value from the current one as xk = Fk xk-1 + rk. Since the kinematics are continuous

values, Fk can be estimated easily by the least squares solution. The tuning function


λt = f(k · xt) is designed as a linear-nonlinear-Poisson model for each neuron to describe the

conditional firing rate as a function that encodes the kinematics that we are interested in

reconstructing. The details of linear parameter k and nonlinear function f estimation are

already discussed in Chapter 3.
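For the linear prediction xk = Fk xk-1 + rk, the regularized least squares fit of Fk can be sketched as follows; the array shapes, function name, and regularization value are illustrative assumptions:

```python
import numpy as np

def estimate_dynamics(X, a=1e-7):
    """Fit x_k = F x_{k-1} by regularized least squares over training kinematics.

    X : (T, d) array of kinematic vectors; returns the (d, d) transition matrix.
    """
    X_prev, X_next = X[:-1], X[1:]
    # Normal equations F' = (E[x x'] + aI)^-1 E[x_{k-1} x_k'],
    # with expectations estimated by sample averages
    R = X_prev.T @ X_prev / len(X_prev) + a * np.eye(X.shape[1])
    P = X_prev.T @ X_next / len(X_prev)
    return np.linalg.solve(R, P).T
```

The small diagonal term a·I conditions the inverse, mirroring the regularization used for the tuning-function filter k.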









Provided the pre-knowledge of the system model and tuning function, we can implement

the Monte Carlo sequential estimation adaptive filtering algorithm for point process. By

generating a sequential set of samples, the posterior density p(ΔNk^j | xk) is recursively estimated

given the spike train of neuron j. At each time iteration k, the joint density p(ΔNk | xk) is

approximated by the product of all the marginals p(ΔNk^j | xk), which assumes conditional

independence between neurons. The state is determined by the maximum a posteriori or the

expectation by collapsing the Gaussian kernel on the set of samples. The following steps

represent the entire process.

Step 1: Preprocess and analysis.
1. Generate spike trains from stored spike times.
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector x to be reconstructed.

Step 2: Model estimation (encoding).
1. Estimate the kinematic dynamics of the system model
   Fk = (E[xk-1 xk-1'] + aI)^-1 E[xk-1 xk']
2. For each neuron j, estimate the tuning function
   1) Linear model kj = (E[x x'] + aI)^-1 E[x | spikej]
   2) Nonlinear function fj(kj · x) = p(spikej, kj · x) / p(kj · x)
   3) Implement the inhomogeneous Poisson generator

Step 3: Monte Carlo sequential estimation of the kinematics (decoding).
For each time k, a set of samples for the state xk is generated, i = 1:N
1. Predict new state samples xk^i = Fk xk-1^i + rk, i = 1:N
2. For each neuron j,
   1) Estimate the conditional firing rate λk^ij = fj(kj · xk^i), i = 1:N
   2) Update the weights wk^ij ∝ p(ΔNk^j | λk^ij), i = 1:N
3. Derive the weights for the joint posterior density Wk^i = Πj wk^ij, i = 1:N
4. Normalize the weights: Wk^i ← Wk^i / Σi Wk^i, i = 1:N
5. Draw the joint posterior density p(xk | ΔNk) ≈ Σi Wk^i κ(xk − xk^i)
6. Estimate the state x̂k from the joint posterior density by MAP or expectation.
7. Resample xk^i according to the weights Wk^i.
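Step 3 above can be sketched as one iteration of a particle-filter loop; the per-bin Bernoulli approximation of the point-process likelihood, the parameter values, and the helper names are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_step(particles, F, noise_std, spikes, tuning_fns, dt=0.01):
    """One iteration of Monte Carlo sequential estimation for point processes.

    particles : (N, d) state samples; F : (d, d) kinematic dynamics model;
    spikes : 0/1 observation for each neuron in this bin;
    tuning_fns : per-neuron conditional rate functions lambda_j(x) (spikes/s).
    """
    N = particles.shape[0]
    # 1. Predict new state samples x_k^i = F x_{k-1}^i + r_k
    particles = particles @ F.T + rng.normal(0.0, noise_std, particles.shape)
    # 2. Weight each sample by the per-bin Bernoulli approximation of the
    #    point-process likelihood, with neurons conditionally independent
    log_w = np.zeros(N)
    for spike, f in zip(spikes, tuning_fns):
        lam = np.clip(f(particles) * dt, 1e-12, 1.0 - 1e-6)
        log_w += spike * np.log(lam) + (1 - spike) * np.log(1.0 - lam)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                      # 4. normalize the joint weights
    estimate = w @ particles          # 6. expectation ("collapse") of the posterior
    idx = rng.choice(N, size=N, p=w)  # 7. resample according to the weights
    return particles[idx], estimate
```

With more neurons the weight update simply accumulates more log-likelihood terms, and resampling keeps the particle set concentrated where the joint posterior has mass.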

Monte Carlo SE Decoding Results in Spike Domain

In this section, we show the BMI decoding results directly in the spike domain by

implementing the Monte Carlo sequential estimation framework.

We first preprocessed the 185 channels of the neuron spiking time as a 0, 1 point process.

For each neuron in the ensemble, an optimum time interval of 10 ms was selected to construct

the point process observation sequence. With this interval, 94.1% of the intervals with spikes had

only a single spike. For each time interval and in each channel, 1 was assigned when there were

one or more spikes; otherwise 0 was assigned. 185 multi-channel spike trains, each 1750 seconds

long, were generated. The recorded 2-D position vector p is interpolated to be synchronized with

the spike trains. The velocity v is derived as the difference between the current and previous

positions, and the acceleration a is derived the same way from the velocity.

Here, the state vector is chosen as the instantaneous kinematics vector

x = [px vx ax py vy ay]' to be reconstructed directly from the spike trains, rather than

choosing only the velocity during a window when a spike appears. Therefore, the kinematics

vector contains more information about positions, velocities and accelerations. As we discussed

in the tuning analysis section, the information theoretical tuning depths computed from each

kinematics can be different, indicating that there are neurons tuned specifically to a particular









kind of kinematics. Using only one kinematic variable might leave out important information

between the neural spikes and other kinematics.

After data preprocessing, the kinematics model Fk can be estimated using the least squares

solution as shown in Equation 2-6. Notice that carefully choosing the parameters in the noise

estimation (the noise distribution p(r) in Monte Carlo SE) could affect the algorithm

performance. However, since we have no access to the desired kinematics in the test data set, the

parameters of both algorithms were estimated from the training data sets. In the Monte Carlo SE

model, the noise distribution p(r) is approximated by the histogram of rk = xk − Fk xk-1. The

resolution parameter was experimentally set to 100 to approximate the noise distribution. The

regularization factor a in the tuning function was experimentally set at 10-7 for this analysis.

The remaining parameters in Monte Carlo SE include the kernel size σ, selected at 0.02, and the

number of particles Nx, experimentally set at 1000, for a reasonable compromise between

computational time and estimation performance. This kernel size should be chosen carefully to

not lose the characteristics of the tuning curve, while still minimizing ripples in the estimated

density.

The Monte Carlo SE algorithm produces stochastic outputs because of the Poisson spike

generation model. It also introduces variations between realizations even with fixed parameters

due to the estimation of the posterior distribution with the particles.

Table 4-1 shows reconstruction results on a 1000-sample test segment (time index from

25401 to 26400) of neural data. Correlation Coefficients (CC) and Normalized Mean Square

Error (MSE normalized by the power of the desired signal) between the desired signal and the

estimations are evaluated for the Monte Carlo SE using 20 realizations. We show the mean and









the standard deviation among realizations, together with the best and the worst performance

obtained by single realization.
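The two figures of merit used here, the correlation coefficient and the MSE normalized by the power of the desired signal, can be computed as sketched below, including the 800-sample window with 50% overlap used later for the statistical comparison; the function names are illustrative:

```python
import numpy as np

def cc_nmse(desired, estimated):
    """Correlation coefficient, and MSE normalized by the desired signal's power."""
    cc = np.corrcoef(desired, estimated)[0, 1]
    nmse = np.mean((desired - estimated) ** 2) / np.mean(desired ** 2)
    return cc, nmse

def windowed_metrics(desired, estimated, win=800, overlap=0.5):
    """Evaluate CC and NMSE over sliding windows with the given overlap."""
    step = int(win * (1 - overlap))
    rows = [cc_nmse(desired[s:s + win], estimated[s:s + win])
            for s in range(0, len(desired) - win + 1, step)]
    return np.array(rows)  # one (CC, NMSE) row per window
```

Note that CC is insensitive to a constant bias or scaling of the reconstruction, which is why NMSE is reported alongside it.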

Our approach resulted in reasonable reconstructions of the position and the velocity. The

position shows the best correlation coefficient with the true trajectory. This result may be due to

the fact that the velocity and the acceleration were derived as differential variables, where the

noise in the estimation might be magnified. The Monte Carlo SE obtains the tuning function

nonlinearity for each neuron from the training data and estimates the kinematics without any

restriction on the posterior density. The average correlation for the position along x is

0.8058 ± 0.0111 and along y is 0.8396 ± 0.0124. The average correlation for velocity along x is

0.7945 ± 0.0104 and along y is 0.7381 ± 0.0057. We notice that although Monte Carlo SE

introduces differences on the reconstruction among realizations due to stochasticity, the variance

of the results is pretty small.

Figure 4-2 zooms in the first 100 samples of the reconstructed kinematics to show better

the modeling accuracy. The left and right column plots display the reconstructed kinematics for

x-axis and y-axis. The 3 rows of plots illustrate from top to bottom the reconstructed position, the

velocity and the acceleration. In each plot, the red dash line is the desired signal. The blue line is

the reconstructed kinematics by one trial of Monte Carlo SE. The gray area in each plot

represents the posterior density estimated by the algorithm over time where the darker areas

represent a higher value. As the value of the posterior density decreases to 0, the color of the dots

will fade to white. Figure 4-2 shows the Monte Carlo SE effectiveness to generate samples

whose density follows the trajectory. The desired signal falls almost always within the high

probability range of the posterior density, which demonstrates the good tracking ability of Monte

Carlo SE.









Since the desired signal in the test set data is formally unknown, it is not reasonable to just

pick the best realization to present the reconstruction results. Here, we choose the averaged

performance among realizations as the reconstruction results by Monte Carlo SE.

Figure 4-3 shows the averaged performance by Monte Carlo SE to reconstruct kinematics

from all 185 neuron spike trains for 1000 test samples. The left and right column plots display

the reconstructed kinematics for x-axis and y-axis. The 3 rows of plots illustrate from top to

bottom the reconstructed position, the velocity and the acceleration. In each subplot, the red line

indicates the desired signal, and the blue line indicates the expectation estimation. The

correlation coefficients between the desired signal and the estimations are shown in Table 4-2.

We further compared the statistical performance of both algorithms on 8000 test data

samples (80 seconds) of neural data. The performance averaged among the decoding results from

20 Monte Carlo trials is chosen as the reconstruction result by Monte Carlo SE. CC and NMSE

were both evaluated with an 800 sample-long window with 50% overlap. The reconstruction

performance is shown in Table 4-3.

As for the figure of merit for reconstruction, the correlation coefficient has been the

preferred metric to compare movement reconstruction between different experimental data sets

in BMIs [Wessberg et al. 2000]. However, it may not be sufficient to evaluate the accuracy of

a BMI algorithm, since a bias in position means that a different point in the external space will be

targeted, so the rating criterion should take this bias into consideration to properly compare

reconstruction models. Notice also that the correlation coefficient obtained from the acceleration

is pretty low. However, if we visually check the reconstruction results in Figure 4-3, the

algorithm actually follows the trend of the desired signal closely. The problem with the NMSE

for BMIs is that the results do not "look as good", with errors sometimes bigger than the power









of the trajectory. This can be observed in Figure 4-3, where the reconstructed position seems to

have a different scale from the desired trajectory. Therefore, NMSE is also chosen as another

criterion to evaluate the tracking accuracy of the animal's true movement trajectory.

Parameter Study for Monte Carlo SE Decoding in Spike Domain

Although the results are interesting, Monte Carlo SE for spike modeling needs to be further

developed. These models are substantially more complex than the ones for random processes, and many

parameters are assumed and need to be estimated with significant design expertise. There are four

parameters in Monte Carlo SE for point processes that need to be tuned. Three of them arise in the

encoding process (training stage): the regularization factor a in the kinematics correlation matrix

inverse (default 10^-7), the kernel size σ for nonlinearity smoothing (default 0.02), and the resolution

parameter in the approximation of the noise distribution p(r) in the state dynamic model (default 100).

The fourth parameter occurs in the decoding process and relates to the number of samples Nx of

particles xk in the posterior density estimation (default 1000). Therefore we will evaluate the

encoding/decoding performance as a function of these parameters. For each parameter, 5

different values are tried with all the other parameters set at the default values.

Regularization factor a. It is used to calculate the inverse of the correlation matrix of the
kinematics, (E[x x'] + aI)^-1. The parameter a is supposed to be a small positive number in

order to properly condition the inverse of the correlation matrix of the kinematics, when the

minimal eigenvalue is close to 0. However, it should be insignificant compared to the maximal

eigenvalue of the correlation matrix, otherwise it would disturb the eigenvalue structure. Notice

that one way to experimentally set the proper a is to check how a affects the linear model error

between the linear output and the desired signal. Here we set a = [0, 10^-7, 10^-5, 10^-3, 10^-1]. The error

between the linear model output and the desired signal in terms of different a is shown in









Figure 4-4. As before, the left and right column plots display the reconstructed kinematics for x-

axis and y-axis. The 3 rows of plots illustrate from top to bottom the error for position, the

velocity and the acceleration. We can see that when a is smaller than 10^-5, there is almost no

significant difference between the errors. However, since we only have access to the training data, a very

small value (10^-7) will be safer for the test data.

The resolution parameter for p(r). It is used in approximating the noise distribution of the state

dynamic model, rk = xk − Fk xk-1. The density is the number of samples used to approximate the cdf of the

noise distribution during the training. The greater the density, the better the cdf approximates

the true one, at the cost of more computation. Here we set density = [20, 50, 100, 200, 500].

Figure 4-5 shows the cdf of the noise distribution obtained from training set using different

density values. We can see that when the density is larger than 100, the cdf lines overlap.

Therefore 100 is a proper choice to approximate the cdf of the noise distribution in our

experimental data.

Kernel size σ. It is used to smooth the nonlinearity in tuning estimation. Here we only

study the kernel size for the important neurons, which contribute most to shape the posterior

density of the kinematics. If the kernel size is too small, there will be ripples on the conditional

pdf, which brings a large variance in nonlinearity estimation. If the kernel size is too big, it will

smooth out the difference between the joint pdf and marginal pdf, which results in the under-

estimation of the conditional pdf. Here we set σ = [0.005, 0.01, 0.02, 0.05, 0.1]. Figure 4-6 shows

the nonlinearity of neuron 72 (one of the important tuning neurons) smoothed by different kernel

sizes. We can see that when σ is 0.005, there are a few ripples on the nonlinear tuning curve.

Even when σ is 0.01, there are still ripples at both extreme ends due to insufficient samples.

When σ is too big (0.05 and 0.1), the tuning curve is underestimated. We checked σ for all









neurons, focusing especially on the important tuning neurons. A value of 0.02 is an empirical middle ground to

smooth the nonlinearity in tuning.

The sample number Nx. The number of particles xk in the posterior density

estimation is the only free parameter during the decoding process. This parameter describes the

accuracy of the posterior density estimation at each time index. It also brings the main drawback

of the approach, the high computational complexity, because each of the samples will be

evaluated to construct the shape of the posterior density. Here we set the sample number Nx =

[200, 500, 1000, 1500, 2000]. Figure 4-7 shows the averaged decoding results through 20 Monte

Carlo trials of the kinematics reconstruction with different Nx. The left and right column plots

display the reconstructed kinematics for x-axis and y-axis. The 3 rows of plots illustrate from top

to bottom the reconstructed performance of the position, the velocity and the acceleration. In

each plot, the x-axis shows the value of Nx. The blue solid line is the CC between the reconstruction

and desired signal. The green dash line is NMSE between the reconstruction and desired signal.

We can see that the CCs do not change noticeably for any of the kinematics even with much higher Nx, but

the NMSE clearly shows a decreasing trend as Nx grows. Although the performance

converges for very large values of Nx, this would also bring a large computational burden to

decoding. To compromise between accuracy and computational complexity, we choose 1000

samples, where the decoding of most of the kinematic variables starts to converge.

Synthesis Averaging by Monte Carlo SE Decoding in Spike Domain

The Monte Carlo sequential estimation for point processes contains two sources of

stochasticity, the generation of the samples to reconstruct the posterior density and the very

nature of the single neuron firings that is modeled as a Poisson point process. While the former

was dealt with by the Monte Carlo method (averaging several realizations), the latter is still present









in our results due to the coarse spatial sampling of neural activity produced by the limited

number of electrodes. This coarse sampling has two basic consequences. First, the multi-

electrode array collects activity from only some of these neural assemblies, which means that the

Monte Carlo sequential estimation model output will have an error produced by not observing all

the relevant neural data. This problem will always be present due to the huge difference in the

number of motor cortex neurons and electrodes. Second, even when a given neural assembly is

probed by one or a few neurons, it is still not possible to achieve accurate modeling due to the

stochasticity embedded in the time structure of the spike trains. To remove it, one would have to

access the intensity function of neural assemblies that are transiently created in motor cortex for

movement planning and control, which are deterministic quantities.

This means that every neuron belonging to the same neural assembly will display slightly

different spike timing, although they share the same intensity function. Since each probed neuron

drives an observation model in the BMI, there will be a stochastic term in the output of the BMI

(kinematics estimation) that can only be removed by averaging over the neural assembly

populations. However, we can attempt to decrease this variance by estimating the intensity

function from the probed neuron and from it generate several synthetic spike trains, use them in

the observation model and average the corresponding estimated kinematics. Since this averaging

is done in the movement domain (and if the process would not incur a bias in the estimation of

the intensity function) the time resolution would be preserved, while the variance would be

decreased. We call this procedure synthetic averaging and it attempts to mimic the population

effect in the cortical assemblies. This averaging is rather different from the time average that is

performed in binning, which loses time resolution in the reconstructed kinematics.









The synthetic spike trains are generated by an inhomogeneous Poisson process with a

mean value given by the estimated intensity function obtained by kernel smoothing. This is

repeated for each neuron in the array. During testing these synthetic spike trains play the same

role as the true spike trains to predict the kinematics on-line. Of course this will increase the

computation time proportionally to the number of synthetic spike trains created. In a sense we

are trying to use computer power to offset the limitations of probing relatively few neurons in the

cortex. Since the errors in prediction have a bias and a variance which are not quantified, it is

unclear at this point how much better performance will become, but this will be addressed in the

validation.
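A sketch of this synthetic spike train generation, kernel-smoothing a recorded 0/1 train into an intensity estimate and then drawing surrogate trains from a per-bin Bernoulli (discretized inhomogeneous Poisson) generator, might look as follows; the kernel width in bins and the toy train are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_intensity(spike_train, sigma_bins=17):
    """Kernel-smooth a recorded 0/1 spike train into a per-bin firing probability."""
    half = 4 * sigma_bins
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()
    return np.clip(np.convolve(spike_train, kernel, mode="same"), 0.0, 1.0)

def synthetic_trains(spike_train, n_trains=20):
    """Draw surrogate spike trains sharing the estimated intensity function."""
    p = estimate_intensity(spike_train)
    return (rng.random((n_trains, len(p))) < p).astype(int)
```

Each surrogate train has different spike timing but the same underlying intensity, so averaging the kinematics decoded from the surrogates reduces the variance contributed by the spike-generation stochasticity.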

As we analyzed in the previous section, in order to deal with the intrinsic stochasticity due

to the randomness of the spike trains, we proposed the synthetic averaging idea to mimic the

neuron population effect. Instead of decoding only from current spike trains, we use a Poisson

generator to obtain 20 sets of spike trains from each neuron as synthetic plausible observations to

represent the neuron ensemble firing with the same intensity function. This firing intensity

function is estimated by kernel smoothing from each recorded spike train. The kernel size is

experimentally set as 0.17. In order to preserve the timing resolution the averaging is performed

across the estimated kinematics of each group (including the output of the true spike train). Table

4-4 shows the comparison results of the performance by Monte Carlo SE averaged among 20

realizations on the recorded real spike train and the "deterministic" averaged performance over

Monte Carlo and synthetic data (20 sets re-generated spike trains, 20 Monte Carlo trials for each

set) in the same segment of test data (time index 215401 to 216400).

Both approaches as well as the deterministic performance resulted in reconstruction with

similar correlation coefficients. However, the average over synthetic data shows smoother









kinematics reconstruction with reduced NMSE compared to the averaged performance through

20 Monte Carlo trials on the original spike trains. The NMSE is reduced by 26% for position along x, 18% for

position along y, and on average 15% for all 6 kinematic variables. Therefore we can conclude

that the reconstruction accuracy measured by NMSE has a large component due to the variance

intrinsic in the spike firing, but does not affect the general trend of the reconstructed signal as

measured by the CC.

We further compared the statistical performance of both algorithms on 8000 test data

samples of neural data. The performance averaged among the decoding results from 20 sets of re-generated spike trains is chosen as the reconstruction result of the Monte Carlo SE. We ran the

decoding process for 20 Monte Carlo trials on each set of synthetic spike trains. CC and NMSE

were both evaluated with an 800 sample-long window with 50% overlap. For each segment of

data, a paired Student's t-test was performed to see if the synthetic averaging (SA) results are statistically different from the averaged performance using the recorded neuron spike trains alone (MCSE). The test is performed against the alternative specified by the left-tailed test CC_MCSE < CC_SA for each kinematic variable. Comparing the NMSE of both approaches, the test is performed against the alternative specified by the right-tailed test NMSE_MCSE > NMSE_SA for each kinematic variable.

All the tests are performed on the null hypothesis at the α = 0.05 significance level. Under the null hypothesis, the probability of observing a value of the test statistic as extreme as or more extreme than the one observed, as indicated by the p-value, is shown in Table 4-5.
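The windowed evaluation and one-sided paired tests can be sketched with SciPy as follows (a minimal illustration; the function names and synthetic inputs are hypothetical):

```python
import numpy as np
from scipy.stats import ttest_rel

def windowed_metrics(desired, estimate, win=800, overlap=0.5):
    """CC and NMSE between desired and estimated kinematics, computed over
    sliding windows (800 samples long with 50% overlap by default)."""
    step = int(win * (1 - overlap))
    ccs, nmses = [], []
    for start in range(0, len(desired) - win + 1, step):
        d, e = desired[start:start + win], estimate[start:start + win]
        ccs.append(np.corrcoef(d, e)[0, 1])
        nmses.append(np.mean((d - e) ** 2) / np.var(d))
    return np.array(ccs), np.array(nmses)

# One-sided paired t-tests across windows, per kinematic variable:
#   H1 for CC:   CC_MCSE  < CC_SA    -> alternative='less'
#   H1 for NMSE: NMSE_MCSE > NMSE_SA -> alternative='greater'
def compare_decoders(cc_mcse, cc_sa, nmse_mcse, nmse_sa):
    p_cc = ttest_rel(cc_mcse, cc_sa, alternative='less').pvalue
    p_nmse = ttest_rel(nmse_mcse, nmse_sa, alternative='greater').pvalue
    return p_cc, p_nmse
```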

Except for the position x and the velocity y in this first case, we could not conclude that the CC obtained by synthetic averaging is significantly larger than that of the Monte Carlo SE (p < 0.05), as statistically verified using the t-test. In terms of NMSE, however, the t-test verifies that the synthetic averaging reconstruction is statistically better than the Monte Carlo SE alone for most kinematic variables.

This result demonstrates that using the simulated neuron population attenuates the

variability intrinsic in the coarse sampling of a given neural population, effectively trading

computation for lack of more neural channels belonging to the same neural population. However,

this procedure only reduces the kinematics estimation error that is due to the variance of the

recorded spike trains. It cannot do anything against the lack of information produced by the coarse sampling of other neural populations involved in the movement but not sampled at all. On the other hand, the procedure creates a modeling bias, because the intensity function is estimated from a single neuron, and this bias is very difficult to quantify. Since the results improve as measured by NMSE, overall the synthetic averaging method gains more than it loses. When compared with the averaging done in time by binning, the averaging in the kinematics domain bypasses the lack-of-resolution problem and still smooths the reconstruction.

Decoding Results Comparison Analysis

Several signal-processing approaches have been applied to predict movements from neural

activities. Many decoding methodologies use binned spike trains to predict movement based on

linear or nonlinear optimal filters [Wessberg et al., 2000; Sanchez et al., 2002b; Kim et al.,

2003]. These methods avoid the need for explicit knowledge of the neurological dynamic

encoding properties, and standard linear or nonlinear regression is used to fit the relationship

directly into the decoding operation. Yet another methodology can be derived probabilistically

using a state model within a Bayesian formulation [Schwartz et al., 2001; Wu et al., 2006;

Brockwell et al., 2004], as we did in our Monte Carlo SE for point processes. The difference is that all the previous algorithms are coarse approaches that do not exploit spike timing resolution, due to binning, and may exclude rich neural dynamics in the modeling. The Monte Carlo SE for point









process decodes the movement in the spike domain. It is important to compare our algorithm to other

Bayesian approaches that have been applied to BMI in terms of their different assumptions and

decoding performance.

Decoding by Kalman

The Kalman filter has been applied to BMIs [Wu et al., 2006] to reconstruct the kinematics

as the state from continuous representation of neural activities (i.e., using binned data). When

seen as a Bayesian approach, the 2 basic assumptions of the Kalman filter are linearity and a Gaussian distributed posterior density. In other words, both the kinematic dynamic model and the tuning function are assumed to be strictly linear, and the posterior density of the kinematic state given the current neural firing rates is Gaussian distributed at each time index. In this way, the posterior density can be represented in closed form with only 2 parameters, the mean and covariance of the pdf. To apply the Kalman filter to our BMI data, the state dynamics remain the same as

x_k = F_k x_{k-1} + η_k    (4-4)

where F_k establishes the dependence on the previous state and η_k is zero-mean Gaussian distributed noise with covariance Q_k. F_k is estimated from the training data by the least squares solution. Q_k is estimated as the variance of the error between the linear model output and the

desired signal. The tuning function is linearly defined as

λ_t = H x_{t+lag} + n_k    (4-5)

where λ_t is the firing rate obtained by 100 ms window binning. x_t is the instantaneous kinematics vector defined as [p_x v_x a_x p_y v_y a_y 1]^T with 2-dimensional information of position, velocity, acceleration and a bias term. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation effects of signals through the motor









and peripheral nervous systems. Here it is experimentally set as 200 ms [Wu et al., 2006; Wang et al., 2007b]. n_k is zero-mean Gaussian distributed noise with covariance R_k. The weight estimation of the linear filter H is given from the training data by

H = (E[x_{t+lag} x_{t+lag}^T])^{-1} E[x_{t+lag} λ_t]    (4-6)

Equation 4-6 represents the least squares solution for the linear tuning function. The kinematics vector is then derived as the state from the observed firing rates in testing by Equations 4-7a-e.

x_{k|k-1} = F_k x_{k-1|k-1}    (4-7a)

P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k    (4-7b)

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}    (4-7c)

P_{k|k} = (I - K_k H_k) P_{k|k-1}    (4-7d)

x_{k|k} = x_{k|k-1} + K_k (λ_k - H_k x_{k|k-1})    (4-7e)
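Equations 4-7a-e amount to a short recursion. A minimal NumPy sketch (the function name and observation layout are hypothetical; F, Q, H and R are assumed to have been fitted from training data as described above):

```python
import numpy as np

def kalman_decode(rates, F, Q, H, R, x0, P0):
    """Kalman filter decoding of the kinematic state from binned firing
    rates: predict with the linear state model (4-7a,b), form the gain
    (4-7c), and correct covariance and mean (4-7d,e).

    rates : (T, n_neurons) binned firing rate observations lambda_k
    """
    x, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    estimates = []
    for lam in rates:
        x = F @ x                                      # 4-7a: predict mean
        P = F @ P @ F.T + Q                            # 4-7b: predict covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # 4-7c: Kalman gain
        P = (I - K @ H) @ P                            # 4-7d: correct covariance
        x = x + K @ (lam - H @ x)                      # 4-7e: correct mean
        estimates.append(x.copy())
    return np.array(estimates)
```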

Decoding by Adaptive Point Process

Adaptive filtering of point processes provides an analytical solution to the state estimation

in the spike domain. Therefore, it requires a parametric model for the neuron tuning in closed

form. Many different functional forms of tuning have been proposed, consisting mostly of linear

projections of the neural modulation on 2 or 3 dimensions of kinematic vectors and bias. Moran

and Schwartz [1999] also introduced a linear relationship from motor cortical spiking rate to

speed and direction. Brockwell et al. [2003] assumed an exponential tuning function for their

motor cortical data. Here we have tried both tuning functions for our BMI data.

Exponential tuning

The exponential tuning function is estimated from 10000 samples of the training data as

λ_t = exp(H x_{t+lag})    (4-8)









spike_t = Poisson(λ_t)    (4-9)


where λ_t is the firing probability for each neuron, obtained by smoothing the spike train with a Gaussian kernel. The kernel size is empirically set to be 0.17 in the experiment [Wang et al., 2007c]. x_t is the instantaneous kinematics vector defined as [p_x v_x a_x p_y v_y a_y 1]^T with 2-dimensional information of position, velocity, acceleration, and a bias term. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation effects of signals through the motor and peripheral nervous systems. Here it is experimentally set as 200 ms as well [Wu et al., 2006; Wang et al., 2007c]. The weight estimation of the linear filter H is given from the training data by

H = (E[x_{t+lag} x_{t+lag}^T])^{-1} E[x_{t+lag} log(λ_t)]    (4-10)
Equation 4-10 represents the least squares solution for the linear adaptive filter in log-likelihood form. During operation, some firing rates are likely to be close to 0, which results in extremely negative logarithms. Therefore, we add a small positive number, defined as 10% of the mean training firing rate for each neuron, which keeps the firing rate strictly positive.
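The fit of Equation 4-10 with the positive floor can be sketched as follows (a minimal illustration; the 10% floor follows the text, while the function name and the synthetic example are hypothetical):

```python
import numpy as np

def fit_exp_tuning(X_lag, rates):
    """Least-squares fit of exponential tuning weights (Equation 4-10),
    solving X H^T = log(lambda) after flooring each neuron's rate at 10%
    of its mean training rate so the logarithm stays finite.

    X_lag : (T, d) lagged kinematics vectors (bias term included)
    rates : (T, n_neurons) smoothed firing rates
    returns H : (n_neurons, d)
    """
    floor = 0.1 * rates.mean(axis=0, keepdims=True)  # per-neuron positive floor
    log_lam = np.log(rates + floor)
    H_T, *_ = np.linalg.lstsq(X_lag, log_lam, rcond=None)
    return H_T.T
```

Note that the floor slightly biases the recovered weights at low rates; it trades a small, hard-to-quantify bias for numerical stability, mirroring the discussion above.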

The exponential tuning function in Equation 4-8 defines the first and second derivative terms in Equations 2-7c and 2-7d as

∂ log λ_t / ∂x_{t+lag} = H^T    (4-11)

∂² log λ_t / (∂x_{t+lag} ∂x_{t+lag}^T) = 0    (4-12)

The kinematics vector is then derived as the state from the observation of the multi-channel spike trains for the test samples by Equations 2-7a-d in Chapter 2.
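Because the second derivative in Equation 4-12 vanishes, the point-process recursion becomes particularly simple under exponential tuning. The sketch below assumes Equations 2-7a-d take the standard adaptive point-process filter form; the function name and test signals are hypothetical:

```python
import numpy as np

def pp_filter_exp(spikes, F, Q, H, x0, P0, dt=0.01):
    """Adaptive point-process filter with exponential tuning
    lambda_j = exp(H_j x): by Equations 4-11 and 4-12 the log-intensity
    gradient is H_j^T and its Hessian is zero, so the posterior update
    only needs the observed spikes dN and the predicted intensities.

    spikes : (T, n_neurons) binary spike indicators per time step
    """
    x, P = x0.copy(), P0.copy()
    out = []
    for dN in spikes:
        x = F @ x                       # predict mean
        P = F @ P @ F.T + Q             # predict covariance
        lam = np.exp(H @ x)             # per-neuron conditional intensity
        # information-form covariance update: add sum_j H_j^T (lam_j dt) H_j
        P = np.linalg.inv(np.linalg.inv(P) + H.T @ (H * (lam * dt)[:, None]))
        # mean update driven by the innovations dN_j - lam_j dt
        x = x + P @ (H.T @ (dN - lam * dt))
        out.append(x.copy())
    return np.array(out)
```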









Kalman point process

Notice that when a linear tuning function is selected for the observation model together

with a Gaussian assumption for the posterior density, the end result is actually a Kalman filter in

the spike domain and will be called Kalman filter for point process (PP). Here the linear tuning

function is estimated from 10000 samples of the training data as

λ_t = h x_{t+lag} + B    (4-13)

spike_t = Poisson(λ_t)    (4-14)

where λ_t is the firing probability for each neuron, obtained by smoothing the spike train with a Gaussian kernel. The kernel size is empirically set to be 0.17 in the experiment [Wang et al., 2007c]. x_t is the instantaneous kinematics vector defined as [p_x v_x a_x p_y v_y a_y]^T with 2-dimensional information of position, velocity and acceleration. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation effects of signals through the motor and peripheral nervous systems. Here it is experimentally set as 200 ms [Wu et al., 2006; Wang et al., 2007c]. We extend the kinematics vector as [p_x v_x a_x p_y v_y a_y 1]^T to include the bias B, which can be regarded as part of the weights of the linear filter H. The tuning function is then λ_t = H x_{t+lag}. The weight estimation of the linear filter H is given by

H = (E[x_{t+lag} x_{t+lag}^T])^{-1} E[x_{t+lag} λ_t]    (4-15)

Equation 4-15 represents the least squares solution for the linear adaptive filter, where E[x_{t+lag} x_{t+lag}^T] gives the autocorrelation matrix R of the input kinematics vector considering the causal time delay, and E[x_{t+lag} λ_t] gives the cross-correlation vector P between the input and the









firing probability. The linear tuning function in Equation 4-13 defines the first and second derivative terms in Equations 2-7c and 2-7d in Chapter 2 as

∂ log λ_t / ∂x_{t+lag} = H^T / λ_t    (4-16)

∂² log λ_t / (∂x_{t+lag} ∂x_{t+lag}^T) = −H^T H / λ_t²    (4-17)

The kinematics vector is then derived as the state from the observation of the multi-channel spike trains for the test samples by Equations 2-7a-d in Chapter 2.

Performance Analysis

Our Monte Carlo SE for Point Process is designed to estimate the kinematics state directly

from spike trains. The posterior density is estimated non-parametrically without Gaussian

assumptions, which allows the state model and the observation model to be nonlinear. It is

important to compare the performance of the Monte Carlo SE with the other algorithms on the

same data set to validate all the assumptions. First, to evaluate the performance advantages of a

nonlinear & non-Gaussian model, we compare it with the Kalman PP, which works in the spike domain with a linear tuning function and assumes a Gaussian distributed posterior density.

Secondly, the Monte Carlo SE utilizes a tuning function that is estimated non-parametrically

directly from data. It would be interesting to compare the decoding performances with the

different tuning models, such as the Gaussian tuning curve and the exponential tuning curve.

Thirdly, all the algorithms assume a stationary tuning function between the training and test datasets. Studying the decoding performance separately in training and testing would give us some idea of how the tuning function changes over time. Fourthly, the following question should be asked: how does the performance in the spike domain compare to working on









the conventional spike rates? The above questions will be analyzed in detail in the following

sections.

Nonlinear & non-Gaussian vs. linear & Gaussian

The point process adaptive filtering with linear observation model and Gaussian

assumption (Kalman filter PP) and the proposed Monte Carlo SE framework were both tested

and compared in a BMI experiment for the 2-D control of a computer cursor using 185 motor

cortical neurons [Nicolelis et al., 1997; Wessberg et al., 2000] as before.

After data preprocessing, the kinematics model Fk for both algorithms can be estimated

using the least squares solution. Notice that carefully choosing the parameters in the noise estimation (the covariance Q_k in Kalman PP and the noise distribution p(r) in Monte Carlo SE)

could affect the algorithm performance. However, since we have no access to the desired

kinematics in the test data set, the parameter estimations of both algorithms were obtained from

the training data sets. For the Kalman filter PP, the noise in the kinematics model (Equation 2-6)

is approximated by a Gaussian distribution with covariance Qk. In the Monte Carlo SE model,

the noise distribution p(r) is approximated by the histogram of r_k = x_k − F_k x_{k−1}. The resolution parameter was experimentally set to 100 bins to approximate the noise distribution. The regularization factor α in the tuning function was experimentally set at 10^{-7} for this analysis.

The remaining parameters in Monte Carlo SE include the kernel size σ, selected at 0.02, and the number of particles, experimentally set to 1000 as a reasonable compromise between computational time and estimation performance. This kernel size is chosen carefully so as not to lose the characteristics of the tuning curve, as studied before.
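The histogram approximation of p(r) can be sketched as follows (a minimal illustration with the 100-bin resolution mentioned above; names are hypothetical):

```python
import numpy as np

def fit_state_noise(X, F, bins=100):
    """Approximate the state-noise distribution p(r) by a per-dimension
    histogram of the training residuals r_k = x_k - F x_{k-1}, and return
    a sampler used to propagate particles in the Monte Carlo SE."""
    r = X[1:] - X[:-1] @ F.T
    hists = [np.histogram(r[:, d], bins=bins) for d in range(r.shape[1])]

    def sample(n, seed=None):
        rng = np.random.default_rng(seed)
        out = np.empty((n, len(hists)))
        for d, (counts, edges) in enumerate(hists):
            p = counts / counts.sum()
            idx = rng.choice(len(p), size=n, p=p)
            out[:, d] = rng.uniform(edges[idx], edges[idx + 1])  # jitter in bin
        return out

    return sample
```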

As we have analyzed before, both algorithms produce stochastic outputs because of the

Poisson spike generation model. However, the Kalman filtering PP has an analytical solution










with recursive closed-form equations. We set the initial state x_0 to be the zero vector and the state variance P_0 is estimated from the training data. Once the initial conditions and parameters are set,

the state estimation is determined uniquely by the spike observations. However, the Monte Carlo

SE approach introduces variations between realizations even with fixed parameters due to the

estimation of the posterior distribution with the particles. Since the desired signal in the test set

data is formally unknown, it is not reasonable to just pick the best realization to present the

reconstruction results. Here, we choose the averaged performance among realizations as the

reconstruction results by Monte Carlo SE, and compare with the Kalman filter PP results.

Table 4-6 shows the reconstruction results on a 1000-sample test segment (shown in Figure 4-7) of neural data. The Correlation Coefficient (CC) and Normalized Mean Square Error (NMSE) between the desired signal and the estimations are evaluated for the Kalman filter PP as well as for the Monte Carlo SE using 20 realizations of the posterior. For the second approach we also show the mean and the standard deviation among realizations, together with the best and the worst performance obtained by a single realization.

Both approaches resulted in reasonable reconstructions of the position and the velocity.

The position shows the best correlation coefficient with the true trajectory. This result may be

due to the fact that the velocity and the acceleration were derived as differential variables, where

the noise in the estimation might be magnified. Although the Kalman filter PP assumes a

Gaussian posterior and a simple linear model for both the kinematic dynamic system and the

tuning function, it obtains a reasonable reconstruction of the position and the velocity. For the

position, CC = 0.7422 for the x direction and CC = 0.8264 for the y direction. The velocity shows CC = 0.7416 for x and CC = 0.6813 for y. The Monte Carlo SE obtains the tuning function nonlinearity for each neuron from the training data and estimates the kinematics without any restriction on the posterior density. The average correlation for the position along x is 0.8058 ± 0.0111 and along y is 0.8396 ± 0.0124. The average correlation for velocity along x is 0.7945 ± 0.0104 and along y is 0.7381 ± 0.0057. The Monte Carlo SE is better than the Kalman filter PP in terms of both CC and NMSE.

Figure 4-8B shows the reconstructed kinematics using both algorithms from all 185

neurons for 1000 testing samples. As before, the left and right panels depict respectively the

reconstructed kinematics for x-axis and y-axis. The 3 rows of plots from top to bottom display

respectively the reconstructed position, the velocity and the acceleration. In each subplot, the red dashed line indicates the desired signal, the blue solid line indicates the estimation by Monte Carlo SE, and the green dotted line indicates the estimation by Kalman filtering PP. For clarity, Figure 4-8B also shows the 2D reconstructed position for a segment of the testing samples by the two

methods. The Monte Carlo approach offers the most consistent reconstruction in terms of both

correlation coefficient and normalized mean square error.

The simulation of both models with synthetic data provides important hints on how to

interpret the results with real neural data. The linear tuning model by the Kalman filter PP

provides less accuracy in the nonlinear region of the tuning function, which in turn affects the

decoding performance. Moreover, the Kalman filter PP also assumes the posterior density is

Gaussian, therefore both algorithms provide similar velocity estimation along y when both

assumptions are verified. When the estimations from the two algorithms differ (which often occurs at the peaks of the desired signal), the Monte Carlo SE model usually provides better performance, due either to its better modeling of the neuron's nonlinear tuning or to its ability to track the non-Gaussian posterior density.









We further compared the statistical performance of both algorithms on 8000 test data

samples of neural data. The performance averaged among the decoding results from 20 sets of re-generated spike trains (20 realizations for each set) is chosen as the reconstruction result of the Monte

Carlo SE. CC and NMSE were both evaluated with an 800 sample-long window with 50%

overlap. For each segment of data, a paired Student's t-test was performed to see if the results are statistically different from the Kalman filter PP. The test is performed against the alternative specified by the left-tailed test CC_Kalman < CC_MCSE for each kinematic variable. Comparing the NMSE of both approaches, the test is performed against the alternative specified by the right-tailed test NMSE_Kalman > NMSE_MCSE for each kinematic variable. All the tests are performed on the null hypothesis at the α = 0.05 significance level. Under the null hypothesis, the probability of observing a value of the test statistic as extreme as or more extreme than the one observed, as indicated by the p-value, is shown in Table 4-7.

Except for the x position and the y acceleration in this first case, the CC of the Monte Carlo SE for all other kinematic variables is significantly larger than that of the Kalman filter PP (p < 0.05), as

statistically verified using the t-test. In terms of NMSE, however, the t-test verifies that the

Monte Carlo SE reconstruction is statistically better than the Kalman filter PP for all kinematic

variables.

Exponential vs. linear vs. LNP in encoding

We have shown 2 tuning models in implementing the adaptive filtering on point process.

Comparing the decoding performance of these 2 different encoding (tuning) models with the

Gaussian distributed posterior density could show the importance of choosing an appropriate

tuning model for the decoding methodology.









Both tuning models were implemented as BMI decoders in the spike domain. The point

process generation is the same as described for the Kalman PP in the previous section. After data preprocessing, the parameter estimates of both algorithms were obtained from the training data sets. For the exponential filter PP, the noise in the kinematics model is the same as the one in the Kalman PP. We set the initial state x_0 to be the zero vector and the state variance P_0 is estimated from the training data. Once the initial conditions and parameters are set, the state estimation is determined uniquely by the spike observations.

Table 4-8 shows the statistical reconstruction results on 8000 samples of test neural data.

NMSE between the desired signal and the estimations by exponential PP and Kalman PP are

evaluated with an 8 sec window with 50% overlap, together with the performance by Monte

Carlo SE.

The Kalman filter PP gives better performance in position y but worse performance in position x compared to the exponential PP. For all the other kinematic variables, both encodings give similar performances. We can infer that the proper tuning function to decode the kinematics on-line would lie somewhere between the linear and exponential curves. The comparison with the Monte Carlo SE shows that the instantaneous tuning curves we evaluate directly from the data capture more information than both the linear and exponential curves, providing the best decoding results. However, this is a very time-consuming operation, as described before.

Training vs. testing in different segments: nonstationary observations

As we have mentioned before, all the parameters of the tuning curves were estimated from

the training data and remain the same in the testing segments. The big assumption behind this

methodology is stationary of the tuning properties over time, which may not be true. One way to

test this assumption is to see the performance comparison among the training data and different









testing data. Here time index of the training set is from 113500 ms to 193500 ms. The time index

for testing set 1 is from 213500 ms to 293500 ms, which is right after the training data. The

second testing set is chosen from 1413500 ms to 1493500 ms, which is far from the training data.

For each data set, the statistical reconstruction results are computed on 8000 samples of neural data. Both the CC and NMSE between the desired signal and the estimations by the exponential PP, Kalman PP and Monte Carlo PP are evaluated with an 8 sec window with 50% overlap. Figures 4-9A and 4-9B show the performance trends between the training and the different test sets in terms of CC and

NMSE respectively. The left and right panels depict respectively the reconstructed kinematics

for x-axis and y-axis. The 3 rows of plots from top to bottom display respectively the

reconstructed performances for position, velocity and acceleration. In each subplot, the green bar

indicates the mean and variance of the estimation performance for 3 different data sets by

Kalman filtering PP, the cyan line indicates the statistical estimation performance by Exponential

filtering PP, and the blue line indicates the statistical estimation performance by Monte Carlo SE.

For both criteria, all the algorithms clearly show similar trends of statistical performance. The reconstruction on test data 1 is slightly worse than the reconstruction on the training data. However, on test data 2, which is quite far from the training data, the performance is much worse. This means the stationarity assumption between training and testing is questionable. It might hold in a testing segment right after the training, because the change in the tuning properties is not yet obvious, but it results in poor estimation when the tuning properties change after some time. Therefore, a study of the non-stationary tuning properties and corresponding tracking in the decoding algorithm is necessary.

Spike rates vs. point process

One way to test the decoding difference between continuous variables (spike rates) and

point processes is to compare the performance of the Kalman filter and Kalman PP on the same









segment of test data, because both filters have linear tuning and Gaussian distributed posterior

density assumptions. The difference is that the Kalman filter reconstructs the kinematic state from a continuous representation of the neural activities, the binned firing rate, while the Kalman PP works directly in the spike domain. For the Kalman filter, a 100 msec binning window is used to process the spike times into continuous firing rates for each neuron in the ensemble. For the Kalman PP, the preprocessing to construct the point process observation sequence remains the same as for the Monte Carlo SE.

After data preprocessing, the kinematics model Fk for both algorithms can be estimated

using the least squares solution. Notice that carefully choosing the parameters in the noise

estimation (the covariance Q_k in both the Kalman filter and the Kalman PP) could affect the algorithm

performance. However, since we have no access to the desired kinematics in the test data set, the

parameter estimations of both algorithms were obtained from the training data sets. The noise in

the kinematics model is approximated by a Gaussian distribution with covariance Qk.

Both the Kalman filter and the Kalman PP produce stochastic outputs because of the Poisson spike generation model used to construct the observations. Both have analytical solutions with recursive closed-form equations. We set the initial state x_0 to be the zero vector and the state variance P_0 is estimated from the

training data. Once the initial condition and parameters are set, the state estimation is determined

uniquely by the spike observations.

Table 4-9 shows the statistical reconstruction results on 8000 samples of training and 8000

samples of a test segment of neural data. Since the desired signals of the Kalman filter and the Kalman PP are obtained differently, here only the Correlation Coefficients (CC) between the desired signal and the estimations are evaluated, with an 8 sec window with 50% overlap.









Both approaches resulted in reasonable reconstructions of the position and the velocity.

The position shows the best correlation coefficient with the true trajectory. This result may be

due to the fact that the velocity and the acceleration were derived as differential variables, where

the noise in the estimation might be magnified. It is interesting to first notice that Kalman filter

gets pretty good results on training set while the performance drop much more in testing

comparing to the decrease between training and test. In the Kalman filter for BMI, the firing

rates obtained by inning techniques blur the exact time information of the spike trains. The

inning techniques also serve as averaging which makes the noise terms more Gaussian. This

may make the Kalman filter over-fit the training set, while loose the generality in test because of

a lack of information in spike timing. Comparing the performance difference between training

and testing by Kalman PP, it shows no phenomenon of model over-fitting even with position y

better predicted in testing than in training. This is because Kalman works directly in the spike

train, which involves higher resolution time information of neural; activity. However, working in

spike domain without averaging makes the assumption of the Gaussian distributed posterior

density less satisfied in Kalman PP than Kalman. This is why the Kalman PP doesn't show better

decoding performance, which does not necessarily mean point process brings less information

for decoding. When we compare the Kalman performance to Monte Carlo SE for point process

in Table 4-10, where we have no Gaussian assumption, Monte Carlo SE has better decoding

results in position and velocity as we expected. The smaller CC of reconstructed acceleration in

point process might be due to large peaks of the desired acceleration as we explained before,

which is different from the desired acceleration of Kalman filter.

Monte Carlo SE Decoding in Spike Domain Using a Neural Subset

The performance of BMI hinges on the ability to exploit information in chronically

recorded neuronal activity. Since during the surgical phase there are no precise techniques to









target the modulated cells, the strategy has been to sample as many cells as possible from

multiple cortical areas with known motor associations. In the experiment, we collected activities

of 185 neurons from 5 motor cortical areas and regard them contributing equally to the current

decoding process. Research has shown that different motor cortical areas play different roles in

terms of the movement plan and execution. Moreover, the time-consuming computation on all

the neuron information would bring significant computational burden to implement BMI in low-

power, portable hardware. We can't help making a guess that groups of neurons have different

importance in BMI decoding as suggested in previous work [Sanchez et al., 2003]. In Chapter 3,

we have shown that the information theoretical analysis on the neuron tuning function could be a

criterion to evaluate the information amount between the kinematics and neural spike trains,

therefore it weights the importance among neurons in term of certain task or movement.

Moreover, if the decoding algorithm only calculates the subset of the important neuron

associated with movement behavior, it will improve the efficiency of BMI on large amount of

the brain activity data.

Neural Subset Selection

As we have shown in Chapter 3, the information theoretic tuning depth we proposed as a

metric for evaluating a neuron's instantaneous receptive properties is based on information theory and captures much more of the neuronal response. We define a tuned cell as a cell that conveys more information between the kinematics and its spiking output. The well-established concept of mutual information [Reza 1994] provides a mathematical information theoretic metric for each neuron based on the instantaneous tuning model, which is given by

I(spk_j; k·x_lag) = Σ_{k·x_lag} p(k·x_lag) Σ_{spk_j = 0,1} p(spk_j | k·x_lag) log_2 [ p(spk_j | k·x_lag) / p(spk_j) ]    (4-18)









where j is the neuron index. p(k·x_lag) is the probability density of the linearly filtered kinematics evaluated at the optimum time lag, which can be easily estimated by a Parzen window [Parzen 1962]. p(spk_j) can be calculated simply as the percentage of spike counts over the entire spike train. p(spk_j | k·x_lag) is exactly the nonlinear function f in the LNP model.

The information theoretical tuning depth statistically indicates the information between the

kinematic direction and the neural spike train. By setting a threshold, as shown in Figure 4-10, it can help determine which subset of tuned neurons to include in the model to reduce the computational complexity. For example, the 30 most tuned neurons could be selected as candidates to decode the movements in the BMI model. The distribution of the selected neurons is shown in Figure 4-11. Here the 5 different cortical areas are shown as differently colored bars with the corresponding mutual information estimated by Equation 4-18. The selected 30 neurons are labeled with red stars. We can see that there is 1 neuron in PMd-contra, 21 neurons in M1-contra, 6 neurons in S1-contra, and 2 neurons in SMA-contra. The most tuned neurons are in M1, as we expected.
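The subset selection itself then reduces to ranking the neurons by the mutual information of Equation 4-18 and keeping the top k (a small sketch; names are hypothetical):

```python
import numpy as np

def select_neural_subset(mi_per_neuron, k=30):
    """Return the indices of the k most tuned neurons, ranked by their
    information-theoretic tuning depth (Equation 4-18)."""
    return np.argsort(mi_per_neuron)[::-1][:k]
```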

Neural Subset vs. Full Ensemble

Given the criterion to select the neural subset, we compare the reconstruction performance of different neural subsets, containing the 60, 40, 30, 20 and 10 most important neurons associated with the movement, to the decoding results of the full ensemble of 185 neurons. The

statistical performances evaluated by both CC and NMSE with an 8 sec window with 50%

overlap are shown in Table 4-11.

We also plot the statistical decoding performance (mean and standard derivation) by CC

and NMSE respectively with different neuron subsets in Figure 4-12 A and 4-12 B. The

performance difference evaluated by CC among the neural subsets is not as clear as the one by









NMSE. Decoding performances along x evaluated by CC and NMSE increase and converge as

the number of neurons in the subset increases. The decoding performances along y reach the

maximum (CC) or the minimum (NMSE) when the neuron subset has 30 neurons.

The study of decoding performance by neuron subset shows that it is possible to evaluate

neuron tuning importance associated with the movement task. With only 30 neurons (bolded row

in Table 4-11) out of the full ensemble of 185 (italic row in Table 4-11), we could achieve similar

or even better performance in terms of NMSE. This means that not all the neuron activities in motor

cortex are closely related to the movement task. Some of the neurons' activities might contribute

as noise for a given task, which reduces the decoding performance. At the same time,

computation with only 30 neurons saves 84% of the running time compared to using all 185 neurons.











Figure 4-1. Schematic of relationship between encoding and decoding processes for Monte
Carlo sequential estimation of point processes (blocks: observation model / tuning function for encoding; state model for decoding)


Table 4-1. The kinematics reconstructions by Monte Carlo SE for segment of test data

Criterion  Statistic  Px         Py         Vx         Vy         Ax         Ay
CC         Mean±Std   0.81±0.01  0.83±0.01  0.79±0.01  0.74±0.01  0.45±0.01  0.25±0.01
           Best       0.83       0.84       0.80       0.74       0.47       0.26
           Worst      0.79       0.83       0.78       0.73       0.44       0.25
NMSE       Mean±Std   0.44±0.03  0.98±0.14  0.45±0.02  0.55±0.01  0.82±0.01  1.03±0.01
           Best       0.40       0.74       0.45       0.54       0.81
           Worst      0.43       1.30       0.44       0.54       0.81






















Figure 4-2. The posterior density of the reconstructed kinematics by Monte Carlo SE (posterior density with the desired trajectory overlaid for each kinematic variable versus t)













Figure 4-3. The reconstructed kinematics for 2-D reaching task (desired trajectories overlaid; Px: CC_MCSE=0.81, NMSE_MCSE=0.43; Py: CC_MCSE=0.84, NMSE_MCSE=0.93; Vx: CC_MCSE=0.80, NMSE_MCSE=0.44; Vy: CC_MCSE=0.74, NMSE_MCSE=0.54)



Table 4-2. Averaged performance by Monte Carlo SE of the kinematics reconstructions for
segment of test data

Criterion  Px    Py    Vx    Vy    Ax    Ay
CC         0.81  0.84  0.80  0.74  0.46  0.26
NMSE       0.43  0.93  0.44  0.54  0.82  1.01









Table 4-3. Statistical performance by Monte Carlo SE of the kinematics reconstructions using 2 criteria for
segment of test data

Criterion  Px           Py           Vx           Vy           Ax           Ay
CC         0.762±0.078  0.757±0.128  0.751±0.075  0.734±0.063  0.520±0.055  0.370±0.076
NMSE       0.563±0.186  0.964±0.322  0.515±0.126  0.510±0.126  0.748±0.160  1.017±0.353











Figure 4-4. Linear model error using different α (panels: error versus α for each kinematic variable)


















Figure 4-5. Cdf of noise distribution using different density (density = 20, 50, 100, 200, 500)

Figure 4-6. Nonlinearity of neuron 72 using different c (c = 0.005, 0.01, 0.02, 0.05, 0.1)
















Figure 4-7. Decoding performances by different x_n (CC and NMSE versus x_n from 200 to 2000 for each kinematic variable)




Table 4-4. Results comparing the kinematics reconstructions averaged among Monte Carlo trials
and synthetic averaging

Criterion  Method                                                     Px     Py     Vx     Vy     Ax     Ay
CC         Average among 20 Monte Carlo trials                        0.811  0.837  0.799  0.741  0.456  0.255
           Average among 20 synthetic spikes, 20 Monte Carlo trials   0.843  0.852  0.822  0.737  0.443  0.233
NMSE       Average among 20 Monte Carlo trials                        0.429  0.933  0.439  0.538  0.817  1.025
           Average among 20 synthetic spikes, 20 Monte Carlo trials   0.319  0.768  0.330  0.484  0.808  0.990










Table 4-5. Statistical performance of the kinematics reconstructions by Monte Carlo SE and
synthetic averaging

Criterion  Method                              Px           Py           Vx           Vy           Ax           Ay
CC         Monte Carlo SE                      0.762±0.078  0.757±0.128  0.751±0.075  0.734±0.063  0.520±0.055  0.370±0.076
           Monte Carlo SE (synthetic avg.)     0.777±0.089  0.755±0.154  0.753±0.083  0.750±0.058  0.496±0.073  0.346±0.082
           t-test H1: CC_MCSE<CC_SA (p-value)  1(0.027)     0(0.618)     0(0.365)     1(0.004)     0(0.995)     0(0.999)
NMSE       Monte Carlo SE                      0.563±0.186  0.964±0.322  0.515±0.126  0.510±0.126  0.748±0.160  1.017±0.353
           Monte Carlo SE (synthetic avg.)     0.467±0.171  0.880±0.321  0.445±0.127  0.448±0.137  0.764±0.169  0.951±0.368
           t-test H1: NMSE_MCSE>NMSE_SA (p)    1(0)         1(0)         1(0)         1(0.001)     0(0.954)


Table 4-6. Results comparing the kinematics reconstruction by Kalman PP and Monte Carlo SE
for a segment of data

Criterion  Method            Px    Py    Vx    Vy    Ax    Ay
CC         Kalman filter PP  0.74  0.83  0.74  0.68  0.42  0.18
           Monte Carlo SE    0.81  0.84  0.80  0.74  0.46  0.26
NMSE       Kalman filter PP  0.81  1.51  0.50  0.77  0.95  1.13
           Monte Carlo SE    0.43  0.93  0.44  0.54  0.82  1.01





























Figure 4-8. The reconstructed kinematics for a 2-D reaching task. A) Plot individually. B)
Position reconstruction in 2D





















Figure 4-8. Continued (2-D position: desired, Kalman PP, and Monte Carlo PP trajectories)










Table 4-7. Statistical performance of the kinematics reconstructions by Kalman PP and Monte
Carlo SE (synthetic averaging)

Criterion  Method                                   Px           Py           Vx           Vy           Ax           Ay
CC         Kalman filter PP                         0.763±0.073  0.717±0.133  0.702±0.114  0.694±0.066  0.471±0.065  0.345±0.089
           Monte Carlo SE (synthetic averaging)     0.777±0.089  0.755±0.154  0.753±0.083  0.750±0.058  0.496±0.073  0.346±0.082
           t-test H1: CC_Kalman<CC_MCSE (p-value)   0(0.172)     1(0.028)     1(0.001)     1(0.001)     1(0.034)     0(0.480)
NMSE       Kalman filter PP                         0.897±0.305  1.043±0.245  0.673±0.271  0.686±0.172  0.891±0.187  1.085±0.385
           Monte Carlo SE (synthetic averaging)     0.467±0.171  0.880±0.321  0.445±0.127  0.448±0.137  0.764±0.169  0.951±0.368
           t-test H1: NMSE_Kalman>NMSE_MCSE (p)     1(0)         1(0.019)     1(0)         1(0)         1(0)         1(0)


Table 4-8. Statistical performance of the kinematics reconstructions by different encoding
models

Criterion  Method            Px             Py             Vx             Vy             Ax             Ay
NMSE       Exponential PP    0.6673±0.2024  1.4976±0.6547  0.6690±0.2090  0.6922±0.1180  0.8731±0.1718  1.1178±0.3731
           Kalman filter PP  0.897±0.305    1.043±0.245    0.673±0.271    0.686±0.172    0.891±0.187    1.085±0.385
           MCSE PP           0.563±0.186    0.964±0.322    0.515±0.126    0.510±0.126    0.748±0.160    1.017±0.353















Figure 4-9. The decoding performance by algorithms in PP for different data sets (training, test1, test2; methods: Exponential PP, Kalman PP, MCSE PP). A) CC. B)
NMSE


Figure 4-9. Continued.











Table 4-9. Statistical performance of the kinematics reconstructions by Kalman filter and Kalman
PP

CC         Method            Data      Px           Py           Vx           Vy           Ax           Ay
           Kalman filter     Training  0.874±0.039  0.859±0.061  0.851±0.043  0.809±0.064  0.748±0.057  0.676±0.068
                             Test      0.746±0.070  0.740±0.100  0.738±0.060  0.732±0.064  0.585±0.081  0.483±0.112
           Kalman filter PP  Training  0.794±0.061  0.641±0.182  0.759±0.090  0.696±0.105  0.479±0.087  0.361±0.113
                             Test      0.763±0.073  0.717±0.133  0.702±0.114  0.694±0.066  0.471±0.065  0.345±0.089


Table 4-10. Statistical performance of the kinematics reconstructions by spike rates and by
point process

CC         Method          Px             Py             Vx             Vy             Ax             Ay
           Kalman filter   0.7463±0.0703  0.7397±0.1003  0.7379±0.0601  0.7318±0.0643  0.5853±0.0806  0.4834±0.1123
           Monte Carlo SE  0.7776±0.0886  0.7545±0.1543  0.7530±0.0830  0.7505±0.0583  0.4958±0.0726  0.3459±0.0824


Figure 4-10. Threshold setting for sorted information theoretic tuning depths for 185 neurons (threshold marked at sorted neuron index 30, tuning depth ≈ 0.00158)














Figure 4-11. Selected neuron subset (30 neurons) distribution (cortical areas: PMd, M1, S1, SMA, M1 ipsi; selected neurons marked by stars)










Table 4-11. Statistical performance of the kinematics reconstructions by neuron subset and full
ensemble

CC
Ensemble             Px             Py             Vx             Vy             Ax             Ay
Full ensemble (185)  0.7619±0.0784  0.7574±0.1275  0.7511±0.0749  0.7342±0.0633  0.5199±0.0545  0.3703±0.0764
Neuron subset (60)   0.7554±0.0942  0.7721±0.1105  0.7473±0.0787  0.7279±0.0633  0.5145±0.0510  0.3650±0.0830
Neuron subset (40)   0.7449±0.0954  0.7782±0.1058  0.7373±0.0848  0.7315±0.0646  0.5102±0.0608  0.3644±0.0850
Neuron subset (30)   0.7456±0.1027  0.7730±0.1084  0.7420±0.0823  0.7327±0.0613  0.5084±0.0650  0.3682±0.0857
Neuron subset (20)   0.7213±0.1036  0.7568±0.1238  0.7227±0.0884  0.7354±0.0566  0.4927±0.0681  0.3669±0.0763
Neuron subset (10)   0.7181±0.1141  0.6487±0.1752  0.6824±0.0924  0.6931±0.0774  0.4661±0.0804  0.3515±0.0702

NMSE
Ensemble             Px             Py             Vx             Vy             Ax             Ay
Full ensemble (185)  0.5628±0.1861  0.9643±0.3222  0.5145±0.1259  0.5097±0.1261  0.7481±0.1598  1.0165±0.3526
Neuron subset (60)   0.5330±0.1908  0.8925±0.2256  0.5031±0.1345  0.5098±0.1098  0.7505±0.1634  1.0156±0.3557
Neuron subset (40)   0.5335±0.1818  0.8003±0.1585  0.5173±0.1385  0.5044±0.1039  0.7541±0.1678  1.0106±0.3593
Neuron subset (30)   0.5339±0.2047  0.8022±0.2555  0.4985±0.1440  0.4985±0.0940  0.7536±0.1697  0.9993±0.3562
Neuron subset (20)   0.5828±0.1858  0.7273±0.2674  0.5334±0.1538  0.4915±0.1124  0.7711±0.1680  1.0005±0.3600
Neuron subset (10)   0.5770±0.2167  0.7304±0.3408  0.5697±0.1550  0.5348±0.1502  0.7940±0.1684  0.9595±0.3779



















Figure 4-12. Statistical performance of reconstructed kinematics by different neuron subsets. A)
CC. B) NMSE
















Figure 4-12. Continued









CHAPTER 5
CONCLUSIONS AND FUTURE WORK

Conclusions

Brain-Machine Interfaces (BMIs) are an emerging field inspired by the need to restore motor

function and control in individuals who have lost the ability to control the movement of their

limbs. Researchers seek to design a neuron-motor system that exploits the spatial and temporal

structure of neural activity in the brain to bypass spinal cord lesions and directly control a

prosthetic device by intended movement. In human and animal experiments, neuronal activity

has been collected synchronously from microelectrode arrays implanted into multiple cortical

areas while subjects performed 3-D or 2-D target-tracking tasks. Several signal processing

approaches have been applied to extract the functional relationship between the neural recordings

and the animal's kinematic trajectories. The resulting models can predict movements and control

a prosthetic robot arm or computer to implement them.

Many decoding methodologies, including Wiener filter and neural networks, use binned

spike trains to predict movement based on standard linear or nonlinear regression. Alternative

methodologies, such as Kalman filter or particle filter, were derived using a state model within a

Bayesian formulation. From a sequence of noisy observations of neural activity, the probabilistic

approach analyzes and infers the kinematics as a state variable of the neural dynamical system.

The neural tuning property relates the measurement of the noisy neural activity to the animal's

behaviors, and builds up the observation measurement model. Consequently, a recursive

algorithm based on all available statistical information can be used to construct the posterior

probability density function of each kinematic state given the neuron activity at each time step

from the prior density of that state. The prior density in turn is the posterior density of the

previous time step updated with the discrepancy between an observation model and the neuron









firings. Movements are then recovered probabilistically from the multi-channel neural recordings

by estimating the expectation of the posterior density or by maximum a posterior. The

differences among the above approaches reflect the following challenges in BMI modeling.
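Before turning to those challenges, the recursion just described (prior propagated by the state model, then updated by the spike observation) can be sketched with a grid-based filter. The random-walk state model and exponential tuning curve below are illustrative assumptions, not the models identified in this work.

```python
import numpy as np

# Grid-based sketch of the Bayesian recursion: Chapman-Kolmogorov prediction
# through an assumed random-walk state model, then a point-process update.
grid = np.linspace(-3.0, 3.0, 301)
dx = grid[1] - grid[0]

def firing_rate(x):
    return 0.1 * np.exp(1.5 * x)        # assumed conditional rate per time step

def step(posterior, spike, q=0.05):
    """One recursion: predict via the transition density, update via the spike."""
    # Prediction: integrate the Gaussian transition density against the posterior
    trans = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / q) / np.sqrt(2 * np.pi * q)
    prior = trans @ posterior * dx
    # Update: likelihood of observing `spike` (0 or 1) in this interval
    rate = firing_rate(grid)
    like = rate ** spike * np.exp(-rate)
    post = prior * like
    return post / (post.sum() * dx)     # renormalize to a density
```

Repeated spike observations pull the posterior mass toward states with high firing rate, while silent intervals pull it the other way; the state estimate is then the mean (or mode) of this density.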

Linear or nonlinear? Wiener and Kalman filters are both linear fitting methods that can

be used to reflect the functional relationship between neural firing and movements. A linear

model is intuitive and not computationally complex, so it is simple to calculate. However, the

assumption of linearity is very strict, and although it may be valid for binned data due to the

averaging effect, most neuroscientists do not agree with this approach at the neural level.

Adding to this concern is that neuron behavior exhibits saturation, thresholding, and refractory

attributes, thus reflecting nonlinearity. To improve the performance of these models, neural

networks and particle filters were added to build nonlinear relationships, but this also increases

the computational complexity. On the other hand, a standard method to accurately estimate or

model the neural nonlinearity is still in development, since the ground truth is not fully

understood even by neuroscientists. Evaluation of the model nonlinearity by comparing several

algorithm performances in the BMI reconstruction accuracy is one of the feasible ways to rate

different hypotheses. At issue is whether or not the performance will improve enough to justify

the complicated nonlinear modeling and computation.

Gaussian or non-Gaussian? Gaussianity is one of engineering's most preferred

assumptions to describe the error distribution when we build models for stochastic signals. In the

Bayesian approach, the assumption of Gaussianity is also present in the Kalman filter to describe

the posterior density. However, if we agree on the nonlinearity relation of the neuron behavior

tuning to preferred movement, the Gaussian assumption at all times is questionable

because the pdf is reshaped by the nonlinear tuning. An algorithm that is not bound to this









assumption (i.e., which utilizes the full information in the pdf) is necessary to help us understand

how much performance hit is tied to the Gaussian assumption. Particle filtering is a general

sequential estimation method that works with continuous observation through a nonlinear

observation model without the Gaussian assumptions. However, in terms of a practical

application, we should not over-claim the non-Gaussian assumption for performance evaluation

because the computational complexity of both methods (Particle and Kalman filters) is

drastically different. The proper framework is to realize that the Gaussian assumption is a

simplifying assumption, and then ask how much improvement over the Kalman can the Particle

filter provide. For instance, if the local pdf can be approximated by a Gaussian distribution very

well for a certain segment of experimental data, an algorithm free of the Gaussian assumption

would achieve equivalent performance without showing its advantage, while at the same time

incurring more computational complexity.
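A minimal numerical illustration of this point (purely synthetic numbers, not BMI data): for a skewed posterior, a moment-matched Gaussian keeps the mean but misplaces the mode, whereas a sample (particle) representation retains the full shape.

```python
import numpy as np

# A skewed "posterior" stands in for a non-Gaussian state density. A Gaussian
# approximation matched to its first two moments centers on the mean (2 for a
# Gamma(2, 1)), while the density's mode sits at 1.
rng = np.random.default_rng(1)
samples = rng.gamma(shape=2.0, scale=1.0, size=50000)   # particle representation
mean, std = samples.mean(), samples.std()               # Gaussian approximation

# Mode recovered from the particle representation via a histogram
hist, edges = np.histogram(samples, bins=100)
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
```

When the true posterior is close to symmetric and unimodal, mean and mode coincide and the Gaussian simplification costs nothing; the gap between them is a rough indicator of what the assumption gives up.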

Black box or gray box? Wiener and neural networks are black box models that operate

without physical insight into the important features of the motor nervous system. However, in the

Bayesian approach, the observation model enables us to have more insight into the neural tuning

property which relates the measurement of the noisy neural activity to the animal's behaviors.

Although the Kalman filter is still (and controversially) linear, it would be an excellent entry

point to incorporate the knowledge of neural tuning into modeling. Enhancing the black box

model to the gray box model is expected to increase performance and in turn to test the

knowledge we incorporate into the model. Notice, however, that both the particle filter and the

Kalman filter still assume fixed and known state and observation models. In actuality,

these remain unknown for BMI data.









All of the computational models described above are intended to efficiently and accurately

translate neural activity into the intention of movement. Depending on different animals and

tasks, well-established adaptive signal processing algorithms have achieved reasonable

kinematics predictions (average correlation coefficient of around 0.8 [Sanchez, 2004]). These

algorithms provide an attractive engineering solution for evaluating and characterizing the

temporal aspects of a system. However, a successful realization of BMI cannot be dependent

entirely on improvement of methodologies. We must develop a better understanding of brain

signal properties. Brain signals are believed to be very complicated. They contain a huge amount

of data, they are noisy, non-stationary, and interact with each other in ways not fully understood.

When designing the computational model, the following should be carefully considered.

What is the proper signal scale for BMIs? To fit into the traditional signal processing

algorithm which works with a continuous value, early BMI research frequently employed a

binning process on action potentials to obtain the neural firing rate as a continuous neural signal.

However, single unit activity is completely specified by the spike times. The weakness of the

inning technique, as a coarse approach, is finding the optimal window size. The loss of spike

timing resolution might exclude rich neural dynamics from the model. How to extract effectively

the information hidden in the spike timing brings challenges not only in signal-processing

algorithm development but also in the accurate modeling of the neuron physiologic properties.

Moreover, if the signal-processing techniques enable us to look closer into the neural spike train,

we will have to face another challenge not encountered in BMI for spike rates.
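The binning operation referred to above can be sketched as follows; the function name and the 100 ms default window are illustrative choices, and the window size is exactly the parameter whose selection is criticized in the text.

```python
import numpy as np

def bin_spike_times(spike_times, bin_width=0.1, t_end=None):
    """Convert spike timestamps (seconds) into a binned firing-rate signal.

    The window size trades time resolution against variance, which is the
    weakness discussed above. (Illustrative sketch.)
    """
    t_end = float(t_end) if t_end is not None else float(spike_times.max())
    n_bins = int(np.ceil(t_end / bin_width))
    edges = np.linspace(0.0, n_bins * bin_width, n_bins + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / bin_width            # spikes per second in each window
```

Everything that happens within one window, such as the exact ordering and spacing of spikes, is discarded by this operation.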

Time resolution gap between neural activity and movement. Although spike trains are a

very good indicator of neuronal function, they are also far removed from the time and

macroscopic scales of behavior. Therefore, a central question in modeling brain function in









behavior experiments is how to optimally bridge the time scale between spike events

(milliseconds) and the time scale of behavior (seconds). Most often, the relatively rudimentary

method of time averaging (binning spikes) is used to bridge the gap, but much of the resolution

of the spike representation is wasted. Therefore, to model the hierarchy of scales present in the

nervous system, a model-based methodology must link the firing times to movement in a

principled way. It remains to be seen under what conditions the spike timing is relevant for

motor BMIs because as stated the kinematics exist at a much longer time scale, which may

indicate that the exact timing of spikes is not important.

Non-stationary neuron behavior. Studies show that the response of individual neurons

to the same stimulus changes frequently. Even the cortical areas used in BMI experiments can

vary considerably from day to day. Neuroscientists have used the average of peri-event neuron

spiking patterns across trials/times in order to eliminate noise contamination and observe the

same stationary neuron behaviors. However, this statistical analysis is not feasible for the

reconstruction of trajectory time series in motor BMI. Current signal processing modeling still

assumes that neuron behaviors are stationary between the training and testing data. This

assumption is questionable and affects the performance on the test data.

Association among the neurons. Evidence shows that neuron spikes are synchronized as

groups along time. Some researchers even claimed that in order to understand brain function, the

signals should be recorded from areas all over the brain since they are dynamically correlated as

a network. Imagine the computational complexity when we have about 200 neurons interacting

with each other. Researchers have applied statistics and data mining techniques to evaluate the

synchronization of multi-channel spikes in terms of the accuracy and the efficiency. A better

understanding of neuron recordings, especially the causal correlation between the different









recording areas, would be achieved by dynamically modeling, in the probability domain, the neural

dependence across channels. Therefore, it is very important to include a dependence study

among the neurons into our BMI study. Unfortunately the sequential estimation models for point

processes assume independence among neurons to avoid estimating the joint distribution, so this

is one of their most important shortcomings.

Computational complexity. BMI performance hinges on the ability to exploit information

in chronically recorded neuronal activity. Since there are no precise techniques to target the

modulated cells during the surgical phase, the strategy has been to sample as many cells as

possible from multiple cortical areas with known motor associations. This time-consuming

computational burden would significantly impair the use of BMI in low-power, portable

hardware. Therefore channel selection methodologies should be applied to the neural vector to

estimate the channels that are more relevant for the task.

With all of these issues in mind, we proposed and validated a Monte Carlo sequential

estimation framework to reconstruct the kinematics directly from the neural spike trains. There

are two main steps to apply this idea to neural data from BMI experiments. First, we must

validate our physiologic knowledge of neural tuning properties by analysis and modeling using

statistical signal processing. Second, based on the knowledge we have gained, we must

implement the adaptive signal filtering algorithm to derive the kinematics directly from the

neuron spike trains.

Our intention is to reduce the randomness of the neuron spiking in probabilistic models.

Faced with a tremendous amount of neural recording data, we proposed using the mutual

information between the neuron spike and kinematic direction as a new metric to evaluate how

much information the neuron spike encodes. This well-established concept in information theory









provides a statistical measure to gauge neuron tuning depth. As a unitless measure, the proposed

metric provides a means to compare information in terms of tuning, not only among different

kinematics, positions, velocities and accelerations; but also among neurons in different cortical

areas. The primary motor cortex contained most of the tuned neurons, and therefore is a potential

location to elicit a neuron subset for movement reconstructions.

In addition to its informative value for importance, the tuning function was also

mathematically estimated by a parametric Linear-Nonlinear-Poisson model. The traditional

criterion of estimating tuning depth from windows of data does not seem the most appropriate in

the design of BMIs using sequential estimation algorithms on spike trains. Here we presented

instead an information theoretical tuning analysis of instantaneous neural encoding properties

that relate the instantaneous value of the kinematic vector to neural spiking. The proposed

methodology is still based on the Linear-Nonlinear-Poisson model of Paninski. Using a spike-

triggered averaging technique, the linear filter finds the preferred direction of a high-dimensional

kinematics vector, which could involve both spatial (2-D) and temporal information if evaluated

in a window. The nonlinear filter demonstrates the neuron's nonlinear property, such as

saturation, thresholding, or refractory period. As the function of the filtered kinematic vectors,

the neuron's nonlinear property is approximated by the conditional probability density of the

spikes according to the Bayesian rule. Although most of the statistical nonlinear neuron

properties are expressed as exponentially increasing curves, we also found diversity among these

properties. This might indicate varying functional tuning roles among neurons. The prescribed

inhomogeneous model embodies the randomness and nonstationary aspects of neural behaviors,

which finally connects the continuous kinematics to the point process. An information theoretic

formulation provides a more detailed perspective when compared with the conventional tuning









curve because it statistically quantifies the amount of information between the kinematic vectors

triggered off by the spike train. As a direct consequence, it can estimate the optimum time delay

between motor cortex neurons and behavior caused by the propagation effects of signals in the

motor and peripheral nervous system.
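Under the assumptions stated above, the spike-triggered-average estimate of the linear filter and the Bayes-rule estimate of the nonlinearity can be sketched as follows; the function and variable names are hypothetical, not the dissertation's implementation.

```python
import numpy as np

def lnp_fit(kinematics, spikes, n_bins=30):
    """Sketch of a Linear-Nonlinear-Poisson fit as described above.

    kinematics : (T, d) array of kinematic vectors
    spikes     : (T,) binary spike indicator
    Returns the spike-triggered-average filter k, the projection-bin centers,
    and the nonlinearity f(k.x) = p(spk=1 | k.x) obtained via Bayes' rule.
    """
    k = kinematics[spikes == 1].mean(axis=0)        # spike-triggered average
    k = k / np.linalg.norm(k)                       # preferred direction
    proj = kinematics @ k
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p_x, _ = np.histogram(proj, bins=edges, density=True)
    p_x_spk, _ = np.histogram(proj[spikes == 1], bins=edges, density=True)
    p_spk = spikes.mean()
    with np.errstate(divide="ignore", invalid="ignore"):
        f = np.where(p_x > 0, p_x_spk * p_spk / p_x, 0.0)   # Bayes' rule
    return k, centers, f
```

On synthetic data with a known preferred direction, the recovered filter aligns with it and f rises monotonically along the projection, reproducing the saturating shapes discussed in the text.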

The similarities and differences between the windowed and instantaneously evaluated

tuning functions were also analyzed. The instantaneous tuning function displayed over 0.9

correlation in the central region w.r.t. the windowed tuning function. The differences in the

high tuning region of the curves, both in the dynamic range and in the estimated value were

much higher and resulted from the overestimation of tuning by the window method at the high

firing rate part of the curve. The instantaneous model works directly in the dynamic range of the

kinematics; therefore it estimates the firing probability more accurately, without distortions from

temporal neighborhood information, and produces equivalent or better encoding results

compared to existing techniques. This outcome builds confidence to directly implement the

instantaneous tuning function into the future online decoding models for Brain-Machine

Interfaces.

The instantaneous tuning function based on the Linear-Nonlinear-Poisson model builds a

non-linear functional relationship from the kinematics to the neuron activity, which is estimating

neural physiologic tuning directly from the spike timing information. This solution works to

a certain extent, but it might not fully describe how the neuron actually fires in response to

given kinematics. For example, it assumes a stationary linear filter and nonlinear tuning

curve; the current modeling is done independently for each neuron without considering the

interactions. Since the accuracy of the encoding model will impact the performance of the









kinematic decoding from the neural activity, further development and validation of the encoding

model is an important aspect to consider.

With the knowledge gained from the neuron physiology function analysis with this signal

processing algorithm, we proposed a Monte-Carlo sequential estimation for point process (PP)

adaptive filtering to convert the Brain Machine Interfaces decoding problem to state sequential

estimation. We reconstruct the kinematics as the state directly from the neural spike trains. The

traditional adaptive filtering algorithms were well established to represent the temporal evolution

of a system with continuous measurements on signals, such as Kalman filter, least square

solution and gradient decent searching. They are of limited use when it comes to BMI decoding

in the spike domain, where only the recorded neural spiking time matters and the amplitude

information of the signals is absent. A recently proposed point process adaptive filtering

algorithm uses the probability of a spike occurrence (which is a continuous variable) and the

Chapman-Kolmogorov Equation to estimate parameters from discrete observed events. As a two-

step Bayesian approach, it assumes the posterior density of the state given the observation is

Gaussian distributed, which limits accuracy. We presented a Monte Carlo sequential estimation to modify the

amplitude of the observed discrete events by the probabilistic measurement posterior density.

We generated a sequence of samples to estimate the posterior density more precisely, avoiding

the numerical computation of the integral in the C-K Equation through sequential estimation and

weighted Parzen windowing. Due to the smoothing of the posterior density with the Gaussian

kernel from Parzen windowing, we used collapse to easily obtain the expectation of the posterior

density, which leads to a better state estimate than the noisy Maximum A Posteriori. In a

simulation of a one-neuron encoding experiment, the Monte Carlo estimation showed better









capability to probabilistically estimate the state, better approximating posterior density than the

point process adaptive filtering algorithm with Gaussian assumption.
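The sampling-based recursion described above can be sketched as follows. The one-dimensional random-walk state and exponential tuning curve are hypothetical stand-ins for the fitted models, and the resampling rule is a common particle-filtering heuristic; note that the mean of the weighted Gaussian-kernel (Parzen) posterior reduces to the weighted particle mean, which is the collapse step.

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_point_process_step(particles, weights, spike, q=0.05):
    """One step of Monte Carlo sequential estimation for a point-process
    observation (illustrative sketch with assumed models)."""
    # Propagate samples through the (assumed) random-walk state model
    particles = particles + rng.normal(0.0, np.sqrt(q), size=particles.shape)
    # Reweight by the point-process likelihood of the observed spike (0 or 1)
    rate = np.clip(0.1 * np.exp(1.5 * particles), 1e-12, 50.0)
    like = rate ** spike * np.exp(-rate)
    weights = weights * like
    weights = weights / weights.sum()
    # Collapse: the mean of the weighted Parzen posterior is the weighted
    # particle mean, since each Gaussian kernel is centered on a particle
    estimate = np.sum(weights * particles)
    # Resample when the effective sample size degenerates
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights, estimate
```

Feeding a run of spikes drives the estimate toward states with high firing probability, while silence drives it the other way, with the full sample set carrying the non-Gaussian posterior shape between steps.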

The Monte Carlo sequential estimation PP algorithm enables us to use signal-processing

techniques to directly draw information from the timing of discrete events without a Gaussian

assumption. Although it is proposed for the BMI application on motor cortical neurons in this

dissertation, it is theoretically a general non-parametric approach that can infer continuous

signals from point processes without constraints, which can be utilized in many other neuroscience

applications (e.g. visual cortex processing), in communications (network traffic) and in process

optimization. We must point out that this algorithm will not always bring better performance; it depends on how the user assigns the state and builds the models. In addition, the advantage of the approach shows only when the posterior density of the state given the observation cannot be well approximated by a Gaussian distribution, for example, when it is multi-modal or highly skewed. On the other hand, since the pdf information is fully stored and propagated at each time index, computational complexity is a trade-off that the user must weigh. Moreover, we were able to pinpoint and quantify for motor BMIs the performance price paid

by the Gaussian assumption. Towards this goal, we compared performance with the Kalman

filter PP applied to a cursor control task, and concluded that the Monte Carlo PP framework

showed statistically better results (all the p-values of the pair-wise t-test on NMSE are smaller than

0.02) between the desired and estimated trajectory. We should mention that this improvement in

performance is paid for with much more demanding computation and much more detailed

information about the decoding model for each neuron.

Although spike trains are very telling of neuronal function, they are also far removed

from the macroscopic time scales of behavior. Therefore, a central question in modeling brain









function in behavior experiments is how to optimally bridge the time scale between spike events

(milliseconds) and the time scale of behavior (seconds). Most often, the relatively rudimentary

method of time averaging (binning spikes) is used to bridge the gap, but it excludes the rich

information embedded in the high resolution of the spike representation. Model-based

methodologies, including an encoding model linking the firing times to state variables such as the ones

presented here seem to be a much more principled way to model the hierarchy of scales present

in the nervous system. However, these models are intrinsically stochastic with the encoding

models in use today, so they pose difficulties for real time operation of BMI models.

Although the results are interesting, the signal processing methodologies for spike train

modeling need to be further developed. Many parameters are assumed and must be estimated with significant design expertise, as we studied in terms of decoding performance; these methods are substantially more complex than the ones for random processes. Therefore, we chose the kinematics estimate averaged over many Monte Carlo trials as the measure of algorithm performance.

Still, the results are intrinsically stochastic due to the randomness of the generated spike

trains. In order to achieve more reliable results, we propose a synthetic averaging idea to

generate several sets of spike trains from the estimated firing intensity probability to simulate the

population effects in the cortex. Instead of coarse binning of the neural activity, the model is run several times on regenerated spike observations to reconstruct the kinematics. The performance is averaged over the decoding results in the movement domain to bypass the possible distortion by the nonlinear tuning function that binning in the spike domain would cause.

The synthetic averaging idea provided smoother kinematics reconstruction, which is a promising

result for improved performance.
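The synthetic averaging procedure (kernel-smooth one recorded spike train into an intensity estimate, regenerate several realizations from it, decode each, and average in the movement domain) can be sketched as follows. The decoder here is a stand-in smoothing filter purely for illustration; the dissertation's decoder is the Monte Carlo PP filter, and the rate profile and kernel width are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 1000

# one recorded realization: a Poisson spike train with a slowly varying rate (illustration only)
true_rate = 10.0 + 8.0 * np.sin(np.linspace(0, 4 * np.pi, T))        # spikes/s
spikes = (rng.random(T) < true_rate * dt).astype(float)

# 1) estimate the intensity function by Gaussian kernel smoothing of the single realization
width = 20                                                           # kernel width in bins (assumption)
k = np.exp(-0.5 * (np.arange(-3 * width, 3 * width + 1) / width) ** 2)
k /= k.sum() * dt                                                    # normalize to spikes/s
intensity = np.convolve(spikes, k, mode="same")

# 2) regenerate M synthetic spike trains from the estimated intensity
M = 20
synthetic = (rng.random((M, T)) < intensity * dt).astype(float)

# 3) decode every realization and average in the *movement* domain
def toy_decode(s):
    # stand-in decoder (simple smoothing); the dissertation uses the Monte Carlo PP decoder
    return np.convolve(s, np.ones(30) / 30.0, mode="same")

single = toy_decode(spikes)
averaged = np.mean([toy_decode(s) for s in synthetic], axis=0)
# averaging over realizations removes much of the single-trial spike-timing variance
```

The key design choice, as argued in the text, is that the averaging happens after the (possibly nonlinear) decoding, not on the spike trains themselves.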









However, synthetic averaging is effectively averaging the timing information that one

seeks in this class of methods in the first place. Therefore, the interesting observation is that it

seems to indicate that spike timing has no effect on performance; otherwise, performance should decrease when we use the synthetic examples. This issue is hard to quantify due to the many

factors at play and the lack of ground truth to compare absolute performance. We briefly explain

the issues below, but this is an open problem that deserves much more research.

First, the way we generate the synthetic spike trains is to obtain an estimate of the intensity

function (firing probability) of a single neuron by kernel smoothing. This obviously will always

produce a biased estimate of the intensity function, a bias that will be present in all the realizations.

However, the averaging of kinematic responses will decrease the variance of the estimated

kinematics, as we have seen in the results: NMSE is reduced by 26% for position along x, 18% for

position along y, and on average 15% for all 6 kinematic variables. But this process of averaging

effectively puts us back into the realm of rate models if we look at the input side (spike trains).

We think that further analysis distinguishing the linear and the nonlinear models is necessary. If we do synthetic averaging in the Kalman PP, where the neuron tuning function is linear, synthetic averaging would indeed be equivalent to inputting the continuous firing rates when the number of realizations is infinite. However, since the neuron tuning function is developed based on the LNP model, averaging the neural activity (binned or smoothed spike rates) would be conceptually different from averaging the nonlinear outputs of the tuning: as a simple example, in general f(E[x]) ≠ E[f(x)], where E[·] is the expectation operator. Besides, synthetic averaging is coupled with the LNP encoding model designed specifically for spike trains, which models the kinematics as triggered by the spike timing. This quantity cannot currently be estimated









from continuous firing-rate inputs since there is no corresponding encoding modeling method

available.
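The inequality f(E[x]) ≠ E[f(x)] invoked above is easy to check numerically for an exponential-style nonlinearity (a sketch; the Gaussian input is only an illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100_000)   # a toy input with zero mean and unit variance

f = np.exp                               # an exponential nonlinearity, as in an LNP tuning curve
lhs = f(x.mean())                        # nonlinearity of the average ("rate-like" path)
rhs = f(x).mean()                        # average of the nonlinear outputs

# For a standard normal, E[exp(X)] = exp(1/2) ~ 1.649 while exp(E[X]) = 1,
# so averaging before or after the nonlinearity gives different answers.
print(round(float(lhs), 3), round(float(rhs), 3))
```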

The synthetic averaging is an attempt not only to bridge the time-resolution difference

between neuron activity and the kinematics, but also to reduce the variance of the spike timing

introduced by a single realization of the neural recordings. Alternative methods that can reduce the

variance of the estimate without reducing temporal resolution need to be investigated, but are not

known to us at this moment.

In addition to comparing our Monte Carlo SE to the Kalman PP to evaluate the effect of linear/nonlinear tuning and Gaussian/non-Gaussian posterior densities, we further investigated the decoding performance by comparison to other decoding methods. The difference between the statistical reconstruction results of the Kalman PP and of adaptive filtering on the point process with an exponential tuning function shows the importance of an accurate encoding model. The

linear tuning curve works better for kinematics along y (e.g., NMSE of position y, linear vs. exponential: 1.043 ± 0.245 vs. 1.498 ± 0.655), while the exponential tuning curve works better for kinematics along x (e.g., NMSE of position x, exponential vs. linear: 0.667 ± 0.202 vs. 0.897 ± 0.305). However, neither encoding model could capture more information than the Monte Carlo SE, which provides the best decoding results (e.g., NMSE of position x and y: 0.563 ± 0.186 and 0.964 ± 0.322). This is because the Monte Carlo SE uses the instantaneous encoding estimated directly from the data without closed-form assumptions.

Let's come back to the motivation of developing signal processing techniques on the point

process, where we wonder if the spike timing contains richer information than the conventional

spike rates. One straightforward way is to compare decoding performance between the spike-rate and point-process domains. Since our algorithm is developed based on the state-observation









model, it is natural to start by comparing the Kalman filter and the Kalman PP. Both methods have linear tuning and assume a Gaussian-distributed posterior density. The large performance drop between training and test for the Kalman filter indicates over-fit tuning-model parameters caused by the blurred time information of the neural activity. The Kalman PP works directly on the point process, which overcomes this problem with a smaller performance difference between training and testing sets. However, the finer time resolution of the neural activity makes the Gaussian approximation of the posterior density poorer, which does not necessarily produce better results. Comparing the performance of the Monte Carlo SE, which estimates the posterior density more accurately, the performance in the spike domain is slightly better (CC of 2-D position: 0.7776 ± 0.0886, 0.7545 ± 0.1543) than in the continuous spike-rate domain (CC of 2-D position: 0.7463 ± 0.0703, 0.7397 ± 0.1003).

The slightly better performance is not as strong as we expected in corroborating the hypothesis that the richer dynamic information in spike timing is needed in motor BMIs. By only checking values of the performance criterion, it would be too hasty to conclude that spike trains contain no more information than spike rates. We should look carefully into how the two methods are implemented and under what circumstances each shows an advantage.

The Kalman filter infers the kinematics from continuous spike rates simply and analytically in closed form, with a linear model and a Gaussian assumption on the posterior. Our proposed Monte Carlo sequential estimation enables filtering on the point process, but it would show clearly better performance only if the pdf of the state given the experimental observation is multi-modal or highly skewed most of the time. One possible reason for the only slightly better performance here could be the state variable we are modeling. Currently we build the probabilistic approach to infer the 2-D position, velocity and acceleration, which are a final









representation of a combination of complicated muscle movements that are initiated by motor neuron spiking. Those combinations can be regarded as low-pass filtering or weighted-averaging operations on the neural activities, which might make the linear-function and Gaussian assumptions of the Kalman filter easy to satisfy. In addition, the larger time-resolution gap from spike timing, compared with spike rates, makes the decoding job harder for the Monte Carlo SE. If we had access to synchronous EMG (electromyographic) signals, which have a much higher time resolution than the kinematics because they respond to motor neuron firing without much averaging and with a smaller time-resolution gap, it might be a better setting for Monte Carlo sequential estimation to show its decoding advantages.

Compared to the Kalman filter with its fixed linear model, our proposed approach, as a non-parametric method without constraints, enables us to build neuron physiological tuning knowledge, estimated simply from spike timing, into the decoding framework. The instantaneous LNP model we currently use may not be optimal, which could also explain the "slightly" but not "obviously" better performance. A better encoding model should bring the potential to improve BMI decoding performance and therefore to evaluate more fairly whether spike timing contains more information than spike rates.

In an effort to reduce the computational complexity of multi-channel BMI, we proposed mutual information based on the instantaneous tuning function to select a neuron subset in terms of importance to the movement task. Among the 30 selected neurons, 70% are located in M1. The decoding performance has an NMSE close to or even lower than that of the full neuron ensemble, with much lower computational complexity.
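A plug-in (histogram-based) estimate of the mutual information between a neuron's spike counts and a kinematic variable can serve as such a ranking criterion. The sketch below uses a synthetic ensemble in which only one neuron is tuned; the bin counts, tuning function, and ensemble size are assumptions for illustration, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_information(counts, kin, n_bins=8):
    """Plug-in MI (bits) between spike counts and a discretized kinematic variable."""
    edges = np.quantile(kin, np.linspace(0, 1, n_bins + 1)[1:-1])
    kq = np.digitize(kin, edges)                       # quantile-discretized kinematics
    joint, _, _ = np.histogram2d(counts, kq, bins=[int(counts.max()) + 1, n_bins])
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# synthetic ensemble: only neuron 0 is velocity-tuned (an illustration)
T = 5000
vel = rng.standard_normal(T).cumsum() * 0.01           # toy velocity trace
ensemble = rng.poisson(2.0, size=(10, T))              # untuned background neurons
ensemble[0] = rng.poisson(np.exp(1.0 + 1.0 * vel))     # LNP-style tuned neuron

mi = np.array([mutual_information(ensemble[j], vel) for j in range(10)])
ranked = np.argsort(mi)[::-1]
print("neurons ranked by MI:", ranked)                 # the tuned neuron should come first
```

Keeping only the top-ranked subset is what reduces the decoding cost, at the price of the small estimation bias inherent in plug-in MI estimators.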

Future Work

As we have described for the challenges in BMI, the Monte Carlo SE is designed to derive kinematics directly from the spike domain without linear and Gaussian assumptions. The instantaneous encoding model evaluates the tuning property directly from the data without a closed-form assumption such as linear or exponential. We have also developed the synthetic averaging idea in an effort to bridge the time gap between the neural activities and the movement. The information-theoretic criterion is proposed to reduce the computational complexity by decoding with only a subset of the neurons. There are still some aspects we could work on in the future: 1) the association among neurons, and 2) the nonstationary tracking of the neuron tuning properties during the decoding process.

In our current approach, the posterior density of the kinematics given multi-channel spike

observations is obtained with the conditional independent assumption among all the neurons.

This runs counter to concerns about neuron associations. One solution might be to modify the neuron tuning function so that it takes into account not only the kinematics but also neurons with synchronized behavior. In this way, we would also build functional structure between the neurons' firing information and make our approach more realistic.

In our preliminary BMI decoding results, we used the statistically fixed tuning function to

reconstruct the monkey's movements from the multi-channel neuron spike trains. The preferred

kinematic direction, which is represented by the linear filter in the tuning function model, is

constant for each neuron. The nonlinearity of the neuron tuning curve remains constant

throughout the decoding. As we analyzed the decoding performance in training and testing data

in different segments, it clearly shows that the reconstruction in the testing segment, which is far

away from the training set, is poor. This is because the stationarity assumption conflicts with nonstationary neuron firing patterns. If we can analyze the amount of information that a neuron conveys through its firing changes, could we account for it in the decoding?









Awareness of the nonstationary properties of neuron firing behavior suggests updating the parameters of the tuning function model at each time step. The preferred kinematic

direction could deviate slightly from the direction at the previous time iteration. Approximating

both movements and linear filter weights is a dual estimation problem. In the dual extended

Kalman filter [Wan & Nelson, 1997] and the joint extended Kalman filter [Matthews, 1990], the

dual estimation problem was addressed with differing solutions. In the dual extended Kalman

filter, a separate state-space representation is used for both the signal and the weights. At every

time step, the current estimation of the weights is used as a fixed parameter in the signal filter,

and vice versa. The joint extended Kalman filter combines signal and weights into a single joint

state vector, and runs the estimation simultaneously. Since there are 185 neurons recorded

simultaneously with the movement task, to explore the joint state vector with both signal and

weights within such a high-dimensional space could require a huge number of samples. We apply

here the dual methods to our BMI decoding to deal with the nonstationary neuron tuning

function.

We started with the simplest case, a Kalman filter working on the continuous binned spike rates, to show preliminary results of the dual idea. To apply the Kalman filter to our BMI data, the state dynamics remain the same as

x_t = F_t x_{t-1} + η_t    (5-1)

where F_t establishes the dependence on the previous state and η_t is zero-mean Gaussian distributed noise with covariance Q1_t. F_t is estimated from the training data by the least-squares solution. Q1_t is estimated as the variance of the error between the linear model output and the

desired signal. The tuning function is linearly defined as

λ_t = H_t x_{t+lag} + n1_t + n2_t    (5-2)









where λ_t is the firing rate obtained by 100-ms window binning. x_t is the instantaneous kinematics vector defined as [p_x v_x a_x p_y v_y a_y 1]^T, with 2-dimensional information of position, velocity,

acceleration and bias term. The variable lag refers to the causal time delay between motor

cortical neuron activity and kinematics due to the propagation of signals through the motor

and peripheral nervous systems. Here it is experimentally set as 200 ms [Wu et al., 2006; Wang

et al., 2007c].


In the traditional Kalman filter, the weight estimate of the linear tuning function H_t is given from the training data by

H_t = (E[x_{t+lag} x_{t+lag}^T])^{-1} E[x_{t+lag} λ_t]    (5-3)
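Equation 5-3 is the ordinary least-squares solution for the tuning weights. A small numpy sketch on synthetic training data (the dimensions, noise level, and "true" weights are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
T, d = 2000, 7                    # training samples; kinematic vector [px vx ax py vy ay 1]

X = np.hstack([rng.standard_normal((T, d - 1)), np.ones((T, 1))])   # rows are x_{t+lag}^T
H_true = rng.standard_normal(d)                                     # "true" tuning weights
rates = X @ H_true + 0.1 * rng.standard_normal(T)                   # noisy binned firing rates

# Equation 5-3: H = (E[x x^T])^{-1} E[x * lambda], estimated from the training data
H_hat = np.linalg.solve(X.T @ X, X.T @ rates)

print("max weight error:", float(np.max(np.abs(H_hat - H_true))))
```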
Different from traditional Kalman filter, the linear filter weights in the tuning function, which

represent the preferred kinematic direction, are modeled as a slowly changing random walk in

dual Kalman filter. In this way, the dual estimation on tuning function parameters would

demonstrate the transformation of the neuron encoding.

H_t^j = H_{t-1}^j + u_t^j    (5-4)


where H_t^j represents the linear tuning parameters of neuron j at time index t. Here we only model the tuning parameters of the 10 most important neurons, as selected in Chapter 4 by the information-theoretic criterion. The tuning parameters of the 10 neurons change over time with dependence on the previous tuning parameters. (·)^T represents the transpose operation. u_t^j is zero-mean Gaussian distributed noise with covariance Q2_k.

n1_k is zero-mean Gaussian distributed noise with covariance R1_k, which is contributed by the noisy kinematic states. n2_k is zero-mean Gaussian distributed noise with covariance R2_k,









which is contributed by the changing tuning parameters. At each time index, the kinematics

vector is first derived as the state from the observation of firing rate in test by Equations 5-5 a-e.

x_{k|k-1} = F_k x_{k-1|k-1}    (5-5 a)

P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q1_k    (5-5 b)

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R1_k)^{-1}    (5-5 c)

P_{k|k} = (I - K_k H_k) P_{k|k-1}    (5-5 d)

x_{k|k} = x_{k|k-1} + K_k (λ_k - H_k x_{k|k-1})    (5-5 e)
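One predict/update cycle of Equations 5-5 a-e can be written compactly as below (a sketch; the toy dimensions, noise covariances, and observation values are assumptions):

```python
import numpy as np

def kalman_step(x, P, lam, F, Q1, H, R1):
    """One predict/update cycle of Equations 5-5 a-e."""
    x_pred = F @ x                                   # (5-5 a)
    P_pred = F @ P @ F.T + Q1                        # (5-5 b)
    S = H @ P_pred @ H.T + R1                        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # (5-5 c)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred        # (5-5 d)
    x_new = x_pred + K @ (lam - H @ x_pred)          # (5-5 e)
    return x_new, P_new

# toy usage: a 2-D state observed through one firing-rate channel (all values arbitrary)
x0, P0 = np.zeros(2), np.eye(2)
F = 0.9 * np.eye(2); Q1 = 0.1 * np.eye(2)
H = np.array([[1.0, 0.0]]); R1 = np.array([[0.5]])
x1, P1 = kalman_step(x0, P0, np.array([1.0]), F, Q1, H, R1)
```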

After the kinematics state is estimated from the observation, the tuning parameters for each

neuron are then estimated by another Kalman filter by Equations 5-6 a-d.

Ph_{k|k-1} = Ph_{k-1|k-1} + Q2_k    (5-6 a)

Kh_k = Ph_{k|k-1} x_k (x_k^T Ph_{k|k-1} x_k + R2_k)^{-1}    (5-6 b)

Ph_{k|k} = (I - Kh_k x_k^T) Ph_{k|k-1}    (5-6 c)

H_k = H_{k-1} + Kh_k (λ_k - H_{k-1} x_k)    (5-6 d)
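The tuning-parameter update of Equations 5-4 and 5-6 a-d is itself a Kalman filter in which the estimated kinematics vector plays the role of the observation matrix. A sketch on a synthetic, slowly drifting weight vector (all parameters, the drift rate, and the noise levels are illustrative assumptions):

```python
import numpy as np

def tuning_step(h, Ph, lam_j, x, Q2, R2):
    """One random-walk update of a neuron's tuning weights (Equations 5-4, 5-6 a-d)."""
    Ph_pred = Ph + Q2                                # (5-6 a): random-walk predict
    S = x @ Ph_pred @ x + R2                         # scalar innovation variance
    Kh = Ph_pred @ x / S                             # (5-6 b): kinematics act as the observation row
    Ph_new = Ph_pred - np.outer(Kh, x @ Ph_pred)     # (5-6 c)
    h_new = h + Kh * (lam_j - h @ x)                 # (5-6 d)
    return h_new, Ph_new

# track a synthetic, slowly drifting preferred direction
rng = np.random.default_rng(5)
d = 3
h_true = np.array([1.0, -0.5, 0.2])
h_est, Ph = np.zeros(d), np.eye(d)
for t in range(400):
    h_true = h_true + 0.002 * rng.standard_normal(d)     # slow drift of the true weights
    x = rng.standard_normal(d)                           # kinematics sample
    lam = h_true @ x + 0.05 * rng.standard_normal()      # noisy firing rate
    h_est, Ph = tuning_step(h_est, Ph, lam, x, 1e-3 * np.eye(d), 0.05 ** 2)

print("final tracking error:", float(np.linalg.norm(h_est - h_true)))
```

Alternating this step with the kinematics filter of Equations 5-5 is the dual estimation scheme described in the text.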

Notice that carefully choosing the noise parameters (covariance Q1_k in the state dynamic model and covariance Q2_k in the tuning dynamic model) could affect the algorithm

performance. However, since we have no access to the desired kinematics in the test data set, the

parameter estimations of both algorithms were obtained from the training data sets. For the

Kalman filter, the noise in the kinematics model (Equation 5-1) is approximated by a Gaussian

distribution with covariance Q1_k. We set the initial state x_0 to be the zero vector, and the state variance P_0 is estimated as the state variance from the training data.










The initial tuning parameter H_0 can be set as the one estimated from training by least squares. Setting the variance parameters Q2_k and Ph_{0|0} in the tuning dynamic model is somewhat different. This is because we have access to a series of the stochastic kinematic signals in the training set, but only a deterministic result for the tuning parameters from the least-squares solution. In order to obtain a series of tuning parameters changing over time, we run the dual Kalman filter (Equations 5-6 a-d) to estimate the tuning parameters over time in the training set, where the kinematic state is set directly to the true value. Since in the testing set the noise is always contributed by two terms, the noisy kinematic state and the noisy tuning parameters, here we set the covariance Q2_k of the noise term u_k in the tuning dynamics to only 20% of the noise variance approximated by (H_t - H_{t-1}) from the time series of the tuning parameters. The variance Ph_{0|0} is also set to 20% of the variance of the time series of tuning parameters estimated from the training data.

Table 5-1 shows reconstruction results on an 800-sample segment (time index from 213.5 s to 293.5 s) of test neural data by the Kalman filter and by the dual Kalman filter with tuning-parameter modification on the 10 most important neurons, using as criterion the Normalized Mean Square Error (MSE normalized by the power of the desired signal) between the desired signal and the estimations.
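The NMSE criterion used in Table 5-1 is simply the MSE divided by the power of the desired signal; as a sketch (the sine test signal is only an illustration):

```python
import numpy as np

def nmse(desired, estimated):
    """MSE normalized by the power of the desired signal, as used in Table 5-1."""
    d = np.asarray(desired, dtype=float)
    e = np.asarray(estimated, dtype=float)
    return float(np.mean((d - e) ** 2) / np.mean(d ** 2))

# a perfect estimate scores 0; predicting all zeros scores exactly 1
t = np.linspace(0, 1, 100)
desired = np.sin(2 * np.pi * t)
print(nmse(desired, desired), nmse(desired, np.zeros(100)))   # → 0.0 1.0
```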

Table 5-1 shows that the dual Kalman filter obtained lower NMSE than the Kalman filter with fixed

tuning parameters for all the kinematics. Figure 5-1 shows the reconstruction performance by

Kalman filter and dual Kalman filter on 10 most important neurons for 1000 test samples. The

left and right column plots display the reconstructed kinematics for x-axis and y-axis. The 3 rows

of plots illustrate from top to bottom the reconstructed position, the velocity and the acceleration.









In each subplot, the red line indicates the desired signal, the green line indicates the estimation

by the Kalman filter and the blue line indicates the estimation by the dual Kalman filter. We zoom in on the position reconstruction in the plots. The dual Kalman filter provides better estimation at the peaks of the desired signal than the Kalman filter, because the tuning parameters are slowly adjusted over time. Figure 5-2 shows the tracking of the tuning parameters of the 10 neurons estimated by the dual Kalman filter on the test set. As expected, we see a slow change of the parameters over time. Neuron 72 and neuron 158 show divergence of the parameter change, which appears only when a pair or pairs of parameters change quickly over time. We can infer that, after the linear projection, a pair of quickly changing weights can still result in a slow change of the linear output.

The preliminary results of the dual Kalman filter show the possibility of tracking the nonstationary tuning properties of the motor neurons. As we know from the experiment, the results are very sensitive to the parameter settings; a systematic way to decide the optimal parameters could be studied. The algorithm should also be tested on longer data in the future.










Table 5-1. Results of the kinematics reconstructions by Kalman and dual Kalman for segment of test data

NMSE          Position x   Position y   Velocity x   Velocity y   Acceleration x   Acceleration y
Kalman        0.5706       0.5222       0.4747       0.4733       0.6752           0.8153
Dual Kalman   0.5574       0.5170       0.4740       0.4725       0.6698           0.8100


[Figure 5-1 image: position, velocity (Vx, Vy) and acceleration (Ax, Ay) panels for the x-axis (left column) and y-axis (right column), plotted against time t; legend: desired, Kalman, Dual Kalman]

Figure 5-1. The reconstructed kinematics for 2-D reaching task by Kalman and dual Kalman filter













[Figure 5-2 image: ten panels (neurons 67, 72, 76, 77, 80, 81, 85, 98, 107, 158), each plotting the tuning-parameter weights against time (0-800 samples)]

Figure 5-2. The tracking of the tuning parameters for the 10 most important neurons in dual Kalman filter







LIST OF REFERENCES


Ashe, J., & Georgopoulos, A. P. (1994). Movement parameters and neural activity in motor cortex
and area 5. Cereb. Cortex. 6, 590-600

Abeles, M. (1982). Quantification, smoothing, and confidence limits for single-unit histograms, J.
Neurosci. Methods. 5, 317-325

Arieli, A., Shoham, D., Hildesheim, R., & Grinvald, A. (1995). Coherent spatiotemporal patterns of
ongoing activity revealed by realtime optical imaging coupled with single-unit recording in
the cat visual cortex, J Neurophysiol. 73, 2072-2093.

Arulampalam, M. S., Maskell, S., Gordon, N., & Clapp, T. (2002). A tutorial on particle filters for
online nonlinear/non-gaussian bayesian tracking. IEEE Trans. on Signal Processing. 50(2),
174-188

Bergman, N. (1999). Recursive Bayesian estimation: Navigation and tracking applications, Ph.D.
dissertation, Linkoping University, Sweden

Borst, A., & Theunissen, F. E. (1999). Information, Information theory and neural coding. Nat.
Neurosci.. 2, 947-957

Bourien, J., Bartolomei, F., Bellanger, J. J., Gavaret, M., Chauvel, P., & Wendling, F. (2005). A
method to identify reproducible subsets of co-activated structures during interictal spikes.
Application to intracerebral EEG in temporal lobe epilepsy, Clin Neurophysiol. 116(2), 443-
55

Brillinger, D. R. (1992). Nerve cell spike train data analysis: a progression of techniques, J. Amer.
Stat. Assoc. 87, 260-271

Brockwell, A. E., Rojas, A. L., & Kass, R. E. (2004). Recursive Bayesian decoding of motor
cortical signals by particle filtering. J Neurophysiol. 91, 1899-1907

Brody, C. D. (1999). Correlations without synchrony, Neural Comput. 11, 1537-1551

Brown, E. N., Frank, L., & Wilson, M. (1996). Statistical approaches to place field estimation and
neuronal population decoding. Soc. of Neurosci. Abstr. 26, 910

Brown, E. N., Frank, L. M., Tang, D., Quirk, M. C., & Wilson, MA (1998). A statistical paradigm
for neural spike train decoding applied to position prediction from ensemble firing patterns
of rat hippocampal place cells. J. Neurosci. 18, 7411-25

Brown, E. N., Nguyen, D. P., Frank, L. M., Wilson, M. A., & Solo, V. (2001). An analysis of
neural receptive field plasticity by point process adaptive filtering. PNAS, 98 12261-12266

Brown, E. N., Barbieri, R., Ventura, V., Kass, R. E., & Frank, L. M. (2002). The time-rescaling
theorem and its application to neural spike train data analysis. Neural Computation. 14,
325-346









Brown, E., Kass, R., & Mitra, P. P. (2004). Multiple neural spike train data analysis: state-of-the-art
and future challenges. Nature Neurosci. 7(5), 456-461

Carmena, J. M., Lebedev, M. A., Crist, R. E., O'Doherty, J. E., Santucci, D. M., Dimitrov, D. F.,
Patil, P. G, Henriquez, C. S., & Nicolelis, M. A. L. (2003). Learning to control a brain
machine interface for reaching and grasping by primates. PLoS Biology. 1(2), 193-208

Carpenter, J., Clifford, P., & Fearnhead, P. (1999). Improved particle filter for non-linear problems.
in IEE Proc. on Radar and Sonar Navigation. 136(1), 2-7

Chan, K. S., & Ledolter, J. (1995) Monte Carlo estimation for time series models involving counts.
J. Am. Stat. Assoc. 90, 242-252

Chandra, R., & Optican, L. M. (1997). Detection, classification, and superposition resolution of
action-potentials in multiunit single-channel recordings by an online real-time neural-
network. IEEE Trans. Biomed. Eng. 44, 403-12

Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network:
Comput. Neural Syst. 12, 199-213

DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1993) The spatiotemporal organization of simple
cell receptive fields in the cat's striate cortex. II. Linearity of temporal and spatial
summation. Journal of Neurophysiology. 69, 1118-1135

DeBoer, E., & Kuyper, P. (1968). Triggered correlation. IEEE Trans BiomedEng. 15,169-179

Diggle, P. J., Liang, K-Y., & Zeger S. L (1995). Analysis of longitudinal data. Oxford: Clarendon

Doucet, A. (1998). On sequential monte carlo sampling methods for Bayesian filtering. Department
of Engineering, University of Cambridge, UK, Tech. Rep.

Eden, U. T., Frank, L. M., Barbieri, R., Solo, V., & Brown, E. N. (2004). Dynamic analysis of
neural encoding by point process adaptive filtering. Neural Comput. 16(5), 971-998

Eggermont, J. J., Johannesma, P. I. M., & Aertsen, A. M. H. J. (1983) Reverse-correlation methods
in auditory research. Q. Rev. Biophysics. 16, 341-414

Fee, M. S., Mitra, P. P. & Kleinfeld D (1996). Automatic sorting of multiple-unit neuronal signals
in the presence of anisotropic and non-Gaussian variability. J. Neurosci. Meth. 69, 175-88

Frank, L. M., Eden, U. T., Solo, V., Wilson, M. A., & Brown, E. N. (2002). Contrasting patterns of
receptive field plasticity in the hippocampus and the entorhinal cortex: An adaptive filtering
approach. Journal of Neuroscience. 22, 3817-3830

Frank, L. M., Stanley, G. B., & Brown, E. N. (2004). Hippocampal plasticity across multiple days of
exposure to novel environments. Journal of Neuroscience. 24, 7681-7689









Fritsch, G., & Hitzig, E. (1870). Ueber die elektrische Erregbarkeit des Grosshirns. Arch. Anat.
Physiol. Lpz. 37, 300-332

Gabbiani, F, & Koch, C. (1998). Principles of spike train analysis. In: Koch C, Segev I, editors.
Methods in Neuronal Modeling: From Ions to Networks, 2nd edition. Cambridge MA: MIT,
313-60

Georgopoulos, A. P., Kalaska, J. F., Caminiti, R., & Massey, J. T. (1982). On the relations between
the direction of two-dimensional arm movements and cell discharge in primate motor
cortex. J. Neurosci. 2,1527-1537

Georgopoulos, A. P., Schwartz, A. B., & Kettner, R. E. (1986). Neuronal population coding of
movement direction. Science. 233, 1416-1419

Georgopoulos, A. P., Lurito, J. T., Petrides, M., Schwartz, A. B., & Massey, J. T. (1989). Mental
rotation of the neuronal population vector. Science. 243, 234-236

Gerstein, G. L., & Perkel, D. H. (1969). Simultaneously recorded trains of action potentials:
analysis and functional interpretation. Science. 164, 828-830

Gozani, S. N., & Miller, J. P. (1994). Optimal discrimination and classification of neuronal action-
potential wave-forms from multiunit, multichannel recordings using software-based linear
filters. IEEE Trans. Biomed. Eng. 41, 358-72

Gordon, N., Salmond, D., & Smith, A. F. M. (1993). Novel approach to nonlinear and non-gaussian
bayesian state estimation. in IEEproceedings-F. 140, 107-113.

Haykin, S. (2002). Adaptive filter theory. Prentice-Hall.

Hensel, H., & Witt, I. (1959). Spatial temperature gradient and thermoreceptor stimulation. J
Physiol. 148(1), 180-187

Jammalamadaka, S. R., & SenGupta, A. (1999). Topics in Circular Statistics. River Edge, NJ:
World Scientific Publishing Company.

Jones, J. P., & Palmer, L. A. (1987) The two-dimensional spatial structure of simple receptive
fields in cat striate cortex. Journal of Neurophysiology. 58(6), 1187-1211

Kass, R. E., & Ventura, V. (2001). A spike train probability model. Neural Comput. 13, 1713-1720

Kim, S. P., Sanchez, J. C., Erdogmus, D., Rao, Y. N., Wessberg, J., Principe, J. C., & Nicolelis M.
A. (2003). Divide-and-conquer approach for brain machine interfaces: nonlinear mixture of
competitive linear models. Neural Network., 16, 865-871

Kim, S. P. (2005). Design and analysis of optimal encoding models for brain machine interfaces.
PhD. Dissertation. University of Florida









Lewicki, M. S. (1998). A review of methods for spike sorting: the detection and classification of
neural action potentials. Network Comput. Neural Syst. 9, R53-R78

Leyton, A. S. F., & Sherrington, C. S. (1917). Observations on the excitable cortex of the chimpanzee,
orang-utan and gorilla. Q. J. Exp. Physiol. 11, 135-222

Makeig, S., Jung, T-P., Bell, A. J., Ghahremani, D., & Sejnowski, T. J. (1997). Blind separation of
auditory event-related brain responses into independent components. Proc. Natl. Acad. Sci.
USA. 94, 10979-84

Marmarelis, P. Z., & Naka, K. (1972). White-noise analysis of a neuron chain: An application of
the Wiener theory. Science. 175, 1276-1278

Martignon, L. G., Laskey, K., Diamond, M., Freiwald, W., & Vaadia E. (2000). Neural coding:
higher-order temporal patterns in the neurostatistics of cell assemblies. Neural Comput. 12,
2621-2653

Matthews, M. B. (1990). A state-space approach to adaptive nonlinear filtering using recurrent
neural networks. In Proceedings IASTED Internat. Symp. Artificial Intelligence Application
and Neural Networks. 197-200, 1990

McKeown, M. J., Jung, T-P., Makeig, S., Brown, G., Kindermann, S. S., Lee, T-W, & Sejnowski, T.
J. (1998). Spatially independent activity patterns in functional magnetic resonance imaging
data during the Stroop color-naming task. Proc. Natl. Acad. Sci. USA. 95, 803-810

McLean, J., & Palmer, L. A. (1989). Contribution of linear spatiotemporal receptive field structure
to velocity selectivity of simple cells in area 17 of cat. Vision Research. 29, 675-679

Mehta, M. R., Quirk, M. C., & Wilson, M. A. (2000). Experience-dependent, asymmetric shape of
hippocampal receptive fields. Neuron. 25, 707-715

Mehring, C., Rickert, J., Vaadia, E., de Oliveira, S. C., Aertsen, A., & Rotter, S. (2003). Inference
of hand movements from local field potentials in monkey motor cortex. Nature
Neuroscience. 6(12), 1253-1254

Meister, M., Pine, J., & Baylor, D. A. (1994). Multi-neuronal signals from the retina: acquisition
and analysis. J. Neurosci. Meth. 51, 95-106

Moran, D. W., & Schwartz, A. B. (1999). Motor cortical representation of speed and direction
during reaching. J. Neurophysiol., 82, 2676-2692

Nicolelis, M. A. L., Ghazanfar, A. A., Faggin, B., Votaw, S., & Oliveira, L.M.O. (1997)
Reconstructing the engram: simultaneous, multiple site, many single neuron recordings.
Neuron. 18, 529-537.

Nirenberg, S., Carcieri, S. M., Jacobs, A. L. & Latham, P. E. (2001). Retinal ganglion cells act
largely as independent encoders. Nature. 411, 698-701

Okatan, M., Wilson, M. A., Brown, E. N. (2005). Analyzing functional connectivity using a
network likelihood model of ensemble neural spiking activity. Neural Comput. 17, 1927-1961

O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence
from unit activity in the freely moving rat. Brain Res., 34, 171-175.

Paninski, L. (2003). Convergence properties of some spike-triggered analysis techniques. Network:
Computation in Neural Systems. 14, 437-464

Paninski, L., Fellows, M. R., Hatsopoulos, N. G., & Donoghue, J. P. (2004a) Spatiotemporal tuning
of Motor Cortical Neurons for Hand Position and velocity. J. Neurophysiol. 91, 515-532

Paninski, L., Shoham, S., Fellows, M. R., Hatsopoulos, N. G., & Donoghue, J. P. (2004b).
Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J.
Neurosci., 24(39), 8551-8561

Parzen, E. (1962). On the estimation of a probability function and the mode. Annals of
Mathematical Statistics. 33(14), 1065-1076

Rieke, F., Warland, D., de Ruyter van Steveninck, R. R., & Bialek W. (1997). Spikes: Exploring
the Neural Code. Cambridge. MA: MIT

Reich, D. S., Victor, J. D., & Knight, B. W. (1998). The power ratio and the interval map: spiking
models and extracellular recordings. J. Neurosci. 18, 10090-10104

Reich, D. S., Melcher F. & Victor J. D. (2001). Independent and redundant information in nearby
cortical neurons. Science. 294, 2566-2568

Reid, R. C., & Alonso, J. M. (1995). Specificity of monosynaptic connections from thalamus to
visual cortex. Nature. 378(6554), 281-284

Reza, F. M. (1994). An Introduction to Information Theory. New York: Dover


Roitman, A. V., Pasalar, S., Johnson, M. T. V., & Ebner, T. J. (2005). Position, direction of
movement, and speed tuning of cerebellar Purkinje cells during circular manual tracking in
monkey. J. Neurosci. 25(40), 9244-9257

Sakai, H. M., & Naka, K. (1987). Signal transmission in the catfish retina. v. sensitivity and circuit.
J. Neurophysiol. 58, 1329-1350

Sanchez, J. C., Erdogmus, D., Principe, J. C., Wessberg, J. & Nicolelis, M. A. L. (2002a). A
comparison between nonlinear mappings and linear state estimation to model the relation
from motor cortical neuronal firing to hand movements. Proc. of SAB '02 Workshop on
Motor Control of Humans and Robots: On the Interplay of Real Brains and Artificial
Devices. 59-65

Sanchez, J. C., Kim, S. P., Erdogmus, D., Rao, Y. N., Principe, J. C., Wessberg, J., & Nicolelis, M.
A. (2002b) Input-output mapping performance of linear and nonlinear models for estimating
hand trajectories from cortical neuronal firing patterns. Proc. of Neural Net. Sig. Proc. 139-148

Sanchez, J. C., Carmena, J. M., Erdogmus, D., Lebedev, M. A., Hild, K. E., Nicolelis, M. A.,
Harris, J. G., & Principe, J. C. (2003). Ascertaining the Importance of Neurons to Develop
better brain machine interfaces. IEEE Transactions on Biomedical Engineering. 61, 943-953

Sanchez, J. C. (2004). From cortical neural spike trains to behavior: modeling and analysis. PhD.
Dissertation. University of Florida

Sanchez, J. C., Principe, J. C., & Carney, P. R. (2005). Is Neuron Discrimination Preprocessing
Necessary for Linear and Nonlinear Brain Machine Interface Models? accepted to 11th
International Conference on Human-Computer Interaction

Sanchez, J. C., & Principe, J. C. (2007). Brain-Machine Interface Engineering. New York: Morgan
and Claypool

Schafer, E. A. (1900). The cerebral cortex. In Textbook of Physiology, edited by Schafer, E. A.
London: Young J. Pentland. 697-782

Schwartz, A. B., Kettner, E., & Georgopoulos, A. P. (1988). Primate motor cortex and free arm
movements to visual targets in three-dimensional space. I. Relations between single cell
discharge and direction of movement. J. Neurosci. 8, 2913-2927

Schwartz, A. B. (1992). Motor cortical activity during drawing movements: Single-unit activity
during sinusoid tracing. J. Neurophysiol. 68, 528-541

Schwartz, A. B., Taylor D. M., & Tillery, S. I. H. (2001). Extraction algorithms for cortical control
of arm prosthetics. Current Opinion in Neurobiology. 11(6), 701-708.

Schmidt, E. M. (1980). Single neuron recording from motor cortex as a possible source of signals
for control of external devices. Ann. Biomed. Eng. 339-349

Serruya, M. D., Hatsopoulos, N. G., Paninski, L., Fellows, M. R., & Donoghue, J. P. (2002). Brain-
machine interface: Instant neural control of a movement signal. Nature. 416, 141-142

Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons:
implications for connectivity, computation, and information coding. J. Neurosci. 18, 3870-3896

Sharpee, T., Rust, N. C., & Bialek, W. (2002). Maximally informative dimensions: Analyzing
neural responses to natural signals. Neural Information Processing Systems (NIPS02). 15,
Cambridge, MA. MIT Press

Silverman, B. W. (1981). Using Kernel Density Estimates to Investigate Multimodality. J. Roy.
Stat. Soc., Ser. B. 43, 97-99

Simoncelli, E.P., Paninski, L., Pillow, J., & Schwartz, O. (2004). Characterization of neural
responses with stochastic stimuli. The New Cognitive Neurosci., 3rd edition, MIT Press

Smith, A. C., & Brown, E. N. (2003). State-space estimation from point process observations.
Neural Computation. 15, 965-991

Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. R., & Bialek, W. (1998). Entropy and
information in neural spike trains. Phys. Rev. Lett. 80, 197-200

Suzuki, W. A., & Brown, E. N. (2005). Behavioral and Neurophysiological Analyses of Dynamic
Learning Processes. Behavioral and Cognitive Neuroscience Reviews. 4(2), 67-97

Taylor, D. M., Tillery, S. I. H., & Schwartz A. B. (2002). Direct cortical control of 3D
neuroprosthetic devices. Science. 296, 1829-1832

Todorov, E. (2000) Direct cortical control of muscle activation in voluntary arm movements: a
model. Nature Neuroscience. 3, 391-398

Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P., & Brown, E. N. (2005). A point process
framework for relating neural spiking activity to spiking history, neural ensemble, and
extrinsic covariate effects. J. Neurophysiol. 93, 1074-1089

Tuckwell, H. (1988). Introduction to Theoretical Neurobiology, 2. New York: Cambridge
University Press

Wan, E. A., & Nelson, A. T. (1997). Neural dual extended Kalman filtering: applications in speech
enhancement and monaural blind signal separation. Proc. Neural Networks for Signal
Processing Workshop. IEEE

Wan, E.A., & Van Der Merwe, R. (2000). The unscented Kalman filter for nonlinear estimation.
Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000.
AS-SPCC. The IEEE, 153-158

Wang, Y., Sanchez, J. C., Principe, J. C., Mitzelfelt, J. D., & Gunduz, A. (2006a). Analysis of the
Correlation between Local Field Potentials and Neuronal Firing Rate in the Motor Cortex.
Intl. Conf. of Engineering in Medicine and Biology Society 2006. 6186-6188

Wang, Y., Paiva, A. R. C., & Principe, J. C. (2006b). A Monte Carlo Sequential Estimation for
Point Process Optimum Filtering. IJCNN 2006. 1846-1850

Wang, Y., Paiva, A. R. C., & Principe, J. C. (2007a) A Monte Carlo Sequential Estimation of Point
Process Optimum Filtering for Brain Machine Interfaces. Neural Networks, 2007. IJCNN
'07. International Joint Conference on. 2250-2255

Wang, Y., Sanchez, J., & Principe, J. C. (2007b). Information Theoretical Estimators of Tuning
Depth and Time Delay for Motor Cortex Neurons. Neural Engineering, 2007. CNE '07. 3rd
International IEEE/EMBS Conference on. 502-505

Wang, Y., Sanchez, J., & Principe, J. C. (2007c). Information Theoretical Analysis of
Instantaneous Motor Cortical Neuron Encoding for Brain-Machine Interfaces. IEEE
Transactions on Neural Systems and Rehabilitation Engineering, under review

Wise, S. P., Moody, S. L., Blomstrom, K. J., & Mitz, A. R. (1998). Changes in motor cortical
activity during visuomotor adaptation. Exp. Brain Res. 121(3), 285-299

Wessberg, J., Stambaugh, C. R., Kralik, J. D., Beck, P. D., Laubach, M., Chapin, J. K., Kim, J.,
Biggs, S. J., Srinivasan, M. A., & Nicolelis, M. A. (2000). Real-time prediction of hand
trajectory by ensembles of cortical neurons in primates. Nature. 408, 361-365

Wu, W., Black, M. J., Mumford, D., Gao, Y., Bienenstock, E., & Donoghue, J. P. (2004). Modeling
and decoding motor cortical activity using a switching Kalman filter. IEEE Trans. on
Biomedical Engineering. 51(6), 933-942

Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P., Black, M. J. (2006). Bayesian population
decoding of motor cortical activity using a Kalman filter. Neural Comput. 18, 80-118

Zhang, K. C., Ginzburg, I., McNaughton, B. L., & Sejnowski, T. J. (1998). Interpreting neuronal
population activity by reconstruction: a unified framework with application to hippocampal
place cells. J. Neurophysiol. 79, 1017-1044

BIOGRAPHICAL SKETCH

Yiwen Wang received a B.S. in engineering science with a minor in automatic control from the
University of Science and Technology of China (USTC, Hefei, Anhui, China) in 2001. In 2004, she
received a master's degree in engineering science with a minor in pattern recognition and
intelligent systems from the same university. She then joined the Department of Electrical and
Computer Engineering at the University of Florida, Gainesville, FL, USA, and received a Ph.D. in
2008. Under the guidance of Dr. Jose C. Principe in the computational neuro-engineering lab, she
investigated the application of advanced signal processing and control methods to neural data
for brain machine interfaces (BMIs). Her research interests are in brain machine interfaces,
statistical modeling of biomedical signals, adaptive signal processing, pattern recognition, and
information theoretic learning.


POINT PROCESS MONTE CARLO FILTERING FOR BRAIN MACHINE INTERFACES

By

YIWEN WANG

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2008


© 2008 Yiwen Wang


To my family


ACKNOWLEDGMENTS

My deepest gratitude goes to my advisor, Dr. Jose C. Principe, for his guidance, support, critiques and sense of humor; I feel blessed to be one of his students. Inspired by his suggestions, encouraged by his rigorous comments, relaxed by his jokes, and touched by his caring, I was brought into the fabulous world of research. It is Dr. Principe who taught me how to think as a researcher.

My gratitude also goes to Dr. Justin C. Sanchez, Dr. John G. Harris, and Dr. Dapeng Wu, who are members of my committee, and to Antonio Paiva, for their suggestions and help in my research. I am also grateful to Aysegul Gunduz, John DiGiovanna, Shalom Darmanjian, Ruijiang Li, Weifeng Liu, Dr. Rui Yan, Dr. Jianwu Xu, and Dr. Dongming Xu for their incredible support and caring. I would also like to express gratitude to Julie Veal, who helped me with my English writing.

Last but not least, I am indebted to my mother, my father and my fiancé, for their endless love, support and strong belief in me. This dissertation is dedicated to them.


TABLE OF CONTENTS

                                                                          page

ACKNOWLEDGMENTS .......... 4
LIST OF TABLES .......... 7
ABSTRACT .......... 10

CHAPTER

1 INTRODUCTION .......... 12
    Description of Brain Machine Interfaces .......... 12
    Review of the Approaches in Spike Domain .......... 13
        Spike Sorting: Preprocessing Neural Activities .......... 14
        Spike-Based Association Analysis .......... 15
        Spike-Based Modeling .......... 17
            Encoding analysis .......... 18
            Decoding algorithms .......... 21
    Outline .......... 25

2 PROBABILISTIC APPROACH FOR POINT PROCESS .......... 30
    Sequential State Estimation Problem: Pros and Cons .......... 30
    Review of the Previous Probabilistic Approaches .......... 31
    Adaptive Algorithms for Point Processes .......... 32
        Adaptive Filtering for Point Processes with Gaussian Assumption .......... 33
        Monte Carlo Sequential Estimation for Point Processes .......... 35
    Simulation of Monte Carlo Sequential Estimation on Neural Spike Train Decoding .......... 40
    Interpretation .......... 44

3 INFORMATION THEORETICAL ANALYSIS OF INSTANTANEOUS MOTOR CORTICAL NEURON ENCODING .......... 48
    Experimental Setups .......... 48
        Data Recording .......... 48
        Simulation vs. In Vivo Recordings .......... 50
    Review of Tuning Analysis .......... 51
    Visual Inspection of a Tuning Neuron .......... 55
    Metric for Tuning .......... 55
        Tuning Depth .......... 56
        Information Theoretic Tuning Metric .......... 57
        Simulated Neural Recordings .......... 59
        In Vivo Neural Recordings .......... 63
    Information Theoretical Neural Encoding .......... 64
        Instantaneous Tuning Function in Motor Cortex .......... 64


        Information Theoretic Delay Estimation .......... 69
        Instantaneous vs. Windowed Tuning Curves .......... 71
        Instantaneous vs. Windowed Encoding .......... 73
    Discussion .......... 75

4 BRAIN MACHINE INTERFACES DECODING IN SPIKE DOMAIN .......... 89
    The Monte Carlo Sequential Estimation Framework for BMI Decoding .......... 89
    Monte Carlo SE Decoding Results in Spike Domain .......... 94
    Parameter Study for Monte Carlo SE Decoding in Spike Domain .......... 98
    Synthesis Averaging by Monte Carlo SE Decoding in Spike Domain .......... 100
    Decoding Results Comparison Analysis .......... 104
        Decoding by Kalman .......... 105
        Decoding by Adaptive Point Process .......... 106
            Exponential tuning .......... 106
            Kalman point process .......... 108
        Performance Analysis .......... 109
            Nonlinear & non-Gaussian vs. linear & Gaussian .......... 110
            Exponential vs. linear vs. LNP in encoding .......... 113
            Training vs. testing in different segments: nonstationary observation .......... 114
            Spike rates vs. point process .......... 115
    Monte Carlo SE Decoding in Spike Domain Using a Neural Subset .......... 117
        Neural Subset Selection .......... 118
        Neural Subset vs. Full Ensemble .......... 119

5 CONCLUSIONS AND FUTURE WORK .......... 138
    Conclusions .......... 138
    Future Work .......... 152

LIST OF REFERENCES .......... 161

BIOGRAPHICAL SKETCH .......... 169


LIST OF TABLES

Table                                                                     page

2-1  Comparison results of all algorithms with different Qk .......... 45
3-1  Assignment of the sorted neural activity to the electrodes .......... 77
3-2  The statistical similarity results comparison .......... 79
3-3  The comparison of percentage of Monte Carlo results in monotonically increasing .......... 79
4-1  The kinematics reconstructions by Monte Carlo SE for a segment of test data .......... 121
4-2  Averaged performance by Monte Carlo SE of the kinematics reconstructions for a segment of test data .......... 123
4-3  Statistical performance of the kinematics reconstructions using 2 criteria .......... 123
4-4  Results comparing the kinematics reconstructions averaged among Monte Carlo trials and synthetic averaging .......... 126
4-5  Statistical performance of the kinematics reconstructions by Monte Carlo SE and synthetic averaging .......... 127
4-6  Results comparing the kinematics reconstruction by Kalman PP and Monte Carlo SE for a segment of data .......... 127
4-7  Statistical performance of the kinematics reconstructions by Kalman PP and Monte Carlo SE (synthetic averaging) .......... 130
4-8  Statistical performance of the kinematics reconstructions by different encoding models .......... 130
4-9  Statistical performance of the kinematics reconstructions by Kalman filter and Kalman PP .......... 133
4-10 Statistical performance of the kinematics reconstructions by spike rates and by point process .......... 133
4-11 Statistical performance of the kinematics reconstructions by neuron subset and full ensemble .......... 135
5-1  Results of the kinematics reconstructions by Kalman and dual Kalman for a segment of test data .......... 159


LIST OF FIGURES

Figure                                                                    page

1-1  Brain machine interface paradigm .......... 29
2-1  The desired velocity generated by a triangle wave with Gaussian noise .......... 45
2-2  The simulated neuron spike train generated by an exponential tuning function .......... 45
2-3  The velocity reconstruction by different algorithms .......... 46
2-4  p(vk | Nk) at different times .......... 46
3-1  The BMI experiments of the 2D target reaching task. The monkey moves a cursor (yellow circle) to a randomly placed target (green circle), and is rewarded if the cursor intersects the target .......... 77
3-2  Tuning plot for neuron 72 .......... 77
3-3  A counterexample of neuron tuning evaluated by tuning depth. The left plot is a tuning plot of neuron 72 with tuning depth 1. The right plot is for neuron 80 with tuning depth 0.93 .......... 78
3-4  The conditional probability density estimation .......... 78
3-5  The average tuning information across trials by different evaluation .......... 79
3-6  Traditional tuning depth for all the neurons computed from three kinematics .......... 80
3-7  Information theoretic tuning depth for all the neurons computed from 3 kinematics plotted individually .......... 81
3-8  Block diagram of the Linear-Nonlinear-Poisson model .......... 82
3-9  Sketch map of the time delay between the neuron spike train (bottom plot) and the kinematics response (upper plot) .......... 82
3-10 The conditional probability density estimation .......... 83
3-11 Mutual information as a function of time delay for 5 neurons .......... 83
3-12 Nonlinearity estimation for neurons .......... 84
3-13 Correlation coefficient between the nonlinearity calculated from windowed kinematics and the instantaneous kinematics with optimum delay .......... 86
3-14 Comparison of encoding results by instantaneous modeling and windowed modeling .......... 87


3-15 Comparison of encoding similarity by instantaneous modeling and windowed modeling across kernel size .......... 88
4-1  Schematic of the relationship between encoding and decoding processes for Monte Carlo sequential estimation of point processes .......... 121
4-2  The posterior density of the reconstructed kinematics by Monte Carlo SE .......... 122
4-3  The reconstructed kinematics for the 2-D reaching task .......... 123
4-4  Linear model error using different .......... 124
4-5  cdf of the noise distribution using different density .......... 125
4-6  Nonlinearity of neuron 72 using different .......... 125
4-7  Decoding performances by different xn .......... 126
4-8  The reconstructed kinematics for a 2-D reaching task .......... 128
4-9  The decoding performance by algorithms in PP for different data sets .......... 131
4-10 Threshold setting for sorted information theoretic tuning depths for 185 neurons .......... 133
4-11 Selected neuron subset (30 neurons) distribution .......... 134
4-12 Statistical performance of reconstructed kinematics by different neuron subsets .......... 136
5-1  The reconstructed kinematics for the 2-D reaching task by Kalman and dual Kalman filter .......... 159
5-2  The tracking of the tuning parameters for the 10 most important neurons in the dual Kalman filter .......... 160


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

POINT PROCESS MONTE CARLO FILTERING FOR BRAIN MACHINE INTERFACES

By

Yiwen Wang

August 2008

Chair: Jose C. Principe
Major: Electrical and Computer Engineering

Brain Machine Interface (BMI) design uses linear and nonlinear models to discover the functional relationship between neural activity and a primate's behavior. The loss of time resolution contained in spike timing cannot be captured in traditional adaptive filtering algorithms and might exclude useful information for the generation of movement. More recently, a Bayesian approach based on the observed spike times modeled as a discrete point process has been proposed. However, it includes the simplifying assumption of a Gaussian distributed state posterior density, which in general may be too restrictive. We propose in this dissertation a Monte Carlo sequential estimation framework as a probabilistic approach to reconstruct the kinematics directly from the multi-channel neural spike trains. Sample states are generated at each time step to recursively evaluate the posterior density more accurately. The state estimation is obtained easily by reconstructing the posterior density with Parzen kernels to obtain its mean (called collapse). This algorithm is systematically tested in a simulated neural spike train decoding experiment and then on BMI data. Implementing this algorithm in BMI requires knowledge of both neuronal representation (encoding) and movement decoding from spike train activity. Due to the on-line nature of BMIs, an instantaneous encoding estimation is necessary, which is different from the current models using time windows. We investigated an information


theoretic technique to evaluate a neuron's tuning, the functional relationship between the instantaneous kinematic vector and neural firing in the motor cortex, by a parametric linear-nonlinear-Poisson model. Moreover, mutual information is utilized as a tuning criterion to provide a way to estimate the optimum time delay between motor cortical activity and the observed kinematics. More than half (58.38%) of the neurons' instantaneous tuning curves display a 0.9 correlation coefficient with those estimated with the temporal kinematic vector.

With the knowledge gained from tuning analysis encapsulated in an observation model, our proposed Brain Machine Interface becomes a problem of state sequential estimation. The kinematics is directly reconstructed from the state of the neural spike trains through the observation model. The posterior density estimated by Monte Carlo sampling modifies the amplitude of the observed discrete neural spiking events by the probabilistic measurement. To deal with the intrinsic spike randomness in online modeling, synthetic spike trains are generated from the intensity function estimated from the neurons and utilized as extra model inputs in an attempt to decrease the variance in the kinematic predictions. The performance of the Monte Carlo Sequential Estimation methodology augmented with this synthetic spike input provides further improved reconstruction. The current methodology assumes a stationary tuning function of neurons, which might not be true. The effect of the tuning function non-stationarity was also studied by testing the decoding performance in different segments of data. The preliminary results on tracking the non-stationary tuning function by a dual Kalman structure indicate a promising avenue for future work.
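The Monte Carlo sequential estimation with Parzen-kernel collapse summarized above can be illustrated in a toy setting. The following is a minimal sketch, not the dissertation's implementation: it assumes a one-dimensional random-walk velocity state, an exponential tuning function lambda = exp(mu + beta*v) for each simulated neuron, and illustrative constants throughout.

```python
# Illustrative sketch of Monte Carlo sequential (particle) estimation for
# point-process observations. Assumptions (not from the dissertation): a 1-D
# random-walk velocity state and exponential tuning lambda = exp(mu + beta*v)
# per neuron; all constants below are made up for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
T, N, M, dt, sigma_v = 200, 500, 10, 0.01, 0.05   # steps, particles, neurons
betas = rng.uniform(-2.0, 2.0, M)                  # per-neuron tuning slopes
mus = rng.uniform(1.5, 2.5, M)                     # per-neuron baselines

# Simulate a "true" velocity and Bernoulli spike observations per time bin
v_true = np.cumsum(sigma_v * rng.standard_normal(T))
p_true = np.exp(mus[None, :] + np.outer(v_true, betas)) * dt
spikes = rng.random((T, M)) < p_true

particles = np.zeros(N)
v_hat = np.empty(T)
for k in range(T):
    # 1) Propagate samples through the kinematic (state) model
    particles = particles + sigma_v * rng.standard_normal(N)
    # 2) Weight each sample by the point-process likelihood of this bin's
    #    spike/no-spike observations across neurons (log domain for safety)
    p = np.clip(np.exp(mus[None, :] + np.outer(particles, betas)) * dt,
                1e-10, 1 - 1e-10)
    logw = np.where(spikes[k], np.log(p), np.log1p(-p)).sum(axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # 3) "Collapse": the posterior rebuilt with symmetric Parzen kernels
    #    centered on the samples has the weighted sample mean as its mean
    v_hat[k] = np.dot(w, particles)
    # 4) Resample to combat weight degeneracy
    particles = rng.choice(particles, size=N, p=w)

print("corr(true, estimate) =", float(np.corrcoef(v_true, v_hat)[0, 1]))
```

Because the posterior is represented by weighted samples rather than a mean and covariance, this scheme does not require the Gaussian posterior assumption of the point-process analogue of the Kalman filter, which is the comparison made throughout the dissertation.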


CHAPTER 1
INTRODUCTION

Description of Brain Machine Interfaces

Brain-Machine Interfaces (BMIs) exploit the spatial and temporal structure of neural activity to directly control a prosthetic device. The early work in the 1980s by Schmidt [1980], and Georgopoulos, Schwartz and colleagues [1986], first described the concepts, application and design of BMI as an engineering interface to modulate the motor system by neural firing patterns. Two decades later, several research groups have designed experimental paradigms to implement the ideas for Brain Machine Interfaces [Wessberg et al., 2000; Serruya et al., 2002]. These are illustrated in Figure 1-1.

In this framework [Wessberg et al., 2000; Serruya et al., 2002], neuronal activity (local field potentials and single unit activity) has been synchronously collected from microelectrode arrays implanted into multiple cortical areas while animals and humans have performed 3-D or 2-D target-tracking tasks. Several signal-processing approaches have been applied to extract the functional relationship between the neural recordings and the animals' kinematic trajectories [Wessberg et al., 2000; Sanchez et al., 2002b; Kim et al., 2003; Wu et al., 2006; Brockwell et al., 2004]. The models predict movements and control a prosthetic robot arm or computer to implement them. Many decoding methodologies use binned spike trains to predict movement based on linear or nonlinear optimal filters [Wessberg et al., 2000; Sanchez et al., 2002b; Kim et al., 2003]. These methods avoid the need for explicit knowledge of the neurological dynamic encoding properties, and standard linear or nonlinear regression is used to fit the relationship directly into the decoding operation. Yet another methodology can be derived probabilistically using a state model within a Bayesian formulation [Schwartz et al., 2001; Wu et al., 2006; Brockwell et al., 2004]. From a sequence of noisy observations of the neural activity, the


probabilistic approach analyzes and infers the kinematics as a state variable of the neural dynamical system. The neural tuning property relates the measurement of the neural activity to the animal's behaviors, and builds up the observation measurement model. Consequently, a recursive algorithm based on all available statistical information can be used to construct the posterior probability density function of each kinematic state given the neuron activity at each time step from the prior density of the state. The prior density in turn becomes the posterior density of the previous time step, updated with the discrepancy between an observation model and the neuron firings. Movements can be recovered probabilistically from the multi-channel neural recordings by estimating the expectation of the posterior density or by maximum a posteriori.

Review of the Approaches in Spike Domain

The mathematical model in Brain Machine Interfaces requires the application of signal processing techniques to functionally approximate the relationship between neural activity and kinematics, such as spike sorting, association analysis between neurons, and neuron encoding/decoding algorithms. Adaptive signal processing is a well-established engineering domain to analyze the temporal evolution of system characteristics [Haykin, 2002]. Traditional adaptive processing requires continuous measurement of signals using tools such as the Wiener filter, least squares algorithm, and Kalman filter. Early BMI research frequently employed a binning process to analyze and develop algorithms to obtain the neural firing rate as a continuous signal. This binning process conceals the randomness of neural firing behaviors, and the binning window size is always a concern. In Brain Machine Interfaces, neural activity and plasticity are characterized by spike trains. The loss of time resolution for true neuron activities might exclude information useful for movement generation. Thus an analysis of the spike domain is necessary for this specific application of BMI.
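The prediction/update recursion described above can be sketched with a toy grid-based decoder operating directly on a binary spike sequence. The exponential tuning curve, the random-walk state model, and every constant below are illustrative assumptions, not the models used in this dissertation:

```python
import numpy as np

def bayes_decode(spikes, grid, sigma=0.1, mu=5.0, beta=1.0, dt=0.01):
    """Toy grid-based recursive Bayesian decoding of a 1-D kinematic
    state from a binary spike sequence (one 0/1 observation per bin)."""
    # Gaussian random-walk transition kernel on the grid (system model)
    K = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / sigma) ** 2)
    K /= K.sum(axis=1, keepdims=True)
    post = np.full(grid.size, 1.0 / grid.size)   # uniform initial prior
    lam = mu * np.exp(beta * grid)               # assumed exponential tuning
    est = []
    for s in spikes:
        prior = K.T @ post                        # prediction step
        post = prior * (lam * dt) ** s * np.exp(-lam * dt)  # Poisson bin likelihood
        post /= post.sum()                        # update step
        est.append(float(grid @ post))            # posterior mean
    return np.array(est)
```

Here the posterior is carried on a discrete grid, which is tractable only for very low-dimensional states; the Monte Carlo approach developed later in the dissertation replaces the grid with random samples.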


Spike Sorting: Preprocessing Neural Activities

In BMI neurophysiologic recordings, extracellular neural action potentials are recorded with multiple electrodes representing the simultaneous electrical activity of a neuronal population. To identify the action potentials of each neuron, the multi-channel data processing for spike train analysis or decoding in BMIs starts with a spike sorting step. Most commonly, an action potential is detected by imposing a threshold on the amplitude of the amplified signal, thereby generating a pulse every time an action potential occurs. However, this method is subject to failure due to noise contamination and spike overlapping, and there may not be a single threshold suitable for all conditions of interest. Previous research introduced many algorithms to analyze spike shape features and to perform spike sorting by classifying multiple spike shapes at the same time [Lewicki, 1998]. Clustering provides a simple way to organize spikes by their shape, but also has an unfortunate trade-off between false positives and missed spikes. Clustering in Principal Component Analysis space avoids the noise problem and separates the different spike shapes according to the primary, or more robust, components. Template-based Bayesian clustering quantifies the certainty of the spike classification by computing the likelihood of the data given a particular class. Fee et al. [1996] developed an approach to choose the number of classes for Bayesian clustering guided by the histogram of the interspike intervals. An optimal filter-based method, based on the assumption of accurate estimation of the spike shapes and noise spectrum [Gozani & Miller, 1994], was also proposed to discriminate the spikes from each other and the background noise. These methods remain unable to cope with overlapping spikes. Neural networks, however, showed improved performance by providing more general decision boundaries [Chandra & Optican, 1997]. Multi-channel recording of neuron activity resulted in the ability to discriminate overlapping spikes. Independent Component Analysis (ICA) was successfully used for multi-channel spike sorting [Makeig et al., 1997; McKeown et


al., 1998]. ICA has a strong assumption that each channel should be regarded as one signal source and that all sources are mixed linearly. Although a significant body of work has addressed spike detection/sorting algorithms, the problem is far from solved. The major shortcomings are (1) the assumption of stationary spike shapes across the experiment, which disregards electrode drift; (2) the assumption of stationary background noise; and (3) the necessity of proper spike alignment techniques for overlapping action potentials. The accuracy of the spike detection/sorting techniques directly affects the prediction results of BMIs, but to what level this occurs is unknown. Sanchez [2005] showed that the results of linear models using unsorted spike data differ little from those using sorted spikes in simple movement prediction, although sorting may matter more for complex movement prediction.

Spike-Based Association Analysis

The most common methods for spike train analysis are based on histograms, which require the assumption of stationary parameters. The association among multi-neural spike trains can be analyzed with and without neural stimulation. The functional relationship between neural spikes and local field potentials can also be analyzed based on pre-stimulus patterns.

Brody [1999] proposed the unnormalized cross-correlogram (cross-covariance) to measure the pair-wise association between two binned spike rates over different time lags, but this method lacks time resolution. The cross-intensity function [Brillinger, 1992], a similar concept, measures the spike rate of one neuron when another neuron fires a spike, and it preserves the temporal resolution. To quantify the association among more than two neurons in an ensemble (i.e., the presence of spatiotemporal patterns), two statistical approaches to parameterize these interactions have been introduced: (1) coefficients of log-linear models, and (2) a Bayesian approach for inferring the existence or absence of interactions, and an estimation of the strength of those interactions


[Martignon et al., 2000]. A data-mining algorithm, originally developed to analyze the generation of interictal activity in EEG recordings [Bourien et al., 2005], was also applied to automatically extract co-activated neurons. This method provided the statistical evidence for the existence of neuron subsets based on the stationary characteristics of neural activities. The automatic extraction of neuron subsets needs long data segments in order to be useful; an online realization has yet to be developed.

Another technique for the association analysis between neurons, appropriate when a stimulus is present, is the Joint-Peri-Stimulus-Time-Histogram (JPSTH) [Gerstein & Perkel, 1969], which extends the concept of the PSTH for a single neuron [Abeles, 1982]. The JPSTH is the joint histogram between two spike trains, and describes the joint pdf of the synchrony when a stimulus occurs. The computation is based on the null hypothesis that the spike trains are realizations of independent Poisson point processes, and as such are independent. The neuron response to the stimulus is assumed statistically stationary.

The association analysis between spike firings and local field potentials (LFP) has also been investigated in terms of stimulus. Researchers have described the temporal structure in LFPs and spikes, where negative deflections in LFPs were proposed to reflect excitatory, spike-causing inputs to neurons near the electrode [Arieli et al., 1995]. The most appropriate feature detection method explores the correlation between the amplitude modulated (AM) components of the movement-evoked local field potentials and single-unit activities recorded as stimulus at the same electrode across all movement trials [Wang et al., 2006a].
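The lagged cross-covariance that underlies these pairwise association analyses (cf. Brody's unnormalized cross-correlogram) can be sketched as follows; the binning and lag-sign conventions here are illustrative assumptions:

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """Unnormalized cross-covariance between two binned sequences
    (e.g. spike rates) over a symmetric range of integer lags."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag < 0:
            out.append(np.dot(x[:lag], y[-lag:]) / n)   # y shifted forward
        elif lag > 0:
            out.append(np.dot(x[lag:], y[:-lag]) / n)   # x shifted forward
        else:
            out.append(np.dot(x, y) / n)
    return lags, np.array(out)
```

A peak at a nonzero lag indicates that one binned sequence tends to lead the other by that many bins.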
The correlation between pairs of peri-event time histograms (PETH) and movement-evoked local field potentials (mEP) at the same electrode showed high correlation coefficients for some neurons, suggesting that the extracellular dendritic potentials indicate the level of neuronal output. A critical demonstration of


this relationship was the process of averaging the LFP and single unit activity across the lever press trials, thus reducing the noise contamination caused by the random realization of unmodeled spontaneous brain activities. More work is needed toward reducing noise contamination.

All of the above histogram-based methods can be considered empirically as approximations to the probability density, and information theoretic measures can be introduced into each method. The information theoretic calculation for spike trains uses milliseconds, which, as the minimum time scale determined to contain information [Borst & Theunissen, 1999], is the limiting spike timing precision. Entropy was proposed to quantify the information carried by the spike arrival times [Strong et al., 1998]. Mutual information can be used to measure the pair-wise neural train association, the statistical significance conveyed by the neuron responding to the stimulus [Nirenberg et al., 2001], and the evaluation of the independence and redundancy of nearby cortical neuron recordings [Reich et al., 2001]. The information theoretic calculation can be performed directly on the neural activity, but the operation needs enough data to ensure that the histogram-based analysis performs well. Mutual information summarizes the relationship between multiple spike trains and the neural response to a biological stimulus, but in only a scalar quantity, which does not describe the complicated relationship as well as modeling does.

Spike-Based Modeling

In addition to determining stimulus-response association through a statistical analysis of the neural spike train, researchers also investigated parametric probability modeling using the likelihood method to estimate point process properties. A good model is an optimal way to theoretically predict and analyze the underlying dynamics of neural spike generation.
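A minimal plug-in estimate of the mutual information between, say, a stimulus label and a binned spike count can be sketched as follows; the binning choice is an assumption, and, as noted above, such histogram-based estimators need enough data to be reliable:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information (in bits) between two discretized
    sequences, estimated from their joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                                  # joint pmf
    px = pxy.sum(axis=1, keepdims=True)               # marginal of x
    py = pxy.sum(axis=0, keepdims=True)               # marginal of y
    nz = pxy > 0                                      # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

For two identical binary sequences the estimate equals the one-bit entropy of the source, while independent sequences give a value near zero (up to sampling bias).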
A simple inhomogeneous Poisson process has been used most frequently to model both the simulation and


quantification of the neural activity analysis of a single spike train [Tuckwell, 1988; Rieke et al., 1997; Gabbiani & Koch, 1998; Reich et al., 1998]. This model is particularly appealing because it can explicitly describe neuron spiking as a simple analytical Poisson process [Brown et al., 1998; Zhang et al., 1998]. The inhomogeneous Poisson model cannot, however, completely describe the behavior of a neuron with a multimodal interspike interval distribution [Gabbiani & Koch, 1998; Shadlen & Newsome, 1998]. Non-Poisson spike train probabilistic models have been studied under the assumption that a neuron fires probabilistically, but the model depends on the experimental time and the elapsed time since the previous spike [Kass & Ventura, 2001]. Additionally, dependencies between multiple spike trains were analyzed through the pair-wise interactions among the ensemble of neurons, where the firing rate in the inhomogeneous Poisson model was modeled as a function of the inhibitory and excitatory interaction history of nearby neurons [Okatan et al., 2005]. Truccolo et al. [2005] proposed a similar analysis as a statistical framework, based on the point process likelihood function, to relate the neuron spike probability to the spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. All of these parametric modeling methods provided a coherent framework to understand neural behavior, and the base from which to statistically apply mathematical models to study the relationship between spike patterns of ensembles of neurons and an external stimulus or biological response: the encoding, which characterizes the neural spike activity as a function of the stimulus, and the decoding, which infers the biological response from the neural spikes.

Encoding analysis

The neural code refers to how a neuron represents behavioral responses or how it responds to a stimulus. The parameterization of a tuning function requires an understanding of three interconnected aspects: 1) What is the behavior/stimulus? 2) How does the neuron encode it? 3)


What is the criterion for quantifying the quality of the response? The tuning curve was measured initially as a cosine curve between the stimulus and the response [Georgopoulos et al., 1989], using mainly static stimuli to discriminate between the stimuli based on neural responses.

For neurons located in the motor cortex, researchers first developed static descriptions of movement-related activity by applying electrical stimuli to motor areas to elicit muscle contraction [Fritsch & Hitzig, 1870; Leyton & Sherrington, 1917; Schafer, 1900]. Later, movement direction was correlated with cortical firing in a center-out task, where the tuning function was initially modeled as a cosine curve [Georgopoulos et al., 1982]. The direction at which a cell's discharge rate peaks is called its preferred direction. To quantify the degree of tuning, the tuning depth has been proposed as a metric; it is defined as the difference between the maximum and minimum values of the firing rate, normalized by the standard deviation of the firing rate [Carmena et al., 2003; Sanchez et al., 2003]. As a scalar, the tuning depth summarizes the statistical information contained in the tuning curve to evaluate the neural representation, indicating how modulated the cell's firing rate is by the kinematic parameter of interest. However, this metric has some shortcomings, since it can exaggerate the value of the tuning depth when the standard deviation of the neuron's firing rate is close to 0. Additionally, it depends on the binning window size used to calculate the firing rate of the neuron. The tuning depth also relates to the scale of the behavior/stimulus, which makes the analysis not comparable among neurons, as we will see. A more principled metric, allowing comparisons among neurons and among kinematic variables, is necessary to mathematically evaluate the information encoded by neurons about the kinematic variables. If this is achieved, the new tuning depth metric can be utilized to distinguish the neurons' tuning ability in BMI.
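The tuning depth metric just described reduces to a one-line computation over the firing rates observed across stimulus conditions; note how it diverges as the firing-rate standard deviation approaches zero, which is exactly the shortcoming pointed out above:

```python
import numpy as np

def tuning_depth(rates):
    """Tuning depth of a cell: (max - min) firing rate across conditions,
    normalized by the standard deviation of the rates
    [Carmena et al., 2003; Sanchez et al., 2003]."""
    rates = np.asarray(rates, float)
    return (rates.max() - rates.min()) / rates.std()  # diverges as std -> 0
```

Because both the range and the standard deviation scale with the units of the firing rate, the metric is dimensionless, yet it still depends on the binning window used to compute the rates.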


Besides the scalar description of tuning properties, different models, parameterized by a few parameters, are used to describe the tuning properties of the neurons. However, there is no systematic method to completely characterize how a specific stimulus parameter governs the subsequent response of a given neuron. Linear decoding, proposed by researchers to model the stimulus-response function, has been widely used [Moran & Schwartz, 1999]. The linear filter takes into account the sensitivity to the preferred direction, and the position and speed of the movement, to represent the firing rate in cortical activity [Roitman et al., 2005]. However, linear encoding captures only a fraction of the overall information transmitted, because the neuron exhibits nonlinear behavior with respect to the input signal. Brown et al. [2001] used a Gaussian tuning function for hippocampal pyramidal neurons. Brockwell et al. [2003] assumed an exponential tuning function for their motor cortical data. These nonlinear mathematical models are not optimal for dealing with real data, because the tuned cells can have very different tuning properties. Based on the white noise analysis used to characterize the neural light response [Chichilnisky, 2001], Simoncelli, Paninski et al. [2004] proposed a cascading linear-nonlinear-Poisson model to characterize the neural response to stochastic stimuli. The spike-triggered average (STA) and the spike-triggered covariance (STC) provide the first, linear filter stage in a polynomial series expansion of the tuning function [Paninski, 2003]. This linear filter geometrically directs the high dimensional stimulus to where the statistical moments of the spike-triggered ensemble differ most from the raw signals. The nonlinear transformation of the second stage is estimated by an intuitive nonparametric binning technique [Chichilnisky, 2001] as the fraction of two smoothed histograms. This gives a conditional instantaneous firing rate to the Poisson spike-generating model. The nonlinear stage is then followed by a Poisson generator. This modeling method assumes that the raw stimulus distribution is spherically


symmetric for STA and Gaussian distributed for STC, and that the generation of spikes depends only on the recent stimulus and is historically independent of previous spike times. Both STA and STC fail when the mean or the variance of the spike-triggered ensemble does not differ from the raw ensemble in the direction of the linear filter. As an information-theoretic metric, mutual information was proposed to quantify the predictability of the spike [Paninski & Shoham et al., 2004; Sharpee & Rust et al., 2002]. The multi-linear filters representing the trial directions were found to carry the most information between spikes and stimuli.

The encoding analysis provided a deeper understanding of how neuron spikes respond to a stimulus. This important mathematical modeling holds promise toward providing analytical solutions to the underlying mechanism of neuron receptive fields.

Decoding algorithms

In decoding, the biological response is estimated from the neural spike trains. The initial method, the population vector algorithm, was proposed by Georgopoulos et al. [1986], who studied the preferred direction of each cell as its tuning property. Using this method, the movement direction is predicted by a weighted contribution of all cells' preferred direction vectors. The weights are represented as a function of a cell's binned firing rate. The population vector algorithm demonstrated that effective decoding requires pre-knowledge of the encoding models. A co-adaptive movement prediction algorithm based on the population vector method was developed to track changes in cell tuning properties during brain-controlled movement [Taylor et al., 2002]. Initially random, the estimate of cell tuning properties is iteratively refined as a subject attempts to make a series of brain-controlled movements.

Another decoding methodology uses binned spike trains to predict movement based on linear or nonlinear optimal filters. This method avoids the neurological dynamic encoding model of the neural receptive field, and standard linear or nonlinear regression is used to fit the


relationship directly into the decoding operation. The Wiener filter or a time delay neural network (TDNN) was designed to predict the 3-D hand position using neuronal binned spike rates embedded by a 10-tap delay line [Wessberg et al., 2000]. In addition to this forward model, a recursive multilayer perceptron (RMLP) model was proposed by Sanchez et al. [2002b] and improved with better performance using only relevant neuronal activities [Sanchez et al., 2002a]. Subsequently, Kim et al. [2003] proposed the development of switching multiple linear models combined with a nonlinear network to increase prediction performance in food reaching. Their regression model performed very well in decoding movement prediction. It is difficult to derive the neurological dynamics properties directly from such models; however, this approach is yet another viable method that uses weight coefficients to analyze the active properties of neurons. A bridge is needed to link the performance of the adaptive signal processing methods with the knowledge from the receptive field neuron dynamics. This symbiosis will greatly improve the present understanding of decoding algorithms.

The probabilistic method based on the Bayesian formulation estimates the biological response from the ensemble spike trains. From a sequence of noisy observations of the neural activity, the probabilistic approach analyzes and infers the response as a state variable of the neural dynamical system. The neural tuning property relates the measurement of the noisy neural activity to the stimuli, and builds up the observation measurement model. The probabilistic state space formulation and information updating depend on the Bayesian approach of incorporating information from measurements. A recursive algorithm based on all available statistical information is used to construct the posterior probability density function of the biological response at each time, and in principle yields the solution to the decoding problem. Movements


can be recovered probabilistically from the multi-channel neural recordings by estimating the expectation of the posterior density or by maximum a posteriori.

As a special case, the Kalman filter was applied to BMI, embodying the concepts of neural receptive field properties [Wu et al., 2006]. The Kalman filter assumes strongly that the time-series neural activities are generated from the kinematic stimulus through a linear system, so the tuning function is a linear filter only. Another strong assumption is the Gaussianity of the posterior density of the kinematic stimulus given the neural spiking activities at every time step, which reduces all the richness of the interactions to second order information (mean and covariance). These two assumptions may be too restrictive for BMI applications. The particle filter algorithm was also investigated to recover movement velocities from continuous binned spike data [Brockwell et al., 2004]. The particle filter can provide state estimation for a nonlinear system where the tuning function is assumed to be an exponential operation on linearly filtered velocities [Schwartz, 1992].

All of the above algorithms, when applied to spike rates, are coarse approaches that lose spike timing resolution and may exclude rich neural dynamics. The primary reason for this limitation is that sequential state estimation is normally applied to continuous-valued observations, and cannot be applied directly to discrete point processes. Indeed, when the observation becomes the spike train point process, only the time instance of the spike event matters, without amplitude. Initially, Diggle, Liang and colleagues [1995] mentioned the estimation from point process observations without a specific algorithm. Chan and Ledolter [1995] proposed a Monte Carlo Expectation-Maximization (EM) algorithm using the Markov Chain sampling technique to calculate the expectation in the E-step of the EM algorithm. This method later became the theoretical base from which to derive an EM algorithm for a point process recursive


nonlinear filter [Smith & Brown, 2003]. The algorithm combined the inhomogeneous Poisson model of the point process with the fixed interval smoothing algorithm to maximize the expectation of the complete data log likelihood. In this particular case, the observation process is a point process from an exponential family, and the natural parameter is modeled as a linear function of the latent process.

A general point process adaptive filtering paradigm was recently proposed [Brown et al., 2001] to probabilistically reconstruct a freely running rat's position from the discrete observation of the neural firing. This algorithm modeled the neural spike train as an inhomogeneous Poisson process feeding a kinematic model through a nonlinear tuning function. This approach also embodies the conceptual Bayesian filtering algorithm to predict the posterior density by a linear state update equation and revise it with the next observation measurement. More properties of this algorithm were discussed in Frank et al. [2002], Frank and Stanley et al. [2004], and Suzuki and Brown [2005]. The point process analogues of the Kalman filter, recursive least squares and steepest descent algorithms were derived and compared to decode the tuning parameters and state from the ensemble neural spiking activity [Eden et al., 2004]. In this case, the point process analogue of the Kalman filter performs the best, because it provides a more adjustable step size to update the state, which is estimated from the covariance information. However, the method assumes, incorrectly, that the posterior density of the state vector given the discrete observation is always Gaussian distributed. A Monte Carlo sequential estimation algorithm on point processes was addressed as a probabilistic approach to infer the kinematic information directly from the neural spike train [Wang et al., 2006b]. The posterior density of the kinematic stimulus given the neural spike train was estimated at each time step without the Gaussian


assumptions. The preliminary simulations showed a better velocity reconstruction from the exponentially tuned neural spike train without imposing a Gaussian assumption.

Using any of the probabilistic approaches to derive the kinematic information from the neural activity for the BMI requires pre-knowledge of the neuron receptive properties. In other words, the estimation of the tuning function between a kinematic stimulus and the neural receptive responses, and the good initialization of all the parameters in the algorithm, can directly affect the results of the prediction of the primate's movements in BMI. This is because all the probabilistic approaches are based on the Bayesian formulation, constructing the posterior density at each time step from the prior density of the kinematic state, which is the posterior density of the previous time step. The population vector algorithm hints that an accurate decoding prediction needs the encoding of the neuron tuning property. For the Bayesian approach, knowledge of the prior density, including the good initialization of all the parameters and the form of the tuning functions, is also a key step if we want to probabilistically infer an accurate kinematic estimation from the posterior densities.

Outline

We are interested in building an adaptive signal processing framework for Brain Machine Interfaces working directly in the spike domain. The model will include the stochastic time information of neuron activities, which is different from conventional methods working on binned spike rates. The Bayesian approach will convert the decoding of neural activity required in BMIs into a state-estimation problem. The kinematics are described by a dynamic state model and inferred as a state from the multi-neuron spike train observations, which are connected with the state through the neuron tuning function. A good estimation of the state (decoding) depends on a well educated guess of the tuning property of the neuron (encoding). The schematic is shown in Figure 1-2.
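A minimal sketch of Monte Carlo sequential estimation with a point-process observation is given below; the exponential tuning function, the random-walk state model, and all constants are illustrative assumptions rather than the models identified in this dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_decode(spikes, n=500, sigma=0.05, mu=5.0, beta=1.0, dt=0.01):
    """Monte Carlo (particle) sequential estimation of a 1-D kinematic
    state from a binary spike sequence (one 0/1 observation per bin)."""
    x = rng.normal(0.0, 1.0, n)                  # initial particle cloud
    est = []
    for s in spikes:
        x = x + rng.normal(0.0, sigma, n)        # propagate: random-walk state model
        lam = mu * np.exp(beta * x)              # assumed exponential tuning
        w = (lam * dt) ** s * np.exp(-lam * dt)  # point-process likelihood weight
        w /= w.sum()
        est.append(float(np.sum(w * x)))         # posterior mean ("collapse")
        x = rng.choice(x, size=n, p=w)           # resample to avoid degeneracy
    return np.array(est)
```

Unlike the Kalman-style point process filters, no Gaussian form is imposed on the posterior; the cost is that accuracy depends on the particle count `n`.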


Previous tuning analysis is done with window-based estimation, which maps kinematic information from a segment to one spike; this is not appropriate when the decoding process tries to infer kinematics online from the spike train. Here we develop an instantaneous model for the tuning properties, which builds a one-to-one mapping from the kinematic state to the neuron spike trains. It would be interesting to also compare the instantaneous estimator with the traditional windowed estimator in terms of encoding performance.

We will then implement the Bayesian algorithm to decode the kinematics from spike trains. The non-parametric estimation provides a nonlinear neuron tuning function with no constraints, which goes beyond the Gaussian assumption on the posterior density that is usually made in the previous Bayesian approaches. We are interested in lifting this assumption by designing an algorithm based on Monte Carlo sequential estimation on point processes. In this algorithm, the full information of the posterior density is estimated without Gaussian constraints in order to gain better performance on state estimation, which will unfortunately be paid for with higher computational complexity. The trade-off between the performance and the computational cost will be quantified.

In addition to the interest in the non-Gaussian assumption, we would also like to investigate the stochasticity and the non-stationarity of the neuron behavior in terms of the decoding performance. Due to experimental constraints, only a few neurons are recorded from the motor cortex. To study the effect of the stochasticity intrinsic in single neuron representation of a neural assembly in online modeling, several synthetic spike trains are generated from the intensity function estimated from the neurons and utilized as extra model inputs. The decoding performance is averaged across the realizations in the kinematics domain to reduce the variance of the original spike recordings as a single realization. Lastly, the non-stationarity of the neuron
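Generating synthetic spike trains from an estimated intensity function can be sketched with a small-bin Bernoulli approximation to an inhomogeneous Poisson process; the intensity values and bin width below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_spikes(intensity, dt=0.01, n_trains=5):
    """Draw independent synthetic spike trains from a common intensity
    function lambda(t): in each small bin, P(spike) ~= lambda * dt."""
    p = np.clip(np.asarray(intensity, float) * dt, 0.0, 1.0)
    return (rng.random((n_trains, p.size)) < p).astype(int)
```

Each row is a new realization of the same underlying firing probability, so averaging decoder outputs over rows reduces the variance contributed by the spiking randomness of a single recorded realization.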


behaviors is studied via the decoding performance on different test data segments with a fixed tuning function. Preliminary results show that a dual Kalman filter approach is able to track the tuning function change in the test data set, which indicates that the non-stationarity of the neuron tuning could promisingly be overcome by a dual decoding structure.

The outline of the dissertation is the following. In Chapter 2, we review the traditional probabilistic approach for adaptive signal processing as a state estimation problem, followed by our newly proposed Monte Carlo sequential estimation for the point process optimum filtering algorithm. This methodology directly estimates the posterior density of the state given the observations. Sample observations are generated at each time to recursively evaluate the posterior density more accurately. The state estimation is obtained easily by collapse, for example, by smoothing the posterior density with Gaussian kernels to estimate its mean. When tested in a one-channel simulated neuron spike train decoding experiment, our algorithm better reconstructs the velocity as compared with the point process adaptive filtering algorithm with the Gaussian assumption. In Chapter 3, we describe the experimental setups for Brain Machine Interfaces and state the differences between the simulation data and real BMI data. The neuron tuning properties are modeled to instantaneously encode the movement information of the experimental primate as the pre-knowledge for Monte Carlo sequential estimation for BMI. This model is also analyzed and compared in detail with the traditional windowed encoding methods. In Chapter 4, the decoding framework for Brain Machine Interfaces is presented directly in the spike domain, followed by kinematics reconstruction results and a performance analysis compared with the adaptive filtering algorithm in the spike domain with different encoding models. The results of synthetic averaging to reduce the variance of the kinematics prediction, and the efforts to reduce the computational complexity by selecting a neuron subset in the decoding


process, are also presented in Chapter 4. Conclusions and future work, including the preliminary results on tracking the non-stationary neuron tuning property with a dual Kalman filter, are described in Chapter 5.


Figure 1-1. Brain machine interface paradigm.

Figure 1-2. Schematic of the relationship between encoding and decoding processes for BMIs (state model, state space, observation model (tuning function), and spike train observations; encoding maps the velocity over time to spikes, and decoding maps the spikes back to velocity).


CHAPTER 2
PROBABILISTIC APPROACH FOR POINT PROCESS

Sequential State Estimation Problem: Pros and Cons

In sequential state estimation, the system state changes over time, with a sequence of noisy measurements observed continuously on the system. The state vector that contains all the relevant information describes the system through a time-series model. Two models are required to analyze and infer the state of a dynamical system: the system model, which describes the evolution of the state with time, and the continuous observation measurement model, which relates the noisy measurements to the state. The probabilistic state space formulation and the updating of information are rooted in the Bayesian approach of incorporating information from measurements. A recursive algorithm based on all available information, including all available statistical information and, in principle, the solution to the estimation problem, is used to construct the posterior probability density function of the state for each observation. Adapting the filter is a two-stage process. The first stage, prediction, uses the system model to predict the posterior probability density of the state given the observation from one measurement to the next; the second stage, updating, revises the predicted posterior probability density based on the latest measurement of the observation. The Kalman filter exemplifies an analytical solution that embodies this conceptual filtering under the assumption that the time-series is created by a linear system and that the posterior density of the state, given the observation at every step, is Gaussian, hence parameterized only by its mean and covariance.

Sequential state estimation can describe the decoding problem in Brain Machine Interfaces. Information on the primate's movements can be regarded as the state, which changes over time through a kinematic dynamic system model. The neuron spike trains functionally encode the kinematic states, and this can be designed as a tuning function. This tuning function acts as the


observation model in the sequential state estimation problem. It probabilistically models the randomness of neuron behavior and characterizes the nonlinear neuron firing properties with respect to the preferred kinematic directions, thereby describing the neuron receptive fields from a neurophysiological point of view. The parameters of the tuning function may also change slowly over time, suggesting a possible investigation of the nonstationary aspects of neuron tuning properties. The Brain Machine Interface then converts the observations of multi-channel neuron spike trains into an inference of the kinematics as the state.

This approach is problematic in BMI because the channels of neuron spike trains are multi-dimensional observations driven by a single state vector. A possible assumption is that all the neuron spike trains are generated independently given the kinematic information, but this may not be true. Another problem with this method is that the probabilistic approach is based on the Bayesian formulation, which constructs the posterior density from the prior recursively. To develop a good estimate of the states, the information describing how the system works must correspond with prior knowledge of the kinematic dynamical system and the neuron tuning function.

Review of the Previous Probabilistic Approaches

In Chapter 1, we reviewed several probabilistic approaches to decode the neural activity that takes place during a primate's movement. The probabilistic methods investigated and applied to BMI by different research groups include the Kalman filter [Wu & Gao et al., 2006] and the particle filter algorithm [Brockwell & Rojas et al., 2004]. Both of these algorithms employ concepts of sequential state estimation.
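The two-stage prediction/update cycle that these sequential estimators share can be written as a short generic skeleton. This is an illustrative sketch only: the `transition` and `likelihood` arguments are placeholders for the application-specific system and observation models, and are not taken from any of the cited methods.

```python
import numpy as np

def bayes_filter_step(samples, weights, transition, likelihood, observation):
    """One predict/update cycle of a sample-based recursive Bayesian estimator.

    Prediction: push the samples through the system model (the role played by
    the Chapman-Kolmogorov equation). Update: reweight each sample by the
    likelihood of the newest measurement, then renormalize.
    """
    # Prediction stage: sample x_k ~ p(x_k | x_{k-1}) via the system model
    predicted = transition(samples)
    # Update stage: revise the predicted density with the latest measurement
    weights = weights * likelihood(observation, predicted)
    weights = weights / weights.sum()
    return predicted, weights
```

For instance, with a random-walk transition and a Gaussian likelihood, `np.dot(weights, samples)` after one step gives the posterior-mean estimate of the state.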
The usefulness of the Kalman filter is limited in that it reduces all the richness of the interactions to second-order information (mean and covariance), because it assumes a linear tuning property and Gaussianity of the posterior density of the movements given the neural spiking activity at every time step. Although the particle filter provides state estimation for a nonlinear system, the tuning function was directly assumed to be


an exponential operation on linearly filtered velocities [Schwartz, 1992]. Both of these algorithms were applied to binned spike data and cannot be directly adapted to discrete point processes. A point process adaptive filtering algorithm was recently proposed by Brown et al. [2001]. In their approach, discrete observations of the neural firing spikes were utilized to probabilistically reconstruct, as the state, the position of a freely running rat in space. This approach also reflects the conceptual Bayesian filtering algorithm: predict the posterior density with a linear state update equation, and then revise it with the next observation measurement. However, given the discrete observations, this method assumes that the posterior density of the state vector is always Gaussian distributed, which may not be the case.

We propose a probabilistic filtering algorithm to reconstruct the state from the discrete observations (the spiking events) by generating a sequential set of samples to estimate the distribution of the state posterior density without the Gaussian assumption. The posterior density is recursively propagated and revised by sequential spike observations over time. The state at each time is determined by the maximum a posteriori estimate or by the expectation of the posterior density, inferred by collapsing the mixture of Gaussian kernels used to estimate the posterior density. The algorithm is described in the next section, followed by an illustration of its performance in a simulated neuron decoding example and a comparison to probabilistic velocity reconstruction under a Gaussian assumption on the posterior density.

Adaptive Algorithms for Point Processes

In this section, we review the design of adaptive filters for point processes under the Gaussian assumption, and then introduce our method, a Monte Carlo sequential estimation, to probabilistically reconstruct the state from discrete (spiking) observation events.


Adaptive Filtering for Point Processes with Gaussian Assumption

One can model a point process using a Bayesian approach to estimate the system state by evaluating the posterior density of the state given the discrete observations [Eden & Frank et al., 2004]. This framework provides a nonlinear time-series probabilistic model between the state and the spiking events [Brown et al., 1996]. Given an observation interval $(0, T]$, the number $N(t)$ of events (spikes) can be modeled as a stochastic inhomogeneous Poisson process characterized by its conditional intensity function $\lambda(t \mid x(t), \theta(t), H(t))$ (i.e., the instantaneous rate of events), defined as

$$\lambda(t \mid x(t), \theta(t), H(t)) = \lim_{\Delta t \to 0} \frac{\Pr(N(t + \Delta t) - N(t) = 1 \mid x(t), \theta(t), H(t))}{\Delta t} \quad (2\text{-}1)$$

where $x(t)$ is the system state, $\theta(t)$ is the parameter of the adaptive filter, and $H(t)$ is the history of all the states, parameters, and discrete observations up to time $t$. The relationship between the Poisson process, the state $x(t)$, and the parameter $\theta(t)$ is a nonlinear model represented by

$$\lambda(t \mid x(t), \theta(t)) = f(x(t), \theta(t)) \quad (2\text{-}2)$$

With the nonlinear function $f(\cdot)$ assumed known or specified according to the application, let us consider hereafter the parameter $\theta(t)$ as part of the state vector $x(t)$. Given a binary observation event $\Delta N_k$ over the time interval $(t_{k-1}, t_k]$, the posterior density of the whole state vector $x(t)$ at time $t_k$ can be represented by Bayes' rule as

$$p(x_k \mid \Delta N_k, H_k) = \frac{p(\Delta N_k \mid x_k, H_k)\, p(x_k \mid H_k)}{p(\Delta N_k \mid H_k)} \quad (2\text{-}3)$$

where $p(\Delta N_k \mid x_k, H_k)$ is the probability of observing spikes in the interval $(t_{k-1}, t_k]$, considering the Poisson process


$$\Pr(\Delta N_k \mid x_k, H_k) = \left(\lambda(t_k \mid x_k, H_k)\,\Delta t\right)^{\Delta N_k} \exp\!\left(-\lambda(t_k \mid x_k, H_k)\,\Delta t\right) \quad (2\text{-}4)$$

and $p(x_k \mid H_k)$ is the one-step prediction density given by the Chapman-Kolmogorov equation as

$$p(x_k \mid H_k) = \int p(x_k \mid x_{k-1}, H_k)\, p(x_{k-1} \mid \Delta N_{k-1}, H_{k-1})\, dx_{k-1} \quad (2\text{-}5)$$

where the state $x_k$ evolves according to the linear relation

$$x_k = F_k x_{k-1} + \varepsilon_k \quad (2\text{-}6)$$

Here $F_k$ establishes the dependence on the previous state, and $\varepsilon_k$ is zero-mean white noise with covariance $Q_k$. Substituting Equations 2-4 and 2-5 into 2-3, the posterior density of the state $p(x_k \mid \Delta N_k, H_k)$ can be recursively estimated from the previous one based on all the spike observations. Assuming that the posterior density given by Equation 2-3 and the noise term $\varepsilon_k$ in the state evolution Equation 2-6 are Gaussian distributed, the Chapman-Kolmogorov Equation 2-5 becomes a convolution of two Gaussians, from which the estimation of the state at each time step has a closed-form expression (see [Eden et al., 2004] for details):

$$x_{k|k-1} = F_k x_{k-1|k-1} \quad (2\text{-}7a)$$

$$W_{k|k-1} = F_k W_{k-1|k-1} F_k^T + Q_k \quad (2\text{-}7b)$$

$$\left(W_{k|k}\right)^{-1} = \left(W_{k|k-1}\right)^{-1} + \left[\left(\frac{\partial \log \lambda}{\partial x_k}\right)^{\!T} [\lambda \Delta t] \left(\frac{\partial \log \lambda}{\partial x_k}\right) - (\Delta N_k - \lambda \Delta t)\,\frac{\partial^2 \log \lambda}{\partial x_k\, \partial x_k^T}\right]_{x_{k|k-1}} \quad (2\text{-}7c)$$

$$x_{k|k} = x_{k|k-1} + W_{k|k} \left[\left(\frac{\partial \log \lambda}{\partial x_k}\right)^{\!T} (\Delta N_k - \lambda \Delta t)\right]_{x_{k|k-1}} \quad (2\text{-}7d)$$

The Gaussian assumption was used initially because it allows one to solve Equation 2-5 analytically and therefore obtain a closed-form solution of Equation 2-3, namely Equation 2-7.
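As a concrete illustration of Equations 2-7a through 2-7d, the following sketch implements one filter step for a scalar state with exponential conditional intensity $\lambda = \exp(\mu + \beta x)$, for which $\partial \log \lambda / \partial x = \beta$ and the second derivative of $\log \lambda$ vanishes, so the last term of Equation 2-7c drops out. The parameter values are illustrative defaults, not values used in the experiments.

```python
import numpy as np

def pp_filter_step(x, W, dN, F=1.0, Q=1e-4, mu=0.0, beta=3.0, dt=1e-3):
    """One step of the point-process adaptive filter (Eqs. 2-7a-d) for a
    scalar state with exponential tuning lam = exp(mu + beta * x)."""
    x_pred = F * x                       # Eq. 2-7a: state prediction
    W_pred = F * W * F + Q               # Eq. 2-7b: variance prediction
    lam = np.exp(mu + beta * x_pred)     # conditional intensity at the prediction
    # Eq. 2-7c: d(log lam)/dx = beta; the second-derivative term is zero here
    W_post = 1.0 / (1.0 / W_pred + beta ** 2 * lam * dt)
    # Eq. 2-7d: the innovation is the observed spike count minus its expectation
    x_post = x_pred + W_post * beta * (dN - lam * dt)
    return x_post, W_post
```

A spike (`dN = 1`) moves the estimate toward higher intensity, while an empty interval (`dN = 0`) nudges it slightly downward.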


Although the above set of equations may seem daunting, each can be interpreted quite easily. First, Equation 2-7a establishes a prediction for the state based on the previous state. Then, Equations 2-7b and 2-7c are used in Equation 2-7d to correct or refine the previous estimate, after which the recurrent process is repeated.

Monte Carlo Sequential Estimation for Point Processes

The Gaussian assumption applied to the posterior distribution in the algorithm just described may not hold in general. Therefore, for discrete observations, a nonparametric approach is developed here that poses no constraints on the form of the posterior density. Suppose that at time instant $k$ the previous system state is $x_{k-1}$. Recall that because the parameter was embedded in the state, we need only estimate the state in the conditional intensity function of Equation 2-1, since the nonlinear relation $f(\cdot)$ is assumed known. Random state samples are generated using Monte Carlo simulation [Carpenter & Clifford et al., 1999] in the neighborhood of the previous state according to Equation 2-6. Then weighted Parzen windowing [Parzen, 1962] with a Gaussian kernel is used to estimate the posterior density. Due to the linearity of the integral in the Chapman-Kolmogorov equation and the weighted sum of Gaussians centered at the samples, we are still able to evaluate the integral directly from the samples. The process is repeated recursively for each time instant, propagating the estimate of the posterior density, and the state itself, based on the discrete events over time. Notice that due to the recursive approach, the algorithm depends not only on the previous observation but also on the entire path of the spike observation events.

Let $\{x_{0:k}^i, w_k^i\}_{i=1}^{N_S}$ denote a random measure [Arulampalam & Maskell et al., 2002] of the posterior density $p(x_{0:k} \mid N_{1:k})$, where $\{x_{0:k}^i, i = 1, \ldots, N_S\}$ is the set of all state samples up to time $k$ with associated normalized weights $\{w_k^i, i = 1, \ldots, N_S\}$, and $N_S$ is the number of samples generated at each time index. Then the posterior density at time $k$ can be approximated by a weighted convolution of the samples with a Gaussian kernel as

$$p(x_{0:k} \mid N_{1:k}) \approx \sum_{i=1}^{N_S} w_k^i\, \kappa(x_{0:k} - x_{0:k}^i) \quad (2\text{-}8)$$

where $N_{1:k}$ represents the spike observation events up to time $k$, modeled by an inhomogeneous Poisson process as described in the previous section, and $\kappa(x - \bar{x})$ is a Gaussian kernel in $x$ with mean $\bar{x}$ and covariance $\Sigma$. By generating samples from a proposal density $q(x_{0:k} \mid N_{1:k})$ according to the principle of importance sampling [Bergman, 1999; Doucet, 1998], which usually assumes dependence on $x_{k-1}$ and $\Delta N_k$ only, the weights can be defined by

$$w_k^i \propto \frac{p(x_{0:k}^i \mid N_{1:k})}{q(x_{0:k}^i \mid N_{1:k})} \quad (2\text{-}9)$$

Here, we assume the importance density obeys the Markov property, such that

$$q(x_{0:k} \mid N_{1:k}) = q(x_k \mid x_{0:k-1}, N_{1:k})\, q(x_{0:k-1} \mid N_{1:k-1}) = q(x_k \mid x_{k-1}, \Delta N_k)\, q(x_{0:k-1} \mid N_{1:k-1}) \quad (2\text{-}10)$$

At each time iteration, the posterior density $p(x_{0:k} \mid N_{1:k})$ can be expressed in terms of the posterior density of the previous iteration as

$$p(x_{0:k} \mid N_{1:k}) = \frac{p(\Delta N_k \mid x_{0:k}, N_{1:k-1})\, p(x_{0:k} \mid N_{1:k-1})}{p(\Delta N_k \mid N_{1:k-1})} = \frac{p(\Delta N_k \mid x_k)\, p(x_k \mid x_{k-1})\, p(x_{0:k-1} \mid N_{1:k-1})}{p(\Delta N_k \mid N_{1:k-1})} \propto p(\Delta N_k \mid x_k)\, p(x_k \mid x_{k-1})\, p(x_{0:k-1} \mid N_{1:k-1}) \quad (2\text{-}11)$$

Substituting Equations 2-10 and 2-11 into Equation 2-9, the weight can be updated recursively as

$$w_k^i \propto \frac{p(\Delta N_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)\, p(x_{0:k-1}^i \mid N_{1:k-1})}{q(x_k^i \mid x_{k-1}^i, \Delta N_k)\, q(x_{0:k-1}^i \mid N_{1:k-1})} = w_{k-1}^i\, \frac{p(\Delta N_k \mid x_k^i)\, p(x_k^i \mid x_{k-1}^i)}{q(x_k^i \mid x_{k-1}^i, \Delta N_k)} \quad (2\text{-}12)$$

Usually the importance density $q(x_k^i \mid x_{k-1}^i, \Delta N_k)$ is chosen to be the prior density $p(x_k^i \mid x_{k-1}^i)$, requiring the generation of new samples from $p(x_k^i \mid x_{k-1}^i)$ by Equation 2-6 as a prediction stage.

After the algorithm runs for a few iterations, a phenomenon called degeneracy may arise, in which all but one sample have negligible weight [Doucet, 1998], implying that a large computational effort is spent updating samples that contribute almost nothing to the estimate of the posterior density. When significant degeneracy appears, resampling is applied to eliminate the samples with small weights and to concentrate on samples with large weights, according to the samples' cumulative distribution function (cdf). In our Monte Carlo sequential estimation of the point process, Sequential Importance Resampling [Gordon & Salmond et al., 1993] is applied at every time index, so that the samples are i.i.d. from the discrete uniform density with weights $w_{k-1}^i = 1/N_S$. The pseudocode of the scheme to resample $\{x_k^i, w_k^i\}_{i=1}^{N_S}$ into $\{x_k^{j*}, w_k^j\}_{j=1}^{N_S}$ is the following [Arulampalam & Maskell et al., 2002]:

    Initialize the cdf: c_1 = 0
    For i = 2 : N_S
        Construct the cdf: c_i = c_{i-1} + w_k^i
    End For
    Start at the bottom of the cdf: i = 1
    Draw a starting point: u_1 ~ U[0, 1/N_S]
    For j = 1 : N_S
        Move along the cdf: u_j = u_1 + (j - 1)/N_S
        While u_j > c_i
            i = i + 1
        End While
        Assign sample: x_k^{j*} = x_k^i
        Assign weight: w_k^j = 1/N_S
    End For

The weights then change proportionally to the likelihood, given by

$$w_k^i \propto p(\Delta N_k \mid x_k^i) \quad (2\text{-}13)$$

where $p(\Delta N_k \mid x_k^i)$ is defined by Equation 2-4. Using Equations 2-6 and 2-13 and the resampling step, the posterior density of the state $x_k$, given the whole path of the observed events up to time $t_k$, can be approximated as

$$p(x_k \mid N_{1:k}) \approx \sum_{i=1}^{N_S} p(\Delta N_k \mid x_k^i)\, \kappa(x_k - x_k^i) \quad (2\text{-}14)$$

Equation 2-14 shows that, given the observation, the posterior density of the current state is modified by the latest probabilistic measurement of the observed spike event, $p(\Delta N_k \mid x_k^i)$, which constitutes the updating stage of the adaptive filtering. Without a closed form for the state estimate, we evaluate the posterior density of the state given the observed spike events, $p(x_k \mid N_{1:k})$, at every step and apply two methods to obtain the state estimate $\tilde{x}_k$. One method is maximum a posteriori (MAP), which picks the sample $x_k^i$ with maximum posterior density. The second method uses the expectation of the posterior density as the state estimate. As we smooth the posterior density by convolving with a


Gaussian kernel, we can easily obtain the expectation $\tilde{x}_k$ and its error covariance $V_k$ by collapse [Wu & Black et al., 2004] (with the likelihood weights normalized to sum to one):

$$\tilde{x}_k = \sum_{i=1}^{N_S} p(\Delta N_k \mid x_k^i)\, x_k^i \quad (2\text{-}15)$$

$$V_k = \sum_{i=1}^{N_S} p(\Delta N_k \mid x_k^i)\, (x_k^i - \tilde{x}_k)(x_k^i - \tilde{x}_k)^T \quad (2\text{-}16)$$

From Equations 2-15 and 2-16, we can see that the next state can be estimated without complex computation; the expectation by collapse is simple and elegant. The major drawback of the algorithm is computational complexity, because the quality of the solution requires many particles $\{x_{0:k}^i, i = 1, \ldots, N_S\}$ to approximate the posterior density. Smoothing the particles with kernels as in Equation 2-14 alleviates the problem, in particular when collapsing is utilized, but the computation is still much higher than calculating the mean and covariance of the PDF under a Gaussian assumption.

We have to point out that both approaches assume we know the state model $F_k$ in Equation 2-6 and the observation model $f(\cdot)$ in Equation 2-2, which are actually unknown in real applications. The state model is normally assumed linear, and its parameters are obtained from the data using least squares. Knowledge of the observation model is very important for decoding (deriving states from observations), because the probabilistic approach based on Bayesian estimation constructs the posterior density of each state given the spike observation at each time step from the prior density of the state. The prior density in turn is the posterior density of the previous time step, updated with the discrepancy between an observation model and the spike event. The observation model basically quantifies how each neuron encodes the kinematic


variables (encoding), and due to the variability of neural responses it should be carefully estimated from a training set for the purpose of Monte Carlo decoding.

Simulation of Monte Carlo Sequential Estimation on Neural Spike Train Decoding

Neurons dynamically change their responses to specific input stimulus patterns through learning, which has been modeled with the help of receptive fields. Neural decoding can be used to analyze receptive field plasticity and to understand how neurons learn and adapt by modeling the tuning function of neuronal responses. In the rat hippocampus, for example, information about spatial movement can be extracted by neural decoding from the activity of simultaneously recorded noisy place cells [Mehta & Quirk et al., 2000; O'Keefe & Dostrovsky, 1971], whose spikes represent the observed events.

In a conceptually simplified motor cortical neural model [Moran & Schwartz, 1999], the one-dimensional velocity can be reconstructed from the neuron spiking events by the Monte Carlo sequential estimation algorithm. This algorithm provides a probabilistic approach to infer the most probable velocity as one of the components of the state. This decoding simulation updates the state estimate and applies it to reconstruct the signal simultaneously, which assumes interdependence between the encoding and decoding, so that the accuracy of the receptive field estimation and the accuracy of the signal reconstruction are reliable.

Let us first explain how the simulated data was generated. The tuning function of the receptive field that models the relation between the velocity and the firing rate is assumed exponential and given by

$$\lambda(t_k) = \exp(\mu + \beta_k v_k) \quad (2\text{-}17)$$

where $\exp(\mu)$ is the background firing rate without any movement and $\beta_k$ is the modulation in firing rate due to the velocity $v_k$. In practice, in the electrophysiology lab, this function is


unknown. Therefore, an educated guess must be made about the functional form, for which the exponential function is widely utilized.

The desired velocity was generated as a frequency-modulated (chirp) triangle wave with added Gaussian noise (variance $2.5 \times 10^{-5}$) at each 1-ms time step, as shown in Figure 2-1. The design of the desired signal enables us to check whether the algorithm can track the linear evolution and the changing frequency of the movement. The background firing rate $\exp(\mu)$ and the modulation parameter $\beta_k$ were set to 1 and 3, respectively, for the whole simulation time of 60 s. A neuron spike is drawn as a Bernoulli random variable with probability $\lambda(t_k)\Delta t$ within each 1-ms time window [Brown et al., 2002]. A realization of a neuron spike train is shown in Figure 2-2. With the exponential tuning function operating on the velocity, we can see that when the velocity is negative there are few spikes, while when the velocity is positive many spikes appear. The problem is to recover from this spike train the desired velocity of Figure 2-1, assuming the Poisson model of Equation 2-17 and one of the sequential estimation techniques discussed.

To implement the Monte Carlo sequential estimation for the point process, we regard both the modulation parameter $\beta_k$ and the velocity $v_k$ as the state $x_k = [v_k\ \ \beta_k]^T$. Here we used 100 samples to initialize the velocity $v_0^i$ and the modulation parameter $\beta_0^i$, with a uniform and a Gaussian distribution, respectively. Note that too many samples would increase the computational complexity dramatically, while an insufficient number of samples would result in a poor description of the non-Gaussian posterior density. The new samples are generated according to the linear state evolution Equation 2-6, where $F_k$ is obtained from the data using least squares for $v_k$ and set to 1 for


$\beta_k$ (implicitly assuming that the modulation parameter does not change very fast). The i.i.d. noise for the velocity state in Equation 2-6 was drawn from the distribution of the error between the true velocity and the linear prediction by $F_k$. The i.i.d. noise for estimating the modulation parameter $\beta_k$ is approximated by a zero-mean Gaussian distribution with variance $Q_k$ (default $10^{-7}$). The kernel size utilized in Equation 2-14 to estimate the maximum of the posterior density (through MAP) obeys Silverman's rule [Silverman, 1981]. Because the spike train is generated according to the Poisson model, there is stochasticity involved. We therefore generated 10 sets of spike trains from the same time series of the firing rate, given by the tuning function Equation 2-17 applied to the desired velocity. The averaged performances, evaluated by the NMSE between the desired trajectory and the model output, are shown in Table 2-1 for different values of the covariance $Q_k$ of the state generation. Notice that the noise variance should be small enough to track the unchanging $\beta_k$ set in the data. In general, if $Q_k$ is too large, the continuity constraint on the generated sample sequence has little effect. If it is too small, this constraint may become too restrictive, and the reconstructed velocity may get stuck in the same position while the real velocity moves away by a distance much larger than $Q_k$.

In order to obtain realistic performance assessments of the different models (maximum a posteriori and collapse), the state estimates $\tilde{v}_k, \tilde{\beta}_k$ for the duration of the trajectory were drawn 10 different times. The best velocity reconstruction is shown in Figure 2-3. The Normalized Mean Square Error (MSE normalized by the power of the desired signal) between the desired trajectory and the model output for the adaptive filtering with Gaussian assumption is 0.3254. The NMSE for sequential estimation by MAP is 0.2352 and by collapse is 0.2140.
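The simulated data generation described above can be sketched as follows. The chirp (frequency sweep) coefficients are our own illustrative choices, since the exact modulation schedule is not specified here; the background rate $\exp(\mu) = 1$ (i.e., $\mu = 0$), $\beta = 3$, the 1-ms Bernoulli spike draw, and the noise variance follow the text.

```python
import numpy as np

def simulate_spikes(T=60.0, dt=1e-3, mu=0.0, beta=3.0, noise_var=2.5e-5, seed=0):
    """Generate a chirp triangle-wave velocity with additive Gaussian noise and
    a spike train drawn as Bernoulli events with probability lam*dt (Eq. 2-17)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    # phase with linearly increasing frequency (illustrative chirp schedule)
    phase = 2.0 * np.pi * (0.1 * t + 0.005 * t ** 2)
    # triangle wave in [-1, 1] recovered from the phase
    velocity = (2.0 / np.pi) * np.arcsin(np.sin(phase))
    velocity += rng.normal(0.0, np.sqrt(noise_var), t.size)
    lam = np.exp(mu + beta * velocity)                 # exponential tuning, Eq. 2-17
    spikes = (rng.random(t.size) < np.clip(lam * dt, 0.0, 1.0)).astype(int)
    return t, velocity, spikes
```

As in Figure 2-2, the resulting spike train is dense where the velocity is positive and sparse where it is negative.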


From Figure 2-3, we can see that, compared with the desired velocity (dash-dotted red line), all the methods obtain close estimates when there are many spikes (i.e., when the velocity is at the positive peaks of the triangle wave). This is because the high likelihood of spikes corresponds to the range of the exponential tuning function where the modulation of the high firing probability is easily distinguished and the posterior density is close to the Gaussian assumption. However, at the negative peaks of the desired velocity, the sequential estimation algorithm (using collapse for the expectation, or MAP) performs considerably better. This is primarily because the modulation of the firing rate is nonlinearly compressed by the exponential tuning function, leading to non-Gaussian posterior densities and thus violating the Gaussian assumption on which the adaptive filtering method relies. Although there is nearly no neuronal representation for negative velocities, and therefore both algorithms infer the new velocity solely from the previous state, the nonparametric estimation of the pdf in the sequential estimation algorithm allows for more accurate inference. As an example, in Figure 2-4A the posterior density at time 6.247 s (when the desired velocity is close to the positive peak) is shown (dotted pink line) to have a Gaussian-like shape, and all the methods provide similar estimates close to the true value (red star). In Figure 2-4B, the posterior density at time 35.506 s (when the desired velocity is close to the negative peak) is shown (dotted pink line) to be non-symmetric with two ripples, and is obviously not Gaussian distributed. The adaptive filtering on the point process under a Gaussian assumption provides a poor estimate (gray dotted line), not only because of its Gaussian assumption but also because the algorithm propagates the poor estimates from previous times, resulting in an accumulation of errors. The velocity estimated by the sequential estimation with collapse, denoted by the blue circle, is the closest to the desired velocity (red star). Notice also that in all cases the tracking performance gets progressively worse as the frequency increases. This is


because the state model is fixed over the whole data set by a linear model, which tracks the velocity state at the average frequency. If a time-variant state model were used on a segment-by-segment basis, we could expect better reconstructions. In summary, Monte Carlo sequential estimation on point processes seems promising for estimating the state from discrete spiking events.

Interpretation

Point process adaptive filtering is a two-step Bayesian approach based on the Chapman-Kolmogorov equation to estimate parameters from discrete observed events. However, the Gaussian assumption on the posterior density of the state given the observations may yield inaccurate state reconstruction, due to the less accurate evaluation of the posterior density. We presented in this chapter a Monte Carlo sequential estimation that weights the observed discrete events by the probabilistic measurement, the posterior density. A sequence of samples is generated to estimate the posterior density more accurately. Through sequential estimation and weighted Parzen windowing, we avoid the numerical computation of the integral in the Chapman-Kolmogorov equation. By smoothing the posterior density with the Gaussian kernel from Parzen windowing, we can collapse it to easily derive the expectation of the posterior density, leading to a better state estimate than the noisy maximum a posteriori. The Monte Carlo estimation shows better capability to probabilistically estimate the state because it approximates the posterior density better than does the point process adaptive filtering algorithm with Gaussian assumption.
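The full decoding loop summarized above, prediction through the linear state model, reweighting by the point-process likelihood, systematic resampling, and collapse to the posterior mean, can be condensed into a minimal sketch. The state-model parameters below are illustrative (a random walk rather than a least-squares fit), and only the velocity is estimated, not the modulation parameter.

```python
import numpy as np

def decode_mc(spikes, mu=0.0, beta=3.0, dt=1e-3, F=1.0, q_std=0.02,
              n_samples=100, seed=0):
    """Monte Carlo point-process decoding of a scalar velocity (sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)              # initial velocity samples
    estimates = np.empty(len(spikes))
    for k, dN in enumerate(spikes):
        x = F * x + rng.normal(0.0, q_std, n_samples)  # prediction (Eq. 2-6)
        lam_dt = np.exp(mu + beta * x) * dt
        w = (lam_dt ** dN) * np.exp(-lam_dt)           # Poisson likelihood (Eq. 2-4)
        w /= w.sum()
        estimates[k] = np.dot(w, x)                    # collapse: posterior mean (Eq. 2-15)
        # systematic resampling so the samples return to uniform weights
        cs = np.cumsum(w)
        cs[-1] = 1.0                                   # guard against round-off
        u = (rng.random() + np.arange(n_samples)) / n_samples
        x = x[np.searchsorted(cs, u)]
    return estimates
```

With spike trains generated from the exponential tuning model, a higher underlying velocity yields a denser spike train and a correspondingly higher decoded estimate.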


Figure 2-1. The desired velocity generated by a triangle wave with Gaussian noise.

Figure 2-2. The simulated neuron spike train generated by an exponential tuning function.

Table 2-1. Comparison results of all algorithms (NMSE) with different $Q_k$.

    Q_k     Adaptive filtering of point process   Sequential estimation (Collapse)   Sequential estimation (MAP)
    10^-5   0.4434                                0.3803                             0.3881
    10^-6   0.3940                                0.3575                             0.3709
    10^-7   0.3583                                0.2956                             0.3252


Figure 2-3. The velocity reconstruction by different algorithms (desired velocity; velocity by sequential estimation, EXP; velocity by sequential estimation, MAP; velocity by adaptive filtering).

Figure 2-4. $p(v_k \mid \Delta N_k)$ at different times. A) At time 6.247 s: the posterior density, the desired velocity, and the estimates by sequential estimation (EXP and MAP) and by adaptive filtering.


Figure 2-4. Continued. B) At time 35.506 s: the posterior density, the desired velocity, and the estimates by sequential estimation (EXP and MAP) and by adaptive filtering.


CHAPTER 3
INFORMATION THEORETICAL ANALYSIS OF INSTANTANEOUS MOTOR CORTICAL NEURON ENCODING

Experimental Setups

In Chapter 2, we presented a Monte Carlo sequential estimation algorithm to reconstruct the continuous state variable directly from point process observations. In the one-neuron spike train decoding simulation, this algorithm provided a better recursive estimate of the state without the Gaussian assumption. Monte Carlo sequential estimation in the spike domain is a promising signal processing tool for decoding the continuous kinematic variables directly from neural spike trains in Brain Machine Interfaces. With this method, the spike binning window size is no longer a concern, as one can directly utilize the spike timing events. The online state estimation is suitable for real-time BMI decoding without the desired signal; however, both the neural activity recordings and the desired trajectories are required to estimate the neuron tuning function. The decoding results of the Monte Carlo estimation can differ between realizations because of the random manner in which samples are generated to construct the posterior density.

Data Recording

The Brain-Machine Interface paradigm was designed and implemented in Dr. Miguel Nicolelis' laboratory at Duke University. Chronic neural ensemble recordings were collected from the brain of an adult female Rhesus monkey named Aurora and synchronized with task behaviors. Several micro-electrode arrays were chronically implanted in five of the monkey's cortical neural structures: right dorsolateral premotor area (PMA), right primary motor cortex (MI), right primary somatosensory cortex (S1), right supplementary motor area (SMA), and left primary motor cortex (MI). Each electrode array consisted of up to 128 microwires (30 to 50 μm in diameter, spaced 300 μm apart), distributed in a 16 × 8 matrix. Each recording site occupied a


total area of 15.7 mm² (5.6 × 2.8 mm) and was capable of recording up to four single cells from each microwire, for a total of 512 neurons (4 × 128) [Sanchez, 2004]. After the surgical procedure, a multi-channel acquisition processor cluster (MAP, Plexon, Dallas, TX) was used in the experiments to record the neuronal action potentials simultaneously. Analog waveforms of the action potentials were amplified and band-pass filtered from 500 Hz to 5 kHz. The spikes of single neurons from each microwire were discriminated based on time-amplitude discriminators and a principal component analysis (PCA) algorithm [Nicolelis et al., 1997; Wessberg et al., 2000]. The firing times of each spike were stored. Table 3-1 shows the assignment of the sorted neural activity to the electrodes for the different motor cortical areas [Kim, 2005].

The monkey performed a two-dimensional target-reaching task, moving a cursor on a computer screen by controlling a hand-held joystick to reach the target (Figure 3-1). The monkey was rewarded when the cursor intersected the target. The corresponding position of the joystick was recorded continuously for an initial 30-min period at a 50 Hz sampling rate, referred to as the pole control period [Carmena & Lebedev et al., 2003].


50 Simulation vs. Vivo Recordings BMI data provides us with 185 neural spike train channels and 2-dimensional movement trajectories for about 30 minutes. Compared to the one-neuron dec oding simulation in Chapter 2, there are big differences. At first glance, it is remarkable that the time resolution for the neural spike train is about a millisecond, while the movement trajectories have a sampling frequency 50Hz. The neural spike trains allow us to more closely observe the tr ue random neural behavior Consequently, however the millisecond scale requires more computational complexity. We must bridge the disparity between the microscopic neural spikes and the macroscopic kinematics. The tuning function provides a basis on which to build a simultaneously functional relationship. In the simulation, we simply assume that the tuning function characterizes the exponentially increasing firing rate conditioned on the velocity. For the real BMI data, is this tuning function still valid and cogent? As presen ted in Chapter 2, our Monte Carlo sequential estimation algorithm works as probabilistic approach directly in the spike domain. The major assumption supporting the entire algorithm is th at we have enough knowledge of both the system model and the observation model. This assumption establishes a reliable base to propagate the posterior density leading to the state estimati on at each time iteration. How can we obtain the knowledge? The work by Georgopoulos and Schwartz et al [1986] provides some guidance. The population coding presented in thei r paper analyzed the individual neural activities tuned broadly to a particular direction. Base d through trials on the weighted distribution of i ndividual neurons to the preferred direction, the direction of movement was found to be uniquely predicted. The principle behind this work is letting the data speak for itself We gain insight into neural tuning properties by analyzing the exis ting neuron and kinematics data. 
This analysis leads to better kinematics decoding from neural activities in the future.
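The population-coding principle referenced above can be sketched as a simple population-vector estimate: each neuron votes for its preferred direction, weighted by how far its firing rate rises above a baseline. This is an illustrative reading of the Georgopoulos et al. [1986] idea, not the authors' code; the function name `population_vector` and the explicit per-neuron baseline are assumptions.

```python
import numpy as np

def population_vector(rates, baselines, preferred_dirs):
    """Population-vector direction estimate.

    rates          : (n_neurons,) instantaneous firing rates
    baselines      : (n_neurons,) each neuron's baseline (mean) rate
    preferred_dirs : (n_neurons,) preferred direction angles in radians
    Each neuron contributes a unit vector along its preferred direction,
    weighted by its rate above baseline; the sum predicts movement direction.
    """
    weights = rates - baselines
    x = np.sum(weights * np.cos(preferred_dirs))
    y = np.sum(weights * np.sin(preferred_dirs))
    return np.arctan2(y, x)
```

With cosine-tuned neurons whose preferred directions tile the circle uniformly, this weighted sum recovers the movement direction exactly, which is the sense in which the population "uniquely predicts" the movement.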


Another issue to resolve is dealing with multi-channel neural spike trains, since there was only one neural channel in the simulation. In the real BMI data, how can we account for the association between channels? In Chapter 1, we reviewed the work done by many researchers in this field, with multiple outcomes. Most of that work focused on the exclusive relationship between neural activities, such as the correlation between neurons characterized by the neural firing, or between neuron microscopic spiking and field potentials. With regard to both external kinematics and neural activities, the neural spike trains of different channels are usually assumed to be conditionally independent given the kinematics. In other words, spike generation is determined once the kinematics and the parameters of the neuron tuning are known. We should emphasize that the assumption of conditional independence does not conflict with the association analysis between neurons. If the firing rates of two neurons are generated independently through two similar tuning functions in a certain time period, similar firing patterns are expected during that period, and the analysis of the correlation between them is still valid.

Review of Tuning Analysis

The probabilistic approach based on Bayesian estimation constructs the posterior density of each kinematic state given the spike trains at each time step from the prior density of the state. The prior density in turn is the posterior density of the previous time step, updated with the discrepancy between an observation model and the spike train. The observation model, linking the measurement of the noisy neural activity to the kinematics, implicitly utilizes the tuning characteristics of each neuron.
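The conditional-independence point can be made concrete with a small simulation: two neurons spike independently given a shared velocity signal, each through its own tuning function, yet their spike trains are still positively correlated because of the common kinematic drive. The exponential tuning shape and all constants here are illustrative assumptions, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50000
# shared kinematic drive: a 1-D velocity trace (hypothetical sinusoid)
v = np.sin(np.linspace(0, 40 * np.pi, n))

# two neurons with similar assumed exponential tuning to the same velocity;
# given v, the Bernoulli spike draws are independent across neurons
p1 = 0.03 * np.exp(2.0 * v)
p2 = 0.03 * np.exp(1.8 * v)
s1 = (rng.uniform(size=n) < p1).astype(float)
s2 = (rng.uniform(size=n) < p2).astype(float)

# marginal correlation between the two spike trains is still positive
corr = np.corrcoef(s1, s2)[0, 1]
```

The positive `corr` arises entirely from the shared modulation by `v`; conditioned on the kinematics, the two spike generators share no randomness, exactly as the text assumes.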
In our newly proposed Monte Carlo sequential estimation algorithm operating directly on point processes [Wang et al., 2006b], the Bayesian approach analyzes and infers the kinematics as a state variable of the neural dynamical system without the constraints of linearity and Gaussianity. Accurate modelling of the neuron tuning properties in the observation model is critical to decode the kinematics by the expectation of the posterior density or by the maximum a posteriori estimate.

The tuning, also called the encoding function, mathematically models how a neuron represents behavioral consequences or how it responds to a stimulus. The parameterization of a tuning function requires an understanding of three interconnected aspects: 1) What is the behavior/stimulus? 2) How does the neuron encode it? 3) What is the criterion for quantifying the quality of the response? For neurons located in the motor cortex, researchers first developed static descriptions of movement-related activity by applying electrical stimuli to motor areas to elicit muscle contraction [Fritsch & Hitzig, 1870; Leyton & Sherrington, 1917; Schafer, 1900]. Later, movement direction was correlated with cortical firing in a center-out task, where the tuning function was initially modelled as a cosine curve [Georgopoulos et al., 1982]. The direction at which the discharge rate of a cell peaks is called its preferred direction. To quantify the degree of tuning, the tuning depth has been proposed as a metric; it is defined as the difference between the maximum and minimum values of the firing rate, normalized by the standard deviation of the firing rate [Carmena et al., 2003; Sanchez et al., 2003]. As a scalar, the tuning depth summarizes the statistical information contained in the tuning curve to evaluate the neural representation, indicating how modulated the cell's firing rate is by the kinematic parameter of interest. However, this metric has some shortcomings, since it can exaggerate the tuning depth when the standard deviation of the neuron's firing rate is close to 0. Additionally, it depends on the binning window size used to calculate the firing rate of the neuron. The tuning depth also depends on the scale of the behavior/stimulus, which makes the analysis not comparable among neurons, as we will see.
A more principled metric, allowing comparisons among neurons and among kinematic variables, is necessary to mathematically evaluate the information encoded by neurons about the kinematic variables. If this is achieved, the new tuning depth metric can be utilized to distinguish the neurons' tuning ability in BMI.

In addition to tuning depth, researchers have also proposed a variety of parametric models to describe motor representation neurons. Linear relationships from motor cortical discharge rate to speed and direction have been constructed [Moran & Schwartz, 1999]. The linear filter took into account the sensitivity to the preferred direction, the position, and the speed of the movement to represent the firing rate in cortical activity [Roitman et al., 2005]. However, linear encoding captures only a fraction of the overall information transmitted, because the neuron exhibits nonlinear behavior with respect to the input signal. Brown et al. [2001] used a Gaussian tuning function for hippocampal pyramidal neurons. Brockwell et al. [2003] assumed an exponential tuning function for their motor cortical data. These nonlinear mathematical models are not optimal for dealing with real data, because the tuned cells can have very different tuning properties. Simoncelli, Paninski, et al. [2004] further improved the linear idea and proposed a Linear-Nonlinear-Poisson (LNP) model that cascades the linear stage with a nonlinear transformation as the second stage, which supplies a conditional instantaneous firing rate to the Poisson spike-generating model at the third stage. In the LNP model, the position or velocity at all relevant times within a temporal window was utilized to extract the information between neuronal activity and the animal's movement trajectories. During a continuous target-tracking task, Paninski et al. [2004b] studied the temporal dynamics of M1 neurons for the position and velocity of hand motion given the firing rate.
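The LNP cascade described above can be sketched in a few lines: a linear projection, a static nonlinearity, and a Bernoulli-per-bin approximation of the inhomogeneous Poisson generator. The exponential nonlinearity and the function name `lnp_simulate` are assumptions for illustration, not the dissertation's code.

```python
import numpy as np

def lnp_simulate(X, k, f, rng=None):
    """Minimal LNP cascade: linear projection -> static nonlinearity ->
    Bernoulli-per-bin approximation of an inhomogeneous Poisson generator.

    X : (T, d) kinematic vectors, one per time bin
    k : (d,) linear filter (preferred direction in kinematic space)
    f : static nonlinearity mapping the projection to a per-bin firing
        probability (must return values below 1 for the approximation to hold)
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = f(X @ k)                      # instantaneous firing probability
    return (rng.uniform(size=len(lam)) < lam).astype(int)
```

For short bins, comparing a uniform draw against the instantaneous probability in each bin is the standard discrete-time approximation of inhomogeneous Poisson spiking.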
The linear filter in the LNP model averages the temporal position or velocity within the window, so it smooths the statistical curves on the stimulus distribution and produces the widely reported exponentially increasing nonlinearity that relates neuronal firing rate to the projected kinematics. Unfortunately, the averaging builds an N-to-1 temporal mapping between the kinematic variables (positions or velocities) and the neural spikes, which negatively impacts our goal of building sequential estimation algorithms. Indeed, sequential models require inferring the estimate of the kinematics from the current neuronal spike times. Therefore, an instantaneous one-to-one functional tuning relationship between the kinematics and the neural activities is needed to decode the kinematics online and to avoid error accumulation within the windowed kinematic vector. Moreover, the analysis of the receptive fields of motor cortex neurons is different from the stimulus-response analysis in sensory cortices, because there is always a time delay between the initiation of the neuron's spiking and the movement response. This delay must be taken into consideration in BMI decoding algorithms. The estimation of instantaneous tuning parameters is more difficult and more prone to errors; therefore, we will have to evaluate how much of the nonlinearity still holds or changes compared to the temporal kinematic vectors.

In the literature, mutual information has been used to differentiate the raw stimulus ensemble from the spike-triggered stimulus distribution [Simoncelli et al., 2004; Sharpee et al., 2002], as well as to estimate the minimal number of delay samples in the temporal kinematics needed to represent the information extracted by the full preferred trajectory of a given cell [Paninski et al., 2004b]. In this chapter, we also apply an information theoretic analysis, but on the instantaneous tuning properties of the motor cortical neurons. We propose mutual information as a tuning depth estimate to analyze the information that neurons in different cortical areas share with the animal's position, velocity, and acceleration. This criterion is first tested on synthetic data and then applied to motor cortex data.
We elaborate how to build an instantaneous tuning function of motor cortical neurons for BMIs. The information theoretic analysis is applied to the projective nonlinear-Poisson encoding analysis to estimate the causal time delay. The nonlinearity of the instantaneous tuning curves is compared to that computed from windowed kinematics.

Visual Inspection of a Tuning Neuron

Neurophysiologic evidence suggests that neurons encode the direction of hand movements with cosine-shaped tuning curves [Georgopoulos et al., 1982]. For each neuron, the polar plot of the neural activity with respect to a kinematic vector, such as hand position, hand velocity, or hand acceleration, is investigated by computing the kinematic direction as an angle between 0 and 360 degrees. Bins of 45 degrees are chosen to coarsely classify the directions into 8 bins. For each direction, the average neuron firing rate obtained by binning defines the magnitude of the vector in a polar plot. For a tuned neuron, the average firing rate is expected to differ markedly across directions. The preferred direction is computed using circular statistics [Jammalamadaka & SenGupta, 1999] as

\bar{\theta} = \arg\Big( \sum_{N} r_N e^{i\theta_N} \Big)    (3-1)

where r_N is the neuron's average firing rate at angle \theta_N, and N runs over all the angle bins. Figure 3-2 shows the polar plot of neuron 72. The direction of each vector on the polar plot indicates the direction of the velocities, and the magnitude of the vector is the average firing rate, marked as a blue circle, for each direction. The computed circular mean, estimated as the firing-rate-weighted direction, is shown as a solid red line on the polar plot. It indicates clearly that neuron 72 fired most frequently toward the preferred direction.

Metric for Tuning

A metric is necessary to evaluate the neural tuning. A comparative analysis between the neural firing and the kinematics based on this metric could provide a better understanding of the neuron's receptive field properties. A metric would also present a way to select the tuned neuron subset that contributes most to movement generation, potentially reducing the decoding complexity. In this section, we review the previous tuning metric and then compare it to our newly proposed tuning metric.

Tuning Depth

The classical metric for evaluating the tuning property of a cell is the tuning depth of the cell's tuning curve. This quantity is defined as the difference between the maximum and minimum values of the cellular tuning, normalized by the standard deviation of the firing rate [Carmena et al., 2003; Sanchez, 2004]:

\mathrm{tuning\ depth} = \frac{\max_N(r_N) - \min_N(r_N)}{\mathrm{std}(\mathrm{firing\ rate})}    (3-2)

The tuning depth is then normalized between 0 and 1 across all the channels, which loses the scale needed for comparisons among different neurons. The normalization in Equation 3-2, used to equalize the firing of different cells, can wrongly evaluate a shallowly tuned neuron as a deeply tuned one when both fire with a small variance; it inaccurately exaggerates the tuning depth when the standard deviation is close to 0. A counterexample using tuning depth as the metric is shown in Figure 3-3. Neuron 72 is plotted on the left and neuron 80 on the right. Neuron 72 fires less in directions other than the preferred one, while neuron 80 barely fires at all in most directions except the preferred one. By visually inspecting the plots, we would infer that neuron 80 is more tuned than neuron 72. However, by the tuning depth metric, neuron 80 was assigned a smaller tuning depth, 0.93, than neuron 72's tuning depth of 1. This is caused by the normalization by the standard deviation of the firing rate, which inaccurately exaggerates the tuning depth for neurons with stable activity (standard deviation close to 0). In fact, a tuning metric that evaluates differences between neuron reactions to kinematics should depend not only upon the mean firing rate in a certain direction, but also on the distribution of the neural spike patterns. Normalizing by the firing rate alone does not appear to be a cogent or effective way to evaluate neuron tuning.

Information Theoretic Tuning Metric

Traditional tuning curves do not intrinsically allow us to measure information content. We have used indirect observational metrics such as the tuning depth, but they are not optimal. An information theoretic tuning depth, as a metric for evaluating a neuron's instantaneous receptive properties, is based on information theory and captures much more of the neuronal response [Paninski et al., 2004b; Wang et al., 2007b]. Define a tuned cell as a cell whose spiking output shares more information with the stimulus direction angle. If a cell is tuned to a certain angle, the well-established concept of mutual information [Reza, 1994] provides a mathematically grounded metric between the neural spikes and the direction angles, given by

I(spk; \theta) = \sum_{spk=0}^{1} \int p(\theta)\, p(spk \mid \theta)\, \log_2 \frac{p(spk \mid \theta)}{p(spk)}\, d\theta    (3-3a)

= -\sum_{spk=0}^{1} p(spk) \log_2 p(spk) + \sum_{spk=0}^{1} \int p(\theta)\, p(spk \mid \theta) \log_2 p(spk \mid \theta)\, d\theta    (3-3b)

where p(\theta) is the probability density of the direction angles, which can be easily estimated by a Parzen window [Parzen, 1962]. The direction angles of the kinematic vectors are evaluated between -\pi and \pi. p(spk) can be calculated simply as the percentage of the spike count during the entire spike train, and p(spk \mid \theta) is the conditional probability density of the spike given the direction angle.
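The information theoretic tuning depth of Equation 3-3 can be estimated from a binary spike train with kernel-smoothed histograms, using Bayes' rule (Equation 3-4) for the conditional density. This is a sketch under stated assumptions, not the dissertation's code: the bin count and a fixed Gaussian bandwidth `bw` are illustrative choices rather than Silverman's rule, and the function name is hypothetical.

```python
import numpy as np

def mi_tuning_depth(theta, spikes, n_bins=60, bw=0.25):
    """Estimate I(spk; theta) in bits from a binary spike train.

    theta  : (T,) movement direction angles in [-pi, pi], one per time bin
    spikes : (T,) binary spike indicators
    The marginal and spike-triggered angle histograms are smoothed with a
    Gaussian kernel (bandwidth bw, radians); their ratio gives p(spk=1|theta).
    """
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    h_all = np.histogram(theta, bins=edges)[0].astype(float)
    h_spk = np.histogram(theta[spikes == 1], bins=edges)[0].astype(float)

    # smooth both histograms with the same Gaussian kernel
    kern = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / bw) ** 2)
    h_all, h_spk = kern @ h_all, kern @ h_spk

    p_theta = h_all / h_all.sum()                         # prior p(theta)
    p1_given = h_spk / np.maximum(h_all, 1e-12)           # p(spk=1|theta), Eq. 3-4
    p1_given = np.clip(p1_given, 1e-12, 1 - 1e-12)
    p1 = float(np.sum(p_theta * p1_given))                # marginal p(spk=1)

    mi = 0.0
    for p_s, p_s_given in ((p1, p1_given), (1 - p1, 1 - p1_given)):
        mi += np.sum(p_theta * p_s_given * np.log2(p_s_given / p_s))
    return mi
```

Because the marginal `p1` is computed from the same smoothed densities, the estimate is a genuine mutual information of the reconstructed joint and is therefore nonnegative; a direction-tuned neuron yields a clearly larger value than an untuned one.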


For each neuron, the conditional probability density p(spk \mid \theta) was estimated directly from the data by an intuitive nonparametric technique [Chichilnisky, 2001; Simoncelli et al., 2004], as the ratio of two kernel-smoothed histograms approximating the marginal p(\theta) and the joint distribution p(spk = 1, \theta). The histogram of the spike-triggered angle is smoothed by a Gaussian kernel with bandwidth set according to Silverman's rule [Silverman, 1981] and normalized to approximate the joint probability p(spk = 1, \theta), depicted as the solid red line in the upper plot of Figure 3-4. In other words, the direction angle is counted in the corresponding histogram bin only when there is a spike. The conditional probability density p(spk = 1 \mid \theta), depicted as the solid blue line in the bottom plot of Figure 3-4, is then approximated by dividing the kernel-smoothed histogram of p(spk = 1, \theta) by the kernel-smoothed histogram of p(\theta) (blue dotted line in the upper plot of Figure 3-4), which is in fact Bayes' rule,

p(spk = 1 \mid \theta) = \frac{p(spk = 1, \theta)}{p(\theta)}    (3-4)

where p(spk = 0 \mid \theta) = 1 - p(spk = 1 \mid \theta). When p(\theta) is 0, p(spk = 1, \theta) is set to 0. Note that because p(spk, \theta) is never greater than p(\theta), this ratio does not share the problem of Equation 3-2. The traditional computation of r_N in the tuning depth, the average firing rate at a certain angle \theta_N, is actually a rough approximation of the nonlinearity in Equation 3-3, because

r_\theta = \frac{1}{M(\theta)} \sum_{i=1}^{M(\theta)} \lambda_i(\theta) \approx \frac{\#spike(\theta)}{\#(\theta)} \approx \frac{p(spk, \theta)}{p(\theta)} = p(spk \mid \theta)    (3-5)

where M(\theta) = \#(\theta) is the total number of samples at angle \theta in the whole data set, \lambda_i(\theta) is the firing rate corresponding to sample i at angle \theta, and \#spike(\theta) is the total number of spike counts when the movement angle is \theta. The conditional probability density p(spk \mid \theta) can be regarded as the nonlinear functional relationship between the instantaneous neuron firing probability and the movement directions. We can see that the traditional tuning depth analysis actually uses only the difference between the maximum and minimum of the nonlinear tuning curve, scaled by the binning window. During the experiment, the monkey very likely will not explore all the possible angles equally, so different prior distributions p(\theta) will result. A uniformly distributed p(\theta) provides the ideal estimation for tuning curves. When there are insufficient data to estimate the accurate shape of p(spk \mid \theta), the traditional tuning depth will certainly be biased. In the experiment, there is no guarantee of data sufficiency; this effect will be tested on synthetic data. The normalization by the standard deviation of the firing rate in Equation 3-2 also raises the concern of the binning window size. The information theoretic tuning depth works directly on the spike train. It takes into account not only the spike nature of the data, as seen in the first term of Equation 3-3b, but also every point of the nonlinearity p(spk \mid \theta) and the prior distribution p(\theta), as shown in the second term of Equation 3-3b.

Simulated Neural Recordings

We first test our information theoretic criterion on synthetic data using a single random realization of the spike train. Three sets of 2-dimensional movement kinematics are generated. The magnitude and direction of the first dataset are uniformly distributed within the ranges [0, 1] and [-\pi, \pi], respectively. The second dataset has uniformly distributed magnitude, while the direction is Gaussian distributed, centered at 2\pi/3 with standard deviation 0.1\pi. The third dataset has Gaussian distributed magnitude centered at 0.7 with standard deviation 0.1, and Gaussian distributed direction centered at 2\pi/3 with standard deviation 0.1\pi. The velocity train is passed through an LNP model with the assumed nonlinear tuning function

\lambda_t = \exp(\beta + \alpha\, \mathbf{v}_t \cdot D_{prefer})    (3-6)

where \lambda_t is the instantaneous firing probability, \beta is the background firing rate, and \alpha is the modulation factor for a certain preferred direction, represented by a unit vector D_{prefer}. The spike train is generated by an inhomogeneous Poisson spike generator once \lambda_t is known. We generate each velocity dataset at a 100 Hz sampling frequency with 100 sec duration (10000 samples in total) or 10 sec duration (1000 samples in total) to test the reliability of the tuning criterion when there are fewer data. The background firing rate \beta is set to 0. The preferred direction is set to \pi/3. We implemented 10 synthetic neurons distinguished by their modulation factors, varying from 1 to 10, which implies a monotonically increasing tuning. The first, uniformly distributed dataset is expected to give a full perspective of the tuning curve, since it explores all possible direction angles. The Gaussian distributed direction in the second dataset favors samples in a certain direction; it should not change the information about the tuning curves in terms of direction angle when compared to the first dataset. The third dataset has Gaussian distributed magnitude centered at 0.7, which means that for a given direction angle the instantaneous firing probability is higher than for the uniformly distributed magnitude with mean 0.5. Since randomness is involved in the generation of the velocity and spike trains, we evaluate the tuning depth criterion over 100 Monte Carlo trials.

Figure 3-5 shows the average tuning information, with standard deviation, across the 100 Monte Carlo trials evaluated for the 10 neurons with 100 sec duration. The dotted-line group is the tuning information estimated by the traditional tuning depth for all 3 datasets. In order to obtain a statistical evaluation across Monte Carlo runs, the traditional tuning depths were not normalized to [0, 1] for each realization, as is normally done with real data. The solid-line group is the tuning information estimated by the information theoretic analysis for all 3 datasets. Both groups show a higher information amount evaluated for each neuron from dataset 3 than from the other 2 datasets, as expected. However, the lines evaluated from dataset 1 and dataset 2 are grouped much closer for the information theoretic analysis, which means it is less biased by the prior distribution than the traditional tuning depth. Since more samples at a certain direction angle should not affect the information amount, the information theoretic analysis provides the more sensible estimate.

The tuning criterion is expected to represent the tuning information steadily across different Monte Carlo trials. However, for each neuron, directly comparing the standard deviation across Monte Carlo trials between the 2 methods is not fair, since their scales are quite different. We therefore use the correlation coefficient to measure the similarity of the tuning information curve along the 10 neurons between each trial and the average performance. The statistical similarity results over 100 trials for the 3 datasets, evaluated by the 2 methods for both durations, are shown in Table 3-2. For each dataset, a pair-wise Student's t-test was performed to see if the results are statistically different from the traditional tuning depth; the test is performed against the left-tailed alternative that the correlation coefficient of the traditional tuning depth is the smaller one.

For each dataset, the tuning information criterion given by the information theoretic analysis represents the information steadily, with higher correlation and smaller standard deviation in terms of similarity to the average tuning information. All the t-tests confirm the statistical performance improvement. In real data analysis, there is no guarantee that we always have sufficient data to estimate the tuning abilities. Note that with the shorter duration (1000 samples), the information theoretic criterion performs better than the traditional one. To distinguish the 10 neurons, we expect the criterion to accurately rank the neurons monotonically with the modulation factor from 1 to 10, even for a single realization of the spike train. Over the 100 Monte Carlo trials, the monotonicity of the tuning depth along the 10 neurons for the 3 datasets, by both methods and for the two durations, is shown in Table 3-3. For example, among the 100 Monte Carlo trials of the 1000-sample simulation, only 7 trials show monotonicity by the traditional tuning depth, while 62 trials show monotonicity by the information theoretic analysis. Note that the traditional tuning depth shows much poorer monotonicity for all the datasets compared to the information theoretic analysis; it even fails the monotonicity test on dataset 3. This is because the normalization term in the traditional tuning depth (the standard deviation of the firing rate) increases exponentially when both the modulation factor and the mean speed increase. When there are enough data, all the datasets show 100% monotonicity of the tuning information across the 10 neurons evaluated by the information theoretic analysis. Even with insufficient data, the information theoretic tuning again shows a much greater monotonicity percentage than the traditional tuning depth. Thus, the information theoretic tuning depth is more reliable for ranking neurons.
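The per-trial similarity measure used above, the correlation coefficient between each trial's metric curve across neurons and the trial-average curve, can be sketched as follows. The function name `trial_consistency` and the array layout are assumptions for illustration.

```python
import numpy as np

def trial_consistency(metric_trials):
    """Per-trial similarity to the Monte Carlo average.

    metric_trials : (n_trials, n_neurons) array, one tuning-metric value per
    neuron per trial. Returns, for each trial, the correlation coefficient
    between that trial's metric curve (across neurons) and the trial-average
    curve; a stable metric yields values near 1.
    """
    avg = metric_trials.mean(axis=0)
    return np.array([np.corrcoef(t, avg)[0, 1] for t in metric_trials])
```

A metric whose ranking of the neurons barely changes from trial to trial produces a high mean correlation, which is the sense in which the information theoretic criterion is called "steadier" than the traditional tuning depth.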


In Vivo Neural Recordings

Having tested the reliability of the information theoretic analysis of the tuning information on synthetic data, we now apply this criterion to our BMI data, where the neural activity is processed as binary spike trains sampled at 100 Hz. All the kinematic variables (hand position, velocity, and acceleration) are upsampled to be synchronized with the neural spike trains. The traditional tuning depth for all 185 neurons is computed from each of the kinematic variables and normalized to [0, 1], as shown in Figure 3-6. The top plot is the tuning depth computed from position, the middle from velocity, and the bottom from acceleration. The cortical areas where the micro-arrays were placed are also marked in the figure. We can see clearly that the most tuned neurons are in the primary motor cortex, regardless of which kinematic vectors are used to calculate the tuning depth. Figure 3-7A shows the information theoretic depth calculated from all 3 kinematic directions for all the neurons. Compared to Figure 3-6, in which the tuning depths are normalized to [0, 1] for all kinematics, the mutual information shows clearly that the velocities (the middle plot) convey relatively more tuning information than position or acceleration, as reported in the literature [Paninski et al., 2004a]. Since mutual information is self-normalized, it allows the relative assessment of tuning across different kinematic variables. For example, we found that neuron 121 is tuned more to position, while neuron 149 is tuned more to velocity. In Figure 3-7A, with the exception of the M1 cortical area, the neuronal information theoretic tuning depths seem almost flat, which could be erroneously interpreted as meaning that these neurons have little or no tuning. Actually, the mutual information is a nonlinear measure, emphasizing the large distances. Due to the large dynamic range of the mutual information, it is preferable to display the results on a logarithmic scale; the differences between neurons in the other cortical areas are much more clearly depicted in Figure 3-7B.


Information Theoretical Neural Encoding

This section implements an information theoretic methodology to address instantaneous neuronal encoding properties. The analysis is based on a statistical procedure for quantifying how neuronal spike trains directly encode arm kinematics. All of the evaluation is performed directly with the neural spike times, which preserves the fine time structure of the representation without requiring a rate code and its associated window size, commonly chosen by the experimenter.

Instantaneous Tuning Function in Motor Cortex

The literature contains many different types of tuning functions (i.e., linear, exponential, Gaussian) [Moran & Schwartz, 1999; Eden & Frank et al., 2004]. These nonlinear mathematical models are not optimal for dealing with real data because each neuron very likely has different tuning properties [Wise et al., 1998]. The accuracy of the tuning function estimation will directly affect the Bayesian decoding approach and, therefore, the results of the kinematic estimation in BMIs. The spike-triggered average (STA) is one of the most commonly used white-noise analyses [deBoer & Kuyper, 1968; Marmarelis & Naka, 1972; Chichilnisky, 2001], applicable when the data are uncorrelated. It has been applied, for instance, in the study of auditory neurons [Eggermont et al., 1983], retinal ganglion cells [Sakai & Naka, 1987; Meister et al., 1994], lateral geniculate neurons [Reid & Alonso, 1995], and simple cells in primary visual cortex (V1) [Jones & Palmer, 1987; McLean & Palmer, 1989; DeAngelis et al., 1993]. The STA provides an estimate of the first linear term in a polynomial series expansion of the system response function, under the assumptions that the raw stimulus distribution is spherically or elliptically symmetric (a whitening operation is then necessary) and that the raw stimuli and the spike-triggered stimuli have different means. If the system is truly linear, the STA provides a complete characterization. This linear approximation was improved by Simoncelli, Paninski, and colleagues [Simoncelli et al., 2004]. By parametric model identification, the nonlinear relationship between the neural spikes and the stimuli was estimated directly from the data, which is more reliable than simply assuming linear or Gaussian dependence. In our sequential estimation for BMI studies [Wang et al., 2007b], it provides a very practical way to acquire the prior knowledge (the tuning function) for decoding purposes.

This technique estimates the tuning function with a Linear-Nonlinear-Poisson (LNP) model [Simoncelli et al., 2004], composed of a linear filter, followed by a static nonlinearity, followed by a Poisson model, as shown in Figure 3-8. The linear filter projects the multi-dimensional kinematic vector onto its weight vector k (representing a direction in space), which produces a scalar value that is converted by a nonlinear function f and applied to the Poisson spike-generating model as the instantaneous conditional firing probability p(spike \mid \mathbf{k} \cdot \mathbf{x}) for that particular direction in the high-dimensional space. In our work, the optimal linear filter actually projects the multi-dimensional kinematic vector \mathbf{x}, built from the position, velocity, and acceleration in x and y, along the direction where they differ most from the spike-triggered kinematic vectors. This projection could represent the transformation from kinematics to muscle activation [Todorov, 2000]. The nonlinear function f represents the neuron's nonlinear response, which accounts for all of the processing of the spinal cord and deep brain structures that conditions the signal for activation operations [Todorov, 2000]. The Poisson model, which encodes the randomness of the neural behavior, generates spike trains with an instantaneous firing probability defined by the nonlinear output.
This modeling method assumes that the generation of spikes depends only on the recent stimulus and is independent of previous spike times.


Previous work [Paninski et al., 2004a; Paninski et al., 2004b] utilized a window-in-time approach to build a smoother statistical tuning function from temporal kinematics to instantaneous neural firing rate. In the encoding stage, the kinematic variable within a window that embeds temporal information before and after the current neuron firing time is used as a high-dimensional input vector. The linear-nonlinear stage of the LNP model generates a one-dimensional output as the estimated firing rate for the Poisson stage. However, the sequential estimation model of our BMI requires just the opposite (i.e., we need to predict a sequence of kinematics from the current neural activity), especially for the neurons in M1. If we inferred the kinematics over a window with respect to a particular spike, the state estimation error could accumulate easily as the estimate is recursively propagated into the next time iteration to build the vector within the window. Thus, a one-to-one mapping between the instantaneous kinematics and the neural activities is of paramount importance for online decoding. The other issue is to appropriately estimate the optimal delay in the instantaneous functional mapping. Due to the smaller amount of data, the instantaneous decoding is expected to be noisier (fewer data to identify the transfer function), but there are also possible advantages. Compared to the windowed method of Paninski et al. [2004b], instantaneous estimation works directly in the dynamic range of the kinematic signals instead of being affected by all the temporal information embedded within the window. To deal with the sensitivity issue in neural tuning identification, the method works with the full kinematic vector containing the instantaneous position, velocity, and acceleration, to include the information that each kinematic variable conveys about tuning, which ultimately is what is needed in BMI decoding.
Estimation of the instantaneous encoding depends upon the ability to estimate the appropriate time delay between motor cortical neuron activity and kinematics [Wu et al., 2006].
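The regularized spike-triggered regression described below (Equation 3-9) can be sketched in a few lines, including the causal alignment of the kinematic vector with the delayed spike train. This is a minimal numpy sketch, not the dissertation's code; the regularization value `psi` and the lag handling are illustrative.

```python
import numpy as np

def spike_triggered_filter(kin, spikes, lag, psi=1e-3):
    """Estimate the linear filter k = (E[x x^T] + psi*I)^(-1) E[x | spike],
    with the kinematic vectors delayed by `lag` samples relative to the
    spike train (causal delay).

    kin    : (T, d) instantaneous kinematic vectors, e.g. [p, v, a] in x and y
    spikes : (T,) binary spike train
    """
    x = kin[:-lag] if lag > 0 else kin      # x_{t-lag}, aligned with spike_t
    s = spikes[lag:] if lag > 0 else spikes
    R = (x.T @ x) / len(x)                  # autocorrelation E[x x^T]
    sta = x[s == 1].mean(axis=0)            # spike-triggered average E[x|spike]
    return np.linalg.solve(R + psi * np.eye(x.shape[1]), sta)
```

For whitened (uncorrelated) kinematics, R is close to the identity and the estimate reduces to the spike-triggered average itself, which is the connection to the STA discussed above.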


67 Due to the propagation effects of signals in th e motor and peripheral nervous system and to preserve causality, the intended movement is executed after the motor cortical neuron fires (Figure 3-9). In the temporal kinematic encoding by LN P models, a window that usually samples 300 msec before and 500 msec after the current neural firing rate [Paninski et al ., 2004b] is used to construct the high dimensional kinematic vector. Although the causal time delay is already taken into account, the temporal kinematic information before the neuron fires actually has no causal relation with respect to the current spike. For the instantaneous kinematic encoding model, the optimum time delay has to be estimated to draw as much information as possible. The instantaneous motor cortical neural activity can be modelled as ) (lag t tk f x (3-7) ) (t tPoisson spike (3-8) where lag t x is the instantaneous kinematics vector defined as T lag t y y y x x x ] [ a v p a v p with 2dimentional information of position, veloci ty and acceleration with causal time delay. k is a linear filter, representing the preferred instantaneous direction in high-dimensional kinematics space. The weight estimation of the linear filter is based on the standard technique of spiketriggered regression. ] [ ) ] [ (| 1 lag t spike x lag t T lag tt lag tE E k x x x (3-9) Equation 3-9 represents the least square so lution for the linear adaptive filter, where ] [lag t T lag tE x x gives the autocorrelation matrix R of the input vector considering causal time delay. is a regularization factor, which avoids ill-conditioning in the inverse. In the experiment, is chosen to maximize the linear fi lter performance. From a statistical


perspective, E[x_{t−lag} | spike_t] mimics the role of the cross-correlation vector P between the input and the binary spike train considering a causal time delay. Therefore, Equation 3-9 reduces to a conditional expectation of the binary spike train (i.e., this linear filter gives the spike-triggered average instantaneous kinematic vector E[x_{t−lag} | spike_t] scaled by the decorrelated kinematic data (E[x_{t−lag} x_{t−lag}^T])^{−1}). λ_t is the instantaneous firing rate in an inhomogeneous Poisson spike generator. For the time interval selected for the spike analysis (i.e., the time interval valid for a Poisson assumption in the collected data, which has to be experimentally determined), a number is randomly drawn from a normalized uniform distribution (i.e., 0 to 1) and compared with the instantaneous conditional firing probability. If the number is smaller than the probability, then a spike is generated in this time interval. This modeling approach is therefore intrinsically stochastic, which carries implications (large variance) for on-line real-time implementations. f is the nonlinear function estimated by an intuitive nonparametric technique [Chichilnisky, 2001; Simoncelli et al., 2004] as the conditional probability density p(spk | k · x), directly from the data. It is the ratio of two kernel-smoothed histograms: the marginal p(k · x) and the joint distribution p(spk, k · x). The procedure is the same as described in Figure 3-4; the only difference is that the joint and marginal pdfs are plotted in terms of the filtered kinematics k · x. The histogram of the spike-triggered angle is smoothed by a Gaussian kernel according to Silverman's rule [Silverman, 1981] and normalized to approximate the joint probability p(spk, k · x), depicted as the solid red line in the upper plot of Figure 3-10. In other words, the direction angle is accounted for in the histogram during the corresponding direction


angle bin only when there is a spike. Then the conditional probability density p(spk | k · x), depicted as the line in the bottom plot of Figure 3-10, is obtained by dividing the kernel-smoothed histogram of p(spk, k · x) by the kernel-smoothed histogram of k · x (dotted line in the upper plot of Figure 3-10), which in fact implements Bayes' rule,

p(spk = 1 | k · x) = p(spk = 1, k · x) / p(k · x)    (3-10)

where p(spk = 0 | k · x) = 1 − p(spk = 1 | k · x). When p(k · x) is 0, p(spk = 1 | k · x) is set to 0. The peak in the conditional probability of Figure 3-10 is associated with the maximal firing probability, which is linked with specific values of the kinematic variables and produces an increase in the firing rate of the neuron. Likewise, the region of low probability shows a deviation from the spontaneous firing rate of the neuron. These two portions of the curve (the most difficult to estimate well because they are at the tails of the distribution) are responsible for the modulation that is seen in the rasters of the spike train data when observed along with the kinematic variables, and that is fundamental for BMI decoding performance.

Information Theoretic Delay Estimation

The causal time delay can also be estimated by information theoretic analysis. Here, we are interested in the optimum time lag, which extracts the most instantaneous kinematic information corresponding to the neural spike event. The well-established concept of mutual information [Reza, 1994] as a metric for evaluating a neuron's instantaneous receptive properties is based on information theory and captures much more of the neuronal response [Paninski et al., 2004b; Wang et al., 2007b]. We define a tuned cell as a cell that extracts more information between the linearly filtered kinematics and its spiking output. If a neuron is tuned to a preferred direction in high-dimensional space, the mutual information between the spike and the delayed


linear filter kinematics vector is first computed simply as a function of the time lag after a spike, as in Equation 3-11:

I(lag) = Σ_{spk=0}^{1} ∫ p(k · x(lag)) p(spk | k · x(lag)) log2[ p(spk | k · x(lag)) / p(spk) ]    (3-11)

where p(k · x(lag)) is the probability density of the linearly filtered kinematics as a function of the time lag, which can be easily estimated by a Parzen window [Parzen, 1962]. p(spk) can be calculated simply as the percentage of the spike count over the entire spike train. p(spk | k · x) is exactly the nonlinear function f in the LNP model. The time delay with the highest mutual information is assigned as the optimum time lag for each neuron. The kinematics at the optimum time lag carries maximally the causal information of the neural spike. In the encoding stage, the 6-dimensional kinematic vectors are first synchronized at the optimum delay for each neuron, then input to the LNP tuning model to generate the estimated firing rates according to Equation 3-7. To test the encoding ability of the instantaneous tuning model, the neuron firing rate is obtained by smoothing the real spike train with a Gaussian kernel. The correlation coefficient is then calculated between the two firing rates to measure the quality of encoding. As mentioned in the previous section, the windowed kinematic vector is usually chosen as 300 msec before and 500 msec after the current neural spike, which already takes into account the causal time delay of the motor cortical neurons. We selected a possible delay range from 0 to 500 ms after a neuron spikes to estimate the optimum time delay for our instantaneous tuning function. The regularization factor in the spike-triggered average stage is experimentally set as 10^{-7}, and the kernel size to smooth the histogram of the probability density is set according to Silverman's rule [Silverman, 1981]. For all 185 neurons, the mutual information as a function of


time delay was obtained from 10,000 continuous samples (100 seconds) during movement. The time delay with the highest mutual information was assigned as the best time lag for each neuron. Since neurons in M1 show more tuning information than other cortical areas, here we study the 5 neurons that show the highest tuning: neurons 72, 77, 80, 99, and 108. Figure 3-11 shows the mutual information as a function of the time delay after spike occurrence. The best time lags are marked by a cross on each curve, and are 110 ms, 170 ms, 170 ms, 130 ms and 250 ms, respectively. It is interesting to observe that not all the neurons have the same time delay, although all of these neurons are in M1. During the analysis, a different time delay is used for each neuron. The average best time delay for all 185 neurons was 220.108 ms, which is close to the results mentioned in the literature [Wu et al., 2006].

Instantaneous vs. Windowed Tuning Curves

The windowed encoding approach yields a widely accepted exponentially increasing nonlinear function f after linear projection [Paninski et al., 2004a; Paninski et al., 2004b]. However, for BMIs we are proposing an instantaneous and global (i.e., across kinematic variables) tuning estimation; therefore it is important to compare and evaluate the two tuning methodologies. For each neuron, we chose 7 different window sizes to filter the kinematic vector x_{t−lag} = [p_x v_x a_x p_y v_y a_y]^T and calculated the nonlinearity using the methods described in Figure 3-10. The biggest window size is 300 ms before and 500 ms after the current neural spike, noted as [-300, 500] ms, which has been used in [Paninski et al., 2004b] for motor cortical neuron tuning analysis. Each window then shrinks by 50 ms at the left and right extremes, such as [-250, 450], [-200, 400], ..., until the smallest window [0, 200] ms. Figure 3-12 shows the nonlinearity of 4 M1 neurons estimated by windowed kinematics with the 7 different window sizes, each plotted in


different colors. The instantaneous nonlinear tuning with optimum delay is emphasized as a thick red line. As we can observe from the figures, the tuning curves vary with window size, particularly in the high-tuning region. However, the middle part of the nonlinearity is very stable across all the window sizes, including the instantaneous estimation. Compared to the windowed tuning, the instantaneous model produces a smaller dynamic range of projected values (x-axis) because it works directly in the dynamic range of the kinematics without involving time-embedded information. We chose the correlation coefficient (CC) as the criterion to evaluate the similarity between the nonlinear tuning curves estimated from each windowed kinematics and the instantaneous one, within the range specified by the instantaneous model. Seven histograms of correlation coefficients are shown in Figure 3-13, where the y-axis shows the percentage of neurons (out of 185) with a given CC. We can see that 98.92% of the neurons have instantaneous tuning curves with a similarity over 0.9 compared to the one obtained with window size [-300, 500] ms. More than half (58.38%) of the neurons have a similarity over 0.9 for the [-50, 250] ms window. However, less than half (41.62%) of the neurons have a similarity over 0.9 for the [0, 200] ms window, because this window is not big enough to include the optimum causal delay, which is on average 220 ms. Since the summation for the same window size (color bar) is 100%, the similarity of the less similar neurons (CC < 0.9) is distributed across the other CC bins. Also notice that among the windowed methods, the one with the smallest window that still includes the optimum time delay is the closest to the instantaneous estimated tuning. The similarity between the windowed and instantaneous methods is rather surprising, and builds confidence that, in spite of its computational simplicity, the instantaneous method quantifies neural tuning properties appropriately.
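The two tuning-estimation stages compared above can be sketched together as follows. This is a hedged illustration rather than the exact implementation: the regularization δ, the bin count, and the kernel width are free parameters, and a discrete Gaussian kernel stands in for the Silverman-rule smoother.

```python
import numpy as np

def estimate_tuning(X, spikes, delta=1e-7, bins=50, sigma=2.0):
    """Sketch of (i) the regularized spike-triggered regression for the
    linear filter k (Equation 3-9) and (ii) the ratio of kernel-smoothed
    histograms for the nonlinearity f (Equation 3-10).

    X      : (T, d) lagged kinematic vectors
    spikes : (T,) binary spike train
    """
    # (i) k = (E[x x^T] + delta I)^(-1) E[x | spike]
    R = X.T @ X / len(X)                       # autocorrelation matrix
    sta = X[spikes == 1].mean(axis=0)          # spike-triggered average
    k = np.linalg.solve(R + delta * np.eye(X.shape[1]), sta)

    # (ii) smoothed histograms of k.x (marginal) and spike-triggered k.x
    kx = X @ k
    edges = np.linspace(kx.min(), kx.max(), bins + 1)
    h_all = np.histogram(kx, edges)[0].astype(float)
    h_spk = np.histogram(kx[spikes == 1], edges)[0].astype(float)
    half = int(3 * sigma)
    g = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
    g /= g.sum()                               # normalized Gaussian kernel
    s_all = np.convolve(h_all, g, mode="same")
    s_spk = np.convolve(h_spk, g, mode="same")
    # f = p(spk, k.x) / p(k.x), set to 0 where the marginal vanishes
    f = np.divide(s_spk, s_all, out=np.zeros_like(s_all), where=s_all > 0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return k, centers, f
```

On synthetic data whose spiking probability increases with one kinematic component, the recovered filter points along that component and the estimated nonlinearity increases along the projection, as expected.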


One possible reason for the differences at both extremes of the tuning curves is insufficient data to provide an accurate estimation there, in particular because of the division in Equation 3-10. Recall that this is actually the important part of the tuning curve for BMIs, because it is in this portion that the neuron firing shows modulation with the kinematic variable. In particular, neurons 80 and 99 (as many others) show a large mismatch at the high firing rate level (right end of the curve). Both neurons demonstrate a lower firing probability in the instantaneous curve compared to the windowed curves. Neuron 80 also shows a saddle-like behavior very different from the exponential increase. Therefore these behaviors need to be further investigated.

Instantaneous vs. Windowed Encoding

Since the ultimate goal of the tuning analysis is to transform spike timing information into the kinematic space, here we compare both tuning methods on our experimental data set. We select neuron 80 and neuron 99 to compare the encoding ability of the windowed and the instantaneous tuning models against the real kinematic signals (Figures 3-14A and 3-14B). From previous studies, these 2 neurons are known to be among the most sensitive neurons for BMI modeling [Sanchez et al., 2003; Wang et al., 2007b], and they are also among the ones that show the larger mismatch in the high firing probability range (right extreme end of Figure 3-12). In each plot, the pink bars in the first and second rows represent the neural spike train. The red dashed line superimposed on the spike train is the firing rate estimated by kernel smoothing. In the top panel, the blue solid line superimposed on the spike train is the firing rate estimated by instantaneous tuning, while in the second panel, the green solid line superimposed on the spike train is the firing rate estimated by windowed tuning with 300 ms before and 500 ms after the current neuron firing. 
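The evaluation used in this comparison, smoothing the binary spike train with a normalized Gaussian kernel and correlating the result against an estimated rate, can be sketched as follows (the kernel width in bins is a free parameter; names are illustrative):

```python
import numpy as np

def smoothed_rate(spikes, sigma=5.0):
    """Estimate a firing rate by convolving the binary spike train with a
    normalized Gaussian kernel (sigma measured in bins)."""
    half = int(4 * sigma)
    g = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
    g /= g.sum()
    return np.convolve(spikes.astype(float), g, mode="same")

def encoding_cc(est_rate, spikes, sigma=5.0):
    """Correlation coefficient between a model's estimated firing rate and
    the kernel-smoothed empirical rate (the encoding-quality score)."""
    return np.corrcoef(est_rate, smoothed_rate(spikes, sigma))[0, 1]
```

A model that reproduces the smoothed empirical rate exactly scores CC = 1, giving a sanity check on the metric.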
To check the animal's behavior simultaneously with the neural activity, the


third and fourth panels show the re-scaled 2-D position and velocity (blue for x, green for y) after synchronization at the optimum delay. We can clearly observe that, for both neurons (Figure 3-14A and Figure 3-14B), the instantaneous model gives a smoother estimated firing rate than the noisy estimation of the windowed model. We found that the linear filter outputs in the windowed model are very noisy, because each is a projection of the high-dimensional time-embedded kinematic vector, which increases the range of the independent variable and so creates larger variability in the spike rates. Moreover, the over-estimation at the high firing rate end of the nonlinearity curve leads to the extraneous large peaks in the green line. As can be expected, since the tuning is higher there will be more spikes, and so the intensity function estimate is very high and noisier, as seen in the green curve. It is also very interesting to notice that after causal alignment, both neurons demonstrate clear time alignment (negative correlation) between the hand velocity trajectory and the peaks of firings, which reinforces the evidence for neural kinematic encoding. To quantify the encoding comparisons, the correlation coefficient was computed between the neuron's firing rate and the estimated firing rates from the windowed and instantaneous models. The kernel size smooths the spike train to enable the estimation of the CC, but it affects the results of the similarity measure. Figures 3-15A and B show results comparing the CC for the same 2 neurons vs. different kernel sizes. Correlation coefficients for the instantaneous model are always greater than those of the windowed model across kernel sizes. Here we choose to display the kernel size that maximizes the similarity. For neuron 99, the correlation coefficient between the instantaneous model and the firing rate is 0.6049, which is greater than 0.4964 for the windowed model. 
For neuron 80, the correlation coefficient between the firing rate estimated with the instantaneous model and the firing rate from the real spike train is 0.6393, which is greater than


0.5841 given by the windowed model. Therefore, the instantaneous model shows better encoding ability.

Discussion

The traditional criterion of estimating tuning depth from windows of data does not seem the most appropriate for the design of BMIs using sequential estimation algorithms on spike trains. Here we present instead an information theoretic tuning analysis of instantaneous neural encoding properties that relates the instantaneous value of the kinematic vector to neural spiking. The proposed methodology is still based on the LNP model, and the information theoretic formulation provides a more detailed perspective when compared with the conventional tuning curve because it statistically quantifies the amount of information between the kinematic vectors triggered by the spike train. As a direct consequence, it can estimate the optimum time delay between motor cortex neurons and behavior caused by the propagation effects of signals in the motor and peripheral nervous system. The similarities and differences between the windowed and instantaneously evaluated tuning functions were also analyzed. We conclude that the instantaneous tuning curves of most of the neurons show over 0.9 correlation coefficients in the central region of the tuning curve, which unfortunately is not the most important region for BMI studies. There are marked differences in the high-tuning region of the curves, both in the dynamic range and in the estimated value. The windowed model works on a time-embedded vector, which spreads the linear output k · x over a wider range. Since the pdf integral is always 1, the windowed model flattens the marginal distribution p(k · x). In the time segments when the neuron keeps firing, the overlapping windows make the linear filter output k · x change slowly. This results in more spike-triggered samples in a small neighborhood of k · x. Therefore, the estimate of the joint distribution


p(spk, k · x) becomes higher. Both consequences contribute to the overestimation of tuning at the high firing rate part of the windowed nonlinear curve. The instantaneous model works directly in the dynamic range of the kinematics and is sensitive only to the corresponding neuron spike timings. It estimates the firing probability more accurately, without distortions from the temporal neighborhood information. However, we create a vector with all of the kinematics (position, velocity, acceleration) to estimate the tuning better (i.e., to obtain more sensitivity) from the data. This has the potential to mix tuning information from the different kinematic variables and different directions if they are not exactly the same. When the different kinematic variables display different sensitivities in the input space, after projection along the weight filter direction they will peak at different values of k · x in the nonlinear curve, which then results in the saddle-like feature observed in Figure 3-12. The other potential shortcoming is that less data is used, so the variability may be higher. However, at this time one still does not know which tuning curve provides a better estimate for the instantaneous tuning model required in the encoding and decoding stages of BMIs. Ultimately, the instantaneous model can produce equivalent or better encoding results compared to existing techniques. This outcome builds confidence to directly implement the instantaneous tuning function in future online decoding work for Brain-Machine Interfaces.


Table 3-1. Assignment of the sorted neural activity to the electrodes

                       Right PMA   Right MI     Right S1      Right SMA     Left MI
Aurora (left handed)   1-66 (66)   67-123 (57)  123-161 (38)  162-180 (19)  181-185 (5)

Figure 3-1. The BMI experiment's 2-D target reaching task. The monkey moves a cursor (yellow circle) to a randomly placed target (green circle), and is rewarded if the cursor intersects the target

Figure 3-2. Tuning plot for neuron 72


Figure 3-3. A counterexample of neuron tuning evaluated by tuning depth. The left plot is a tuning plot of neuron 72 with tuning depth 1. The right plot is for neuron 80 with tuning depth 0.93

Figure 3-4. The conditional probability density estimation (upper plot: marginal probability and joint probability; lower plot: conditional probability of a spike)


Figure 3-5. The average tuning information across Monte Carlo trials for different neurons by the different evaluations (datasets 1-3, by tuning depth and by information analysis)

Table 3-2. The statistical similarity results comparison

Sample #  Method                            Dataset 1        Dataset 2        Dataset 3
10^3      Traditional tuning depth          0.9705±0.0186    0.9775±0.0133    0.9911±0.058
          Information theoretic analysis    0.9960±0.0024    0.9964±0.0021    0.9988±0.0008
          t-test (p value)                  1 (9.52×10^-26)  1 (1.37×10^-26)  1 (5.68×10^-24)
10^4      Traditional tuning depth          0.9976±0.0013    0.9977±0.0014    0.9991±0.0005
          Information theoretic analysis    0.9997±0.0002    0.9996±0.0002    0.9999±0.0001
          t-test (p value)                  1 (1.60×10^-26)  1 (4.57×10^-25)  1 (6.00×10^-19)

Table 3-3. The comparison of the percentage of Monte Carlo results that are monotonically increasing

Sample #  Method                            Dataset 1  Dataset 2  Dataset 3
10^3      Traditional tuning depth          7%         3%         0%
          Information theoretic analysis    62%        57%        76%
10^4      Traditional tuning depth          76%        84%        0%
          Information theoretic analysis    100%       100%       100%


Figure 3-6. Traditional tuning depth for all the neurons computed from the three kinematics


Figure 3-7. Information theoretic tuning depth for all the neurons computed from the 3 kinematics plotted individually. A) In regular scale. B) In logarithmic scale


Figure 3-8. Block diagram of the Linear-Nonlinear-Poisson model (Kinematics → Linear k → Nonlinear f → Poisson model → Spikes)

Figure 3-9. Sketch map of the time delay between the neuron spike train (bottom plot) and the kinematic response (upper plot)


Figure 3-10. The conditional probability density estimation (upper plot: marginal p(K·X) and joint p(spk, K·X); lower plot: the nonlinearity, i.e., the conditional probability p(spk|K·X))

Figure 3-11. Mutual information as a function of time delay for 5 neurons (72, 77, 80, 99, 108)


Figure 3-12. Nonlinearity estimation for neurons, comparing the optimum-delay instantaneous estimate with 7 window sizes from [-300, 500] ms to [0, 200] ms. A) Neuron 80. B) Neuron 72. C) Neuron 99. D) Neuron 108


Figure 3-12. Continued. C) Neuron 99. D) Neuron 108


Figure 3-13. Correlation coefficient between the nonlinearity calculated from windowed kinematics and from the instantaneous kinematics with optimum delay


Figure 3-14. Comparison of encoding results by instantaneous modeling and windowed modeling. A) Neuron 99. B) Neuron 80


Figure 3-15. Comparison of encoding similarity by instantaneous modeling and windowed modeling across kernel size. A) Neuron 99. B) Neuron 80


CHAPTER 4
BRAIN MACHINE INTERFACES DECODING IN SPIKE DOMAIN

The Monte Carlo Sequential Estimation Framework for BMI Decoding

We have thus far presented background on the difference between simulation and BMI real data, and have elaborated on the Monte Carlo sequential estimation algorithm. Based on this information, we now present a systematic framework for BMI decoding using a probabilistic approach. The decoding of Brain Machine Interfaces is intended to infer the primate's movement from the multi-channel neuron spike trains. The spike times from multiple neurons are the multi-channel point process observations. The kinematics is the state that needs to be derived from the point process observations through the tuning function by our Monte Carlo sequential estimation algorithm. Figure 4-1 provides a schematic of the basic process. The decoding schematic for BMIs is shown in Figure 4-1 as the right-to-left arrow. The signal processing begins by first translating the neuron spike times collected from the real data into a sequence of 1s (there is a spike) and 0s (no spike). A time interval small enough should be chosen to guarantee the Poisson hypothesis (i.e., only a few intervals have more than one spike). If the interval is too small, however, the computational complexity is increased without any significant improvement in performance. One must also be careful when selecting the kinematic state (position, velocity, or acceleration) for the decoding model, since the actual neuron encoding is unknown. The analysis presented here will consider a vector state with all three kinematic variables. The velocity is estimated as the difference between the current and previous recorded positions, and the acceleration is estimated by first differences from the velocity. For fine timing resolution, all of the kinematics are interpolated and time-synchronized with the neural spike trains.
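The preprocessing just described can be sketched as follows. This is a simplified illustration (the interpolation step is omitted and names are hypothetical): spike time stamps are binned into a 0/1 point process, and velocity and acceleration are derived from the recorded positions by first differences.

```python
import numpy as np

def preprocess(spike_times, positions, dt=0.01):
    """Bin spike time stamps into a binary point-process sequence and
    build the kinematic state vector by first differences (dt = 10 ms).

    spike_times : spike time stamps (seconds) for one neuron
    positions   : (T, 2) recorded 2-D positions sampled every dt
    """
    T = len(positions)
    spikes = np.zeros(T, dtype=int)
    idx = np.floor(np.asarray(spike_times) / dt).astype(int)
    spikes[idx[idx < T]] = 1            # assign 1 even if a bin holds >1 spike

    vel = np.vstack([np.zeros((1, 2)), np.diff(positions, axis=0)]) / dt
    acc = np.vstack([np.zeros((1, 2)), np.diff(vel, axis=0)]) / dt
    # state vector x_t = [p_x v_x a_x p_y v_y a_y]
    state = np.column_stack([positions[:, 0], vel[:, 0], acc[:, 0],
                             positions[:, 1], vel[:, 1], acc[:, 1]])
    return spikes, state
```

Two spikes falling in the same 10-ms bin still produce a single 1, consistent with the binning rule stated above.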


It is interesting to note that in black box modeling, the motor BMI is posed as a decoding problem (i.e., a transformation from motor neurons to behavior). However, when we use Bayesian sequential estimation, decoding alone is insufficient to solve the modeling problem. In order to implement decoding it is important to also model how each neuron encodes movement, which is exactly the observation model f(·) in the tuning analysis of Chapter 3. Therefore, one sees that generative models do in fact require more information about the task and are therefore an opportunity to investigate neural functionality further. Here we use the instantaneous motor cortical neural activity modeled in Chapter 3 as

λ_t = f(k · x_{t−lag})    (4-1)

spike_t ~ Poisson(λ_t)    (4-2)

where, as before, x_{t−lag} is the instantaneous kinematics vector, defined as x_{t−lag} = [p_x v_x a_x p_y v_y a_y 1]^T, with 2-dimensional (x, y) information of position, velocity and acceleration, plus a bias, at a causal time delay depending on the data. For BMI, the kinematic vector in the Linear-Nonlinear-Poisson model must be read from the experiment for every spike occurrence, since the task is dynamic, taking into consideration the causal delay between neural firings and kinematic outputs [Wang et al., 2007b]. The linear filter projects the kinematics vector x onto its weight vector k (representing a preferred direction in space), which produces a scalar value that is converted by a nonlinear function f and applied to the Poisson model as the instantaneous conditional firing probability λ_t for that particular direction in space, p(spike | k · x). The filter weights are obtained optimally by least squares as k = (E[x_{t−lag} x_{t−lag}^T] + δI)^{−1} E[x_{t−lag} | spike_t], where E[x_{t−lag} | spike_t] is the conditional expectation of the kinematic data given the spikes. The


parameter δ is a regularization parameter to properly condition the inverse. The optimal linear filter actually projects the multi-dimensional kinematic vectors along the direction where they differ the most from the spike-triggered kinematic vectors. The nonlinear encoding function f for each neuron was estimated using an intuitive nonparametric technique [Chichilnisky, 2001; Simoncelli et al., 2004]. Given the linear filter vector k, we drew the histogram of all the kinematic vectors filtered by k and smoothed the histogram by convolving with a Gaussian kernel. The same procedure was repeated to draw the smoothed histogram of the outputs of the spike-triggered velocity vectors filtered by k. The nonlinear function f, which gives the conditional instantaneous firing rate to the Poisson spike-generating model, was then estimated as the ratio of the two smoothed histograms. Since f is estimated from real data by the nonparametric technique, it provides more accurate nonlinear properties than just assuming an exponential or Gaussian function. In practice, it can be implemented as a look-up table for its evaluation in testing, as

p(spike | k · x_{test,t}) = Σ_j κ(k · x_{test,t} − k · x_{spike,training,j}) / Σ_i κ(k · x_{test,t} − k · x_{training,i})    (4-3)

where κ is the Gaussian kernel, x_{test,t} is a possible sample we generate at time t in the test data, x_{training,i} is one sample of the velocity vector in the training data, and x_{spike,training,j} is the corresponding spike-triggered sample. In our calculation, we approximate the nonlinearity for each neuron by a 2-layer MLP with 10 hidden logsig PEs trained by Levenberg-Marquardt back-propagation. The causal time delay is obtained by maximizing the mutual information as a function of time lag for each neuron from 10,000 continuous samples of the kinematic variables [Wang et al., 2007b], as described in Chapter 3. Here we further assume that the firing rates of all the


neuron channels are conditionally independent in implementing the whole Monte Carlo sequential estimation (SE) algorithm with the encoding and decoding process for BMI. First, the neural activity data and kinematics are preprocessed. The only information we store about the neural activities is the spiking time. In our preprocessing, we check every time interval and assign 1 if there is a spike; otherwise, we assign 0. The interval should be small enough so that only a few intervals have more than one spike; in that case, we still assign 1. The multi-channel spike trains are generated as our point process observations. We identify the kinematic variable as the state we are interested in reconstructing, or the one that carries the most information as determined by the information theoretic tuning depth. This variable could be a kinematic vector during a window, which contains both spatial and temporal information. It could also be an instantaneous kinematic variable resulting from a spike with some time delay specific to the motor cortex. The velocity is derived as the difference between the current and previous recorded positions, and the acceleration is derived the same way from the velocity. All the kinematics are interpolated to be synchronized with the neural spike trains. Secondly, the kinematic dynamic system model F_k, as stated in Equation 2-6 in Chapter 2, and the tuning function between the neural spike train and the primate's kinematics are estimated from the existing (training) data. The system model is used to linearly predict the next kinematic value from the current one as x_{k+1} = F_k x_k + ε_k. Since the kinematics are continuous values, F_k can be estimated easily by the least squares solution. The tuning function λ_t = f(k · x_{t−lag}) is designed as a linear-nonlinear-Poisson model for each neuron to describe the conditional firing rate as a function that encodes the kinematics we are interested in reconstructing. 
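The least-squares fit of the system model F_k mentioned above can be sketched as follows (the regularization δ plays the same conditioning role as elsewhere; names are illustrative):

```python
import numpy as np

def fit_state_transition(states, delta=1e-7):
    """Least-squares fit of the linear kinematic system model
    x_{k+1} = F x_k + eps_k used to propagate the state samples.

    states : (T, d) sequence of kinematic state vectors
    """
    X0, X1 = states[:-1], states[1:]
    # F = E[x_{k+1} x_k^T] (E[x_k x_k^T] + delta I)^(-1)
    R = X0.T @ X0 / len(X0) + delta * np.eye(states.shape[1])
    P = X1.T @ X0 / len(X0)
    return P @ np.linalg.inv(R)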
The details of the estimation of the linear parameter k and the nonlinear function f were already discussed in Chapter 3.
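Before the step-by-step listing, the whole decoding loop can be sketched as follows. This is a hedged, simplified illustration (names are hypothetical): a Gaussian stands in for the histogram-estimated state-noise distribution, each tuning function returns the instantaneous firing probability per particle, a Bernoulli term approximates the point-process likelihood for small bins, and the state estimate is taken as the posterior mean (collapse) rather than the MAP.

```python
import numpy as np

def monte_carlo_decode(spike_obs, F, tuning, x0, noise_cov, n_particles=1000,
                       rng=None):
    """Sketch of Monte Carlo sequential estimation decoding.

    spike_obs : (T, n_neurons) binary point-process observations
    F         : (d, d) kinematic state-transition matrix
    tuning    : list of n_neurons functions mapping particle states (n, d)
                to instantaneous firing probabilities (n,)
    x0        : (d,) initial state; noise_cov : (d, d) state-noise covariance
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(x0)
    particles = x0 + rng.multivariate_normal(np.zeros(d), noise_cov, n_particles)
    estimates = []
    for obs in spike_obs:
        # 1. predict new state samples through the system model
        particles = particles @ F.T + rng.multivariate_normal(
            np.zeros(d), noise_cov, n_particles)
        # 2. weight each sample by the joint likelihood of the observed
        #    spikes, assuming conditionally independent neurons
        logw = np.zeros(n_particles)
        for j, f in enumerate(tuning):
            lam = np.clip(f(particles), 1e-12, 1 - 1e-12)
            logw += np.where(obs[j] == 1, np.log(lam), np.log(1 - lam))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # 3. collapse: posterior mean as the state estimate
        estimates.append(w @ particles)
        # 4. resample particles according to the weights
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.asarray(estimates)
```

The loop mirrors the prediction, weighting, collapse, and resampling stages that the numbered steps below spell out in full.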


Provided with this prior knowledge of the system model and tuning function, we can implement the Monte Carlo sequential estimation adaptive filtering algorithm for point processes. By generating a sequential set of samples, the posterior density p(x_k | N_k^j) is recursively estimated given the spike train of neuron j. At each time iteration k, the joint posterior density p(x_k | N_k) is approximated by the product of all the marginals p(x_k | N_k^j), which assumes conditional independence between neurons. The state is determined by the maximum a posteriori or by the expectation obtained by collapsing the Gaussian kernel on the set of samples. The following steps represent the entire process.

Step 1: Preprocessing and analysis.
1. Generate spike trains from the stored spike times.
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector x to be reconstructed.

Step 2: Model estimation (encoding).
1. Estimate the kinematic dynamics of the system model: F_k = E[x_k x_{k-1}^T] (E[x_{k-1} x_{k-1}^T] + δI)^{-1}
2. For each neuron j, estimate the tuning function:
   1) Linear model: k^j = (E[x x^T] + δI)^{-1} E[x | spike^j]
   2) Nonlinear function: f^j(k^j · x) = p(spike^j, k^j · x) / p(k^j · x)
   3) Implement the inhomogeneous Poisson generator.

Step 3: Monte Carlo sequential estimation of the kinematics (decoding). For each time k, a set of samples of the state x_k^i is generated, i = 1:N.
1. Predict new state samples: x_k^i = F_k x_{k-1}^i + ε_k, i = 1:N
2. For each neuron j:
   1) Estimate the conditional firing rate: λ_k^{i,j} = f^j(k^j · x_k^i), i = 1:N
   2) Update the weights: w_k^{i,j} = p(N_k^j | λ_k^{i,j}), i = 1:N
3. Draw the weight for the joint posterior density: W_k^i = Π_j w_k^{i,j}, i = 1:N


4. Normalize the weights: W_k^i ← W_k^i / Σ_i W_k^i, i = 1:N
5. Draw the joint posterior density: p(x_k | N_{1:k}) ≈ Σ_{i=1}^N W_k^i κ(x_k − x_k^i), where κ is the Gaussian (Parzen) kernel
6. Estimate the state x_k* from the joint posterior density by MAP or by the expectation.
7. Resample x_k^i according to the weights W_k^i.

Monte Carlo SE Decoding Results in Spike Domain

In this section, we show the BMI decoding results obtained directly in the spike domain by implementing the Monte Carlo sequential estimation framework. We first preprocessed the 185 channels of neuron spiking times as a 0/1 point process. For each neuron in the ensemble, an optimum time interval of 10 ms was selected to construct the point process observation sequence. With this interval, 94.1% of the intervals containing spikes had only a single spike. For each time interval and each channel, 1 was assigned when there were one or more spikes; otherwise 0 was assigned. The 185 multi-channel spike trains generated were 1750 seconds long. The recorded 2-D position vector p is interpolated to be synchronized with the spike trains. The velocity v is derived as the difference between the current and previous positions, and the acceleration a is derived the same way from the velocity.

Here, the state vector is chosen as the instantaneous kinematic vector x = [p_x v_x a_x p_y v_y a_y]^T, to be reconstructed directly from the spike trains, rather than only the velocity during a window when a spike appears. The kinematic vector therefore contains more information about positions, velocities, and accelerations. As we discussed in the tuning analysis section, the information-theoretic tuning depths computed from each kinematic variable can be different, indicating that there are neurons tuned specifically to a particular


kind of kinematics. Using only one kinematic variable might leave out important information between the neural spikes and the other kinematics.

After data preprocessing, the kinematics model F_k can be estimated using the least squares solution as shown in Equation 2-6. Notice that carefully choosing the parameters in the noise estimation (the noise distribution p(η) in Monte Carlo SE) can affect the algorithm performance. However, since we have no access to the desired kinematics in the test data set, the parameters were estimated from the training data sets. In the Monte Carlo SE model, the noise distribution p(η) is approximated by the histogram of η_k = x_k − F_k x_{k−1}. The resolution parameter was experimentally set to 100 to approximate the noise distribution. The regularization factor ε in the tuning function was experimentally set to 10^−7 for this analysis. The remaining parameters in Monte Carlo SE are the kernel size, selected as 0.02, and the number of particles n_x, experimentally set to 1000, a reasonable compromise between computational time and estimation performance. The kernel size should be chosen carefully so as not to lose the characteristics of the tuning curve while still minimizing ripples in the estimated density.

The Monte Carlo SE algorithm produces stochastic outputs because of the Poisson spike generation model. It also introduces variations between realizations, even with fixed parameters, due to the estimation of the posterior distribution with the particles. Table 4-1 shows reconstruction results on a 1000-sample test segment (time index from 25401 to 26400) of neural data. Correlation Coefficients (CC) and Normalized Mean Square Error (MSE normalized by the power of the desired signal) between the desired signal and the estimations are evaluated for the Monte Carlo SE using 20 realizations. We show the mean and


the standard deviation among realizations, together with the best and the worst performance obtained by a single realization.

Our approach resulted in reasonable reconstructions of the position and the velocity. The position shows the best correlation coefficient with the true trajectory. This result may be due to the fact that the velocity and the acceleration were derived as differential variables, where the noise in the estimation might be magnified. The Monte Carlo SE obtains the tuning function nonlinearity for each neuron from the training data and estimates the kinematics without any restriction on the posterior density. The average correlation for the position is 0.8058 ± 0.0111 along x and 0.8396 ± 0.0124 along y. The average correlation for the velocity is 0.7945 ± 0.0104 along x and 0.7381 ± 0.0057 along y. We notice that although Monte Carlo SE introduces differences in the reconstruction among realizations due to stochasticity, the variance of the results is quite small.

Figure 4-2 zooms in on the first 100 samples of the reconstructed kinematics to show the modeling accuracy in more detail. The left and right columns of plots display the reconstructed kinematics for the x-axis and the y-axis. The 3 rows of plots illustrate, from top to bottom, the reconstructed position, velocity, and acceleration. In each plot, the red dashed line is the desired signal and the blue line is the kinematics reconstructed by one trial of Monte Carlo SE. The gray area in each plot represents the posterior density estimated by the algorithm over time, where darker areas represent higher values. As the value of the posterior density decreases to 0, the color of the dots fades to white. Figure 4-2 shows the effectiveness of Monte Carlo SE in generating samples whose density follows the trajectory. The desired signal falls almost always within the high-probability range of the posterior density, which demonstrates the good tracking ability of Monte Carlo SE.
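The decoding stage (Step 3 above) can be sketched as a particle filter specialized to 0/1 point-process observations. This is a minimal sketch with our own names, assuming the state model F, a pool of residual noise samples, and per-neuron tuning functions have already been estimated from training data:

```python
import numpy as np

def mc_se_decode(spikes, F, eta, tuning, n_particles=1000, rng=None):
    """Monte Carlo sequential estimation for point processes.
    spikes: (T, C) 0/1 observations; F: (d, d) state matrix;
    eta: (M, d) residual samples standing in for the state-noise density;
    tuning: list of C callables mapping a state batch (N, d) -> firing prob (N,).
    Returns the (T, d) expectation (collapsed posterior mean) of the state."""
    rng = rng or np.random.default_rng(0)
    T, C = spikes.shape
    d = F.shape[0]
    x = np.zeros((n_particles, d))                  # particles
    est = np.zeros((T, d))
    for k in range(T):
        # predict: propagate particles and add noise drawn from the residuals
        noise = eta[rng.integers(0, len(eta), n_particles)]
        x = x @ F.T + noise
        # weight by the likelihood of each neuron's 0/1 bin, assuming
        # conditional independence across neurons (product of marginals)
        logw = np.zeros(n_particles)
        for j in range(C):
            lam = np.clip(tuning[j](x), 1e-6, 1 - 1e-6)   # firing probability
            logw += np.log(lam) if spikes[k, j] else np.log1p(-lam)
        w = np.exp(logw - logw.max())
        w /= w.sum()                                # normalized joint weights
        est[k] = w @ x                              # collapse: posterior mean
        # resample particles according to the weights
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return est
```

For a MAP estimate one would instead evaluate a Parzen density over the weighted particles and take its maximizer, as in Step 6.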


Since the desired signal in the test set data is formally unknown, it is not reasonable to just pick the best realization to present the reconstruction results. Here, we choose the performance averaged among realizations as the reconstruction result of Monte Carlo SE. Figure 4-3 shows the averaged performance of Monte Carlo SE in reconstructing the kinematics from all 185 neuron spike trains for 1000 test samples. The left and right columns of plots display the reconstructed kinematics for the x-axis and the y-axis. The 3 rows of plots illustrate, from top to bottom, the reconstructed position, velocity, and acceleration. In each subplot, the red line indicates the desired signal and the blue line indicates the expectation estimate. The correlation coefficients between the desired signal and the estimations are shown in Table 4-2.

We further examined the statistical performance on 8000 test samples (80 seconds) of neural data. The performance averaged among the decoding results from 20 Monte Carlo trials is chosen as the reconstruction result of Monte Carlo SE. CC and NMSE were both evaluated with an 800-sample-long window with 50% overlap. The reconstruction performance is shown in Table 4-3.

As for the figure of merit for reconstruction, the correlation coefficient has been the preferred metric to compare movement reconstruction between different experimental data sets in BMIs [Wessberg et al., 2000]. However, it may not be sufficient to evaluate the accuracy of a BMI algorithm, since a bias in position means that a different point in the external space will be targeted; the rating criterion should take this bias into consideration to properly compare reconstruction models. Notice also that the correlation coefficient obtained for the acceleration is quite low. However, if we visually check the reconstruction results in Figure 4-3, the algorithm actually follows the trend of the desired signal closely.
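The two figures of merit used throughout — CC and NMSE over 800-sample windows with 50% overlap — can be computed as in this short sketch (the function name is ours):

```python
import numpy as np

def windowed_cc_nmse(desired, estimated, win=800, overlap=0.5):
    """Correlation coefficient, and MSE normalized by the power of the
    desired signal, evaluated on overlapping windows of two 1-D signals."""
    step = int(win * (1 - overlap))
    ccs, nmses = [], []
    for start in range(0, len(desired) - win + 1, step):
        d = desired[start:start + win]
        e = estimated[start:start + win]
        ccs.append(np.corrcoef(d, e)[0, 1])            # trend agreement
        nmses.append(np.mean((d - e) ** 2) / np.mean(d ** 2))  # scale/bias aware
    return np.array(ccs), np.array(nmses)
```

Note how the two metrics differ: CC is invariant to scale and offset, while NMSE penalizes exactly the amplitude bias discussed in the text.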
The problem with the NMSE for BMIs is that the results do not look as good, with errors sometimes bigger than the power


of the trajectory. This can be observed in Figure 4-3, where the reconstructed position seems to have a different scale than the desired trajectory. Therefore, NMSE is also chosen as another criterion to evaluate the tracking accuracy of the animal's true movement trajectory.

Parameter Study for Monte Carlo SE Decoding in Spike Domain

Although the results are interesting, Monte Carlo SE for spike modeling needs to be further developed. These models are substantially more complex than the ones for random processes, and many parameters are assumed and need to be estimated with significant design expertise. There are 4 parameters in Monte Carlo SE for point processes that need to be tuned. Three of them occur during the encoding process (training stage): the regularization factor ε in the inverse of the kinematics correlation matrix (default 10^−7), the kernel size σ used to smooth the nonlinearity (default 0.02), and the resolution parameter used to approximate the noise distribution p(η) of the state dynamic model (default 100). The fourth parameter occurs in the decoding process: the number of samples n_x of particles x_k^i in the posterior density estimation (default 1000). Therefore we evaluate the encoding/decoding performance as a function of these parameters. For each parameter, 5 different values are tried with all the other parameters set at their default values.

Regularization factor ε. It is used to calculate the inverse of the correlation matrix of the kinematics, (E[x x^T] + εI)^{−1}. The parameter ε should be a small positive number in order to properly condition the inverse of the correlation matrix when the minimal eigenvalue is close to 0. However, it should be insignificant compared to the maximal eigenvalue of the correlation matrix; otherwise it would disturb the eigenvalue structure. Notice that one way to experimentally set a proper ε is to check how ε affects the error between the linear model output and the desired signal. Here we set ε = [0, 10^−7, 10^−5, 10^−3, 10^−1].
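The conditioning role of ε can be illustrated on a toy correlation matrix (our own construction, not the dissertation's data), sweeping the same ε values:

```python
import numpy as np

# Toy correlation matrix with a near-zero minimal eigenvalue, mimicking
# strongly correlated kinematic channels (e.g., two nearly identical columns).
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 2))
X = np.column_stack([A[:, 0], A[:, 0] + 1e-4 * A[:, 1], A[:, 1]])
R = X.T @ X / len(X)

for eps in [0.0, 1e-7, 1e-5, 1e-3, 1e-1]:
    Rreg = R + eps * np.eye(3)                 # the (E[x x^T] + eps*I) term
    print(f"eps = {eps:g}: condition number = {np.linalg.cond(Rreg):.3g}")
```

A tiny ε already collapses the condition number while barely perturbing the dominant eigenvalues; an ε comparable to the largest eigenvalue would distort the least squares solution itself, which matches the behavior reported for values above 10^−5.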
The error between the linear model output and the desired signal for the different values of ε is shown in


Figure 4-4. As before, the left and right columns of plots display the reconstructed kinematics for the x-axis and the y-axis. The 3 rows of plots illustrate, from top to bottom, the error for the position, the velocity, and the acceleration. We can see that when ε is smaller than 10^−5, there is almost no significant difference between the errors. However, since we only have access to training data, a very small value (10^−7) is safer for the test data.

The resolution parameter for p(η). It is used in approximating the noise distribution of the state dynamic model x_k = F_k x_{k−1} + η_k. The resolution (density) is the number of samples used to approximate the cdf of the noise distribution during training. The greater the density, the better the approximation to the true cdf, at the cost of more computation. Here we set density = [20, 50, 100, 200, 500]. Figure 4-5 shows the cdf of the noise distribution obtained from the training set using different density values. We can see that when the density is larger than 100, the cdf lines overlap. Therefore 100 is a proper choice to approximate the cdf of the noise distribution in our experimental data.

Kernel size σ. It is used to smooth the nonlinearity in the tuning estimation. Here we only study the kernel size for the important neurons, which contribute most to shaping the posterior density of the kinematics. If the kernel size is too small, there will be ripples in the conditional pdf, which brings a large variance to the nonlinearity estimation. If the kernel size is too big, it will smooth out the difference between the joint pdf and the marginal pdf, which results in underestimation of the conditional pdf. Here we set σ = [0.005, 0.01, 0.02, 0.05, 0.1]. Figure 4-6 shows the nonlinearity of neuron 72 (one of the important tuned neurons) smoothed with different kernel sizes. We can see that when σ is 0.005, there are a few ripples on the nonlinear tuning curve. Even when σ is 0.01, there are still ripples at both extreme ends due to insufficient samples.
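The kernel-size trade-off — ripples when σ is too small, oversmoothing when it is too large — can be reproduced with a one-dimensional Parzen estimate. A toy sketch on synthetic samples (our own example, not the recorded data), sweeping a subset of the σ values above:

```python
import numpy as np

def parzen(samples, grid, sigma):
    """Gaussian Parzen window density estimate evaluated on a grid."""
    d = (grid[:, None] - samples[None, :]) / sigma
    return np.exp(-0.5 * d ** 2).sum(axis=1) / (len(samples) * sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.1, 300)        # stand-in for projected kinematics z = k . x
grid = np.linspace(-0.5, 0.5, 401)
true = np.exp(-0.5 * (grid / 0.1) ** 2) / (0.1 * np.sqrt(2 * np.pi))

for sigma in [0.005, 0.01, 0.02, 0.05, 0.1]:
    err = np.mean((parzen(samples, grid, sigma) - true) ** 2)
    # small sigma -> high-variance ripples; large sigma -> oversmoothing bias
    print(f"sigma = {sigma:>5}: mean squared density error = {err:.4f}")
```

On this toy problem the error is large at both extremes and smallest for intermediate σ, mirroring the empirical choice of 0.02 in the text.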
When σ is too big (0.05 and 0.1), the tuning curve is underestimated. We checked this for all


neurons, focusing especially on the important tuned neurons; 0.02 is an empirical middle ground for smoothing the nonlinearity in the tuning.

The sample number n_x. The number of particles x_k^i in the posterior density estimation is the only free parameter during the decoding process. This parameter controls the accuracy of the posterior density estimation at each time index. It also brings the main drawback of the approach, the high computational complexity, because each of the samples must be evaluated to construct the shape of the posterior density. Here we set the sample number n_x = [200, 500, 1000, 1500, 2000]. Figure 4-7 shows the decoding results, averaged over 20 Monte Carlo trials, of the kinematics reconstruction with different n_x. The left and right columns of plots display the reconstructed kinematics for the x-axis and the y-axis. The 3 rows of plots illustrate, from top to bottom, the reconstruction performance for the position, the velocity, and the acceleration. In each plot, the x-axis shows the value of n_x. The blue solid line is the CC between the reconstruction and the desired signal; the green dashed line is the NMSE between the reconstruction and the desired signal. We can see that the CCs do not change appreciably for any of the kinematics even with much higher n_x, but the NMSE clearly shows a decreasing trend as n_x grows. Although the performance converges for very large values of n_x, such values would also bring a large computational burden to decoding. To compromise between accuracy and computational complexity, we choose 1000 samples, where the decoding of most of the kinematic variables starts to converge.

Synthetic Averaging by Monte Carlo SE Decoding in Spike Domain

The Monte Carlo sequential estimation for point processes contains two sources of stochasticity: the generation of the samples to reconstruct the posterior density, and the very nature of the single-neuron firings, which is modeled as a Poisson point process.
While the former is dealt with by the Monte Carlo method (averaging several realizations), the latter is still present


in our results due to the coarse spatial sampling of neural activity produced by the limited number of electrodes. This coarse sampling has two basic consequences. First, the multi-electrode array collects activity from only some of the neural assemblies, which means that the Monte Carlo sequential estimation model output will have an error produced by not observing all the relevant neural data. This problem will always be present due to the huge difference between the number of motor cortex neurons and the number of electrodes. Second, even when a given neural assembly is probed by one or a few neurons, it is still not possible to achieve accurate modeling, due to the stochasticity embedded in the time structure of the spike trains. To remove it, one would have to access the intensity functions of the neural assemblies that are transiently created in motor cortex for movement planning and control, which are deterministic quantities.

This means that every neuron belonging to the same neural assembly will display slightly different spike timing, although they share the same intensity function. Since each probed neuron drives an observation model in the BMI, there will be a stochastic term in the output of the BMI (kinematics estimation) that can only be removed by averaging over the neural assembly populations. However, we can attempt to decrease this variance by estimating the intensity function from the probed neuron, generating several synthetic spike trains from it, using them in the observation model, and averaging the corresponding estimated kinematics. Since this averaging is done in the movement domain (and provided the process does not incur a bias in the estimation of the intensity function), the time resolution is preserved while the variance is decreased. We call this procedure synthetic averaging, and it attempts to mimic the population effect in the cortical assemblies.
This averaging is rather different from the time averaging performed in binning, which loses time resolution in the reconstructed kinematics.


The synthetic spike trains are generated by an inhomogeneous Poisson process with a mean value given by the estimated intensity function obtained by kernel smoothing. This is repeated for each neuron in the array. During testing, these synthetic spike trains play the same role as the true spike trains in predicting the kinematics on-line. Of course this increases the computation time proportionally to the number of synthetic spike trains created. In a sense, we are trying to use computer power to offset the limitations of probing relatively few neurons in the cortex. Since the errors in prediction have a bias and a variance which are not quantified, it is unclear at this point how much better the performance will become, but this will be addressed in the validation.

As analyzed in the previous section, in order to deal with the intrinsic stochasticity due to the randomness of the spike trains, we proposed the synthetic averaging idea to mimic the neuron population effect. Instead of decoding only from the current spike trains, we use a Poisson generator to obtain 20 sets of spike trains from each neuron as synthetic plausible observations representing an ensemble of neurons firing with the same intensity function. This firing intensity function is estimated by kernel smoothing of each recorded spike train. The kernel size is experimentally set to 0.17. In order to preserve the timing resolution, the averaging is performed across the estimated kinematics of each group (including the output of the true spike train). Table 4-4 compares the performance of Monte Carlo SE averaged over 20 realizations on the recorded real spike trains against the averaged performance over Monte Carlo and synthetic data (20 sets of regenerated spike trains, 20 Monte Carlo trials for each set) on the same segment of test data (time index 215401 to 216400).
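The synthetic-train generator can be sketched as follows. This is our own minimal reading of the procedure: at the 10-ms bin resolution, the 0.17-s kernel corresponds to roughly 17 bins, and the inhomogeneous Poisson generator reduces to a Bernoulli draw per bin; the function names are ours.

```python
import numpy as np

def smooth_intensity(spikes, sigma_bins):
    """Kernel-smoothed firing intensity (probability per bin) of a 0/1 train."""
    half = int(4 * sigma_bins)
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()                     # unit area: preserves spike count
    return np.convolve(spikes, kernel, mode="same")

def synthetic_trains(spikes, n_sets, sigma_bins=17, rng=None):
    """Draw n_sets synthetic 0/1 trains from the smoothed intensity:
    an inhomogeneous Poisson generator at the bin resolution."""
    rng = rng or np.random.default_rng(0)
    lam = np.clip(smooth_intensity(spikes, sigma_bins), 0.0, 1.0)
    return (rng.random((n_sets, len(spikes))) < lam).astype(int)
```

Each synthetic train is then decoded exactly like the recorded one, and the resulting kinematic estimates are averaged, so the averaging happens in the movement domain and the bin-level time resolution is untouched.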
Both approaches, as well as the deterministic performance, resulted in reconstructions with similar correlation coefficients. However, the average over synthetic data shows smoother


kinematics reconstruction, with reduced NMSE compared to the performance averaged over 20 Monte Carlo trials on the original spike trains. The NMSE is reduced by 26% for the position along x, 18% for the position along y, and on average 15% for all 6 kinematic variables. Therefore we can conclude that the reconstruction accuracy measured by NMSE has a large component due to the variance intrinsic in the spike firing, which does not affect the general trend of the reconstructed signal as measured by the CC.

We further compared the statistical performance of both approaches on 8000 test samples of neural data. The performance averaged among the decoding results from the 20 sets of regenerated spike trains is chosen as the synthetic averaging reconstruction result. We run the decoding process for 20 Monte Carlo trials on each set of synthetic spike trains. CC and NMSE were both evaluated with an 800-sample-long window with 50% overlap. For each segment of data, a pair-wise Student t-test was performed to see whether the synthetic averaging (SA) results are statistically different from the averaged performance on the recorded neuron spike trains alone (MCSE). The test is performed against the one-sided alternatives CC_SA > CC_MCSE and NMSE_SA < NMSE_MCSE for each kinematic variable. All the tests are performed on the null hypothesis at the α = 0.05 significance level. Under the null hypothesis, the probability of observing a value as extreme or more extreme than the test statistic, as indicated by the p-value, is shown in Table 4-5.

Except for the position x and the velocity y in this first case, we could not conclude that the CC of synthetic averaging is significantly larger than that of Monte Carlo SE (p > 0.05), as statistically verified using the t-test. In terms of NMSE, however, the t-test verifies that the


synthetic averaging reconstruction is statistically better than the Monte Carlo SE alone for most kinematic variables. This result demonstrates that using the simulated neuron population attenuates the variability intrinsic in the coarse sampling of a given neural population, effectively trading computation for the lack of more neural channels belonging to the same neural population. However, this procedure only reduces the part of the kinematics estimation error that is due to the variance of the recorded spike train. It cannot do anything against the lack of information produced by the coarse sampling of other neural populations involved in the movement but not sampled at all. On the other hand, the procedure creates a modeling bias, because the intensity function is estimated from a single neuron, but this bias is very difficult to quantify. Since the results improve as measured by NMSE, overall the synthetic averaging method gains more than it loses. Compared with the averaging done in time by binning, the averaging in the kinematics domain bypasses the loss-of-resolution problem and still smoothes the reconstruction.

Decoding Results Comparison Analysis

Several signal-processing approaches have been applied to predict movements from neural activity. Many decoding methodologies use binned spike trains to predict movement based on linear or nonlinear optimal filters [Wessberg et al., 2000; Sanchez et al., 2002b; Kim et al., 2003]. These methods avoid the need for explicit knowledge of the neurological dynamic encoding properties, and standard linear or nonlinear regression is used to fit the relationship directly in the decoding operation. Yet another methodology can be derived probabilistically using a state model within a Bayesian formulation [Schwartz et al., 2001; Wu et al., 2006; Brockwell et al., 2004], as we did in our Monte Carlo SE for point processes. The difference is that all the previous algorithms are coarse approaches that do not exploit spike timing resolution, due to binning, and may exclude rich neural dynamics from the modeling. Monte Carlo SE for point


process decodes the movement in the spike domain. It is important to compare our algorithm to other Bayesian approaches that have been applied to BMI in terms of their different assumptions and decoding performance.

Decoding by Kalman

The Kalman filter has been applied to BMIs [Wu et al., 2006] to reconstruct the kinematics as the state from a continuous representation of neural activity (i.e., using binned data). Seen as a Bayesian approach, the 2 basic assumptions of the Kalman filter are linearity and a Gaussian distributed posterior density. In other words, both the kinematic dynamic model and the tuning function are assumed to be strictly linear, and the posterior density of the kinematic state given the current neural firing rates is Gaussian distributed at each time index. In this way, the posterior density can be represented in closed form with only 2 parameters, the mean and the variance of the pdf. To apply the Kalman filter to our BMI data, the state dynamics remain the same:

x_k = F_k x_{k−1} + η_k    (4-4)

where F_k establishes the dependence on the previous state and η_k is zero-mean Gaussian distributed noise with covariance Q_k. F_k is estimated from the training data by the least squares solution. Q_k is estimated as the variance of the error between the linear model output and the desired signal. The tuning function is linearly defined as

λ_t = H x_{t−lag} + n_t    (4-5)

where λ_t is the firing rate obtained by 100-ms window binning, and x_t is the instantaneous kinematic vector defined as x_t = [p_x v_x a_x p_y v_y a_y 1]^T, with the 2-dimensional information of position, velocity, and acceleration, and a bias term. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation effects of signals through the motor


and peripheral nervous systems. Here it is experimentally set to 200 ms [Wu et al., 2006; Wang et al., 2007b]. n_k is zero-mean Gaussian distributed noise with covariance R_k. The weight estimation of the linear filter H is given from the training data by

H = (E[x_{t−lag} x_{t−lag}^T])^{−1} E[x_{t−lag} λ_t]    (4-6)

Equation 4-6 represents the least squares solution for the linear tuning function. The kinematic vector is then derived as the state from the observation of the firing rates in the test set by Equations 4-7a-e:

x̂_{k|k−1} = F_k x̂_{k−1|k−1}    (4-7a)
P_{k|k−1} = F_k P_{k−1|k−1} F_k^T + Q_k    (4-7b)
K_k = P_{k|k−1} H^T (H P_{k|k−1} H^T + R_k)^{−1}    (4-7c)
P_{k|k} = (I − K_k H) P_{k|k−1}    (4-7d)
x̂_{k|k} = x̂_{k|k−1} + K_k (λ_k − H x̂_{k|k−1})    (4-7e)

Decoding by Adaptive Point Process

Adaptive filtering of point processes provides an analytical solution to state estimation in the spike domain. It therefore requires a parametric model of the neuron tuning in closed form. Many different functional forms of tuning have been proposed, consisting mostly of linear projections of the neural modulation on 2 or 3 dimensions of kinematic vectors and a bias. Moran and Schwartz [1999] also introduced a linear relationship from motor cortical spiking rate to speed and direction. Brockwell et al. [2003] assumed an exponential tuning function for their motor cortical data. Here we have tried both tuning functions on our BMI data.

Exponential tuning

The exponential tuning function is estimated from 10000 samples of the training data as

λ_t = exp(H x_{t−lag})    (4-8)


spike_t ~ Poisson(λ_t)    (4-9)

where λ_t is the firing probability for each neuron, obtained by smoothing the spike train with a Gaussian kernel. The kernel size is empirically set to 0.17 in the experiment [Wang et al., 2007c]. x_t is the instantaneous kinematic vector defined as x_t = [p_x v_x a_x p_y v_y a_y 1]^T, with the 2-dimensional information of position, velocity, and acceleration, and a bias. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation effects of signals through the motor and peripheral nervous systems. Here it is experimentally set to 200 ms as well [Wu et al., 2006; Wang et al., 2007c]. The weight estimation of the linear filter H is given from the training data by

H = (E[x_{t−lag} x_{t−lag}^T])^{−1} E[x_{t−lag} log(λ_t)]    (4-10)

Equation 4-10 represents the least squares solution for the linear adaptive filter in log-likelihood form. During operation, some firing rates will most likely be close to 0, which results in extremely negative logarithms. Therefore, we add a small positive number, defined as 10% of the mean firing rate during training for each neuron, which keeps the firing rate always positive. The exponential tuning function in Equation 4-8 defines the first and second derivative terms in Equations 2-7c and 2-7d as

∂ log λ_t / ∂x_{t−lag} = H^T    (4-11)
∂² log λ_t / ∂x_{t−lag} ∂x_{t−lag}^T = 0    (4-12)

The kinematic vector is then derived as the state from the observation of the multi-channel spike trains for the test samples by Equations 2-7a-d in Chapter 2.
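Under the exponential tuning model, where the Hessian term (Equation 4-12) vanishes, the point-process adaptive filter of Equations 2-7a-d reduces to a simple recursion. A sketch of the standard update with our own variable names (delta is the bin width, assumed to be 10 ms):

```python
import numpy as np

def pp_adaptive_filter(spikes, F, Q, H, delta=0.01, x0=None, P0=None):
    """Point-process adaptive filter with exponential tuning lam = exp(H x).
    spikes: (T, C) 0/1 observations; F, Q: linear-Gaussian state model;
    H: (C, d) tuning weights. With exponential tuning, the Hessian of
    log lam is zero, so only the gradient term H enters the covariance."""
    T, C = spikes.shape
    d = F.shape[0]
    x = np.zeros(d) if x0 is None else x0.copy()
    P = np.eye(d) if P0 is None else P0.copy()
    out = np.zeros((T, d))
    for k in range(T):
        # time update (prediction with the state model)
        x = F @ x
        P = F @ P @ F.T + Q
        # measurement update: one Poisson term per neuron
        lam = np.exp(H @ x)                          # conditional intensity
        W = np.diag(lam * delta)                     # expected counts per bin
        P = np.linalg.inv(np.linalg.inv(P) + H.T @ W @ H)   # posterior covariance
        x = x + P @ H.T @ (spikes[k] - lam * delta)  # innovation on spike counts
        out[k] = x
    return out
```

The innovation here compares the observed 0/1 count in each bin with the expected count λΔ, which is the point-process analogue of the Kalman residual in Equation 4-7e.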


Kalman point process

Notice that when a linear tuning function is selected for the observation model, together with a Gaussian assumption for the posterior density, the end result is actually a Kalman filter in the spike domain, which will be called the Kalman filter for point processes (PP). Here the linear tuning function is estimated from 10000 samples of the training data as

λ_t = h x_{t−lag} + B    (4-13)
spike_t ~ Poisson(λ_t)    (4-14)

where λ_t is the firing probability for each neuron, obtained by smoothing the spike train with a Gaussian kernel. The kernel size is empirically set to 0.17 in the experiment [Wang et al., 2007c]. x_t is the instantaneous kinematic vector defined as x_t = [p_x v_x a_x p_y v_y a_y]^T, with the 2-dimensional information of position, velocity, and acceleration. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation effects of signals through the motor and peripheral nervous systems. Here it is experimentally set to 200 ms [Wu et al., 2006; Wang et al., 2007c]. We extend the kinematic vector to x_t = [p_x v_x a_x p_y v_y a_y 1]^T to include the bias B, which can then be regarded as part of the weights of the linear filter H. The tuning function is then λ_t = H x_{t−lag}. The weight estimation of the linear filter H is given by

H = (E[x_{t−lag} x_{t−lag}^T])^{−1} E[x_{t−lag} λ_t]    (4-15)

Equation 4-15 represents the least squares solution for the linear adaptive filter, where E[x_{t−lag} x_{t−lag}^T] gives the autocorrelation matrix R of the input kinematic vector considering a causal time delay, and E[x_{t−lag} λ_t] gives the cross-correlation vector P between the input and the


firing probability. The linear tuning function in Equation 4-13 defines the first and second derivative terms in Equations 2-7c and 2-7d in Chapter 2 as

∂ log λ_t / ∂x_{t−lag} = H^T / λ_t    (4-16)
∂² log λ_t / ∂x_{t−lag} ∂x_{t−lag}^T = −H^T H / λ_t²    (4-17)

The kinematic vector is then derived as the state from the observation of the multi-channel spike trains for the test samples by Equations 2-7a-d in Chapter 2.

Performance Analysis

Our Monte Carlo SE for point processes is designed to estimate the kinematic state directly from spike trains. The posterior density is estimated non-parametrically, without Gaussian assumptions, which allows the state model and the observation model to be nonlinear. It is important to compare the performance of the Monte Carlo SE with the other algorithms on the same data set to validate all the assumptions. First, to evaluate the performance advantages of a nonlinear and non-Gaussian model, we compare it with the Kalman PP, which works in the spike domain with a linear tuning function and assumes a Gaussian distributed posterior density. Second, the Monte Carlo SE utilizes a tuning function that is estimated non-parametrically, directly from the data. It is therefore interesting to compare the decoding performance across different tuning models, such as the Gaussian tuning curve and the exponential tuning curve. Third, all the algorithms assume a stationary tuning function between the training and test data sets. Studying the decoding performance separately in training and testing provides some idea of how the tuning function could be changing over time. Fourth, the following question should be asked: how does the performance in the spike domain compare to working on


the conventional spike rates? These questions are analyzed in detail in the following sections.

Nonlinear & non-Gaussian vs. linear & Gaussian

The point process adaptive filtering with a linear observation model and a Gaussian assumption (Kalman filter PP) and the proposed Monte Carlo SE framework were both tested and compared in a BMI experiment for the 2-D control of a computer cursor using 185 motor cortical neurons [Nicolelis et al., 1997; Wessberg et al., 2000], as before. After data preprocessing, the kinematics model F_k for both algorithms can be estimated using the least squares solution. Notice that carefully choosing the parameters in the noise estimation (the covariance Q_k in the Kalman PP and the noise distribution p(η) in Monte Carlo SE) can affect the algorithm performance. However, since we have no access to the desired kinematics in the test data set, the parameter estimates of both algorithms were obtained from the training data sets. For the Kalman filter PP, the noise in the kinematics model (Equation 2-6) is approximated by a Gaussian distribution with covariance Q_k. In the Monte Carlo SE model, the noise distribution p(η) is approximated by the histogram of η_k = x_k − F_k x_{k−1}. The resolution parameter was experimentally set to 100 to approximate the noise distribution. The regularization factor ε in the tuning function was experimentally set to 10^−7 for this analysis. The remaining parameters in Monte Carlo SE are the kernel size, selected as 0.02, and the number of particles n_x, experimentally set to 1000, a reasonable compromise between computational time and estimation performance. This kernel size is chosen carefully so as not to lose the characteristics of the tuning curve, as studied before.

As analyzed before, both algorithms produce stochastic outputs because of the Poisson spike generation model. However, the Kalman filtering PP has an analytical solution


with recursive, closed-form equations. We set the initial state x_0 to the zero vector, and the state variance P_{0|0} is estimated from the training data. Once the initial condition and parameters are set, the state estimation is determined uniquely by the spike observations. The Monte Carlo SE approach, however, introduces variations between realizations even with fixed parameters, due to the estimation of the posterior distribution with the particles. Since the desired signal in the test set data is formally unknown, it is not reasonable to just pick the best realization to present the reconstruction results. Here, we choose the averaged performance among realizations as the reconstruction result of the Monte Carlo SE and compare it with the Kalman filter PP results. Table 4-6 shows reconstruction results on a 1000-sample test segment (shown in Figure 4-7) of neural data. Correlation Coefficients (CC) and Normalized Mean Square Error (NMSE) between the desired signal and the estimations are evaluated for the Kalman filter PP as well as for the Monte Carlo SE using 20 realizations of the posterior. For the second approach we also show the mean and standard deviation among realizations, together with the best and the worst performance obtained by a single realization. Both approaches resulted in reasonable reconstructions of the position and the velocity. The position shows the best correlation coefficient with the true trajectory. This result may be due to the fact that the velocity and the acceleration were derived as differential variables, where the noise in the estimation might be magnified. Although the Kalman filter PP assumes a Gaussian posterior and a simple linear model for both the kinematic dynamic system and the tuning function, it obtains a reasonable reconstruction of the position and the velocity: for the position, CC = 0.7422 for the x direction and CC = 0.8264 for the y direction; the velocity shows CC = 0.7416 for x and CC = 0.6813 for y.
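The two figures of merit used throughout this analysis, and the averaging over realizations, can be sketched as follows. This is a minimal sketch: the helper names are ours, and NMSE is assumed to be the mean square error normalized by the variance of the desired signal.

```python
import numpy as np

def cc(desired, estimated):
    """Correlation coefficient between the desired and reconstructed trajectory."""
    return np.corrcoef(desired, estimated)[0, 1]

def nmse(desired, estimated):
    """Mean square error normalized by the power (variance) of the desired signal."""
    return np.mean((desired - estimated) ** 2) / np.var(desired)

def averaged_performance(desired, realizations):
    """Mean and standard deviation of CC and NMSE across Monte Carlo
    realizations, as reported for the Monte Carlo SE results."""
    ccs = np.array([cc(desired, r) for r in realizations])
    errs = np.array([nmse(desired, r) for r in realizations])
    return (ccs.mean(), ccs.std()), (errs.mean(), errs.std())
```

A perfect reconstruction gives CC = 1 and NMSE = 0; a trivial constant prediction at the signal mean gives NMSE = 1, which is why NMSE values above 1 (as for the acceleration) indicate estimates worse than the mean predictor.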
The Monte Carlo SE obtains the tuning function nonlinearity for each neuron from the training data and estimates the kinematics without any


restriction on the posterior density. The average correlation for the position along x is 0.8058 ± 0.0111 and along y is 0.8396 ± 0.0124. The average correlation for the velocity along x is 0.7945 ± 0.0104 and along y is 0.7381 ± 0.0057. The Monte Carlo SE is better than the Kalman filter PP in terms of both CC and NMSE. Figure 4-8A shows the reconstructed kinematics using both algorithms from all 185 neurons for 1000 testing samples. As before, the left and right panels depict the reconstructed kinematics for the x-axis and y-axis, respectively. The three rows of plots, from top to bottom, display the reconstructed position, velocity and acceleration, respectively. In each subplot, the red dashed line indicates the desired signal, the blue solid line the estimation by Monte Carlo SE, and the green dotted line the estimation by Kalman filtering PP. For clarity, Figure 4-8B also shows the 2-D reconstructed position for a segment of the testing samples by the two methods. The Monte Carlo approach offers the most consistent reconstruction in terms of both correlation coefficient and normalized mean square error. The simulation of both models with synthetic data provides important hints on how to interpret the results with real neural data. The linear tuning model of the Kalman filter PP provides less accuracy in the nonlinear region of the tuning function, which in turn affects the decoding performance. Moreover, the Kalman filter PP also assumes the posterior density is Gaussian; therefore both algorithms provide similar velocity estimation along y when both assumptions are verified. When the estimates from the two algorithms differ (often at the peaks of the desired signal), the Monte Carlo SE model usually performs better, due either to its better modeling of the neurons' nonlinear tuning or to its ability to track the non-Gaussian posterior density.
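One step of the Monte Carlo sequential estimation can be sketched as a particle filter over the kinematic state with a Poisson point-process likelihood for the multichannel spikes; the collapse of the Parzen-reconstructed posterior to its mean reduces to the weighted mean of the samples. This is a hedged sketch, not the dissertation's exact implementation: the `intensity` and `sample_noise` callables stand in for the nonparametrically estimated tuning functions and the histogram-based noise model.

```python
import numpy as np

def mcse_step(particles, F, sample_noise, intensity, spikes, dt, rng):
    """One Monte Carlo SE update for point-process observations.
    particles    : (n, d) state samples from the previous time step
    F            : (d, d) linear kinematic state-transition matrix
    sample_noise : callable n -> (n, d) samples of the state noise
    intensity    : callable (n, d) states -> (n, m) firing rates for m neurons
    spikes       : (m,) 0/1 spike observations in the current interval dt
    """
    n = particles.shape[0]
    # Propagate the samples through the kinematic state model.
    pred = particles @ F.T + sample_noise(n)
    # Weight each sample by the Poisson point-process likelihood of the
    # observed multichannel spikes: prod_j (lam_j dt)^dN_j * exp(-lam_j dt).
    lam = np.clip(intensity(pred) * dt, 1e-12, None)
    logw = (spikes * np.log(lam) - lam).sum(axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Collapse: the mean of the kernel-reconstructed posterior is the
    # weighted mean of the samples.
    estimate = w @ pred
    # Resample to avoid weight degeneracy.
    pred = pred[rng.choice(n, size=n, p=w)]
    return pred, estimate
```

Because the posterior is carried by the samples themselves, no Gaussian form is ever imposed; a multimodal or skewed posterior is represented as faithfully as the particle count allows.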


We further compared the statistical performance of both algorithms on 8000 test samples of neural data. The performance averaged among the decoding results from 20 sets of regenerated spike trains (20 realizations each) is chosen as the reconstruction result of the Monte Carlo SE. CC and NMSE were both evaluated with an 800-sample-long window with 50% overlap. For each segment of data, a pair-wise Student's t-test was performed to see whether the results are statistically different from the Kalman filter PP. The test is performed against the alternative specified by the left-tail test, H1: CC_Kalman < CC_MCSE (and correspondingly NMSE_Kalman > NMSE_MCSE), for each kinematic variable. All the tests are performed on the null hypothesis at the \alpha = 0.05 significance level. Under the null hypothesis, the probability of observing a value of the test statistic as extreme or more extreme, as indicated by the p-value, is shown in Table 4-7. Except for the x position and the y acceleration in this first case, the CC of the Monte Carlo SE for all other kinematic variables is significantly larger than that of the Kalman filter PP (p < 0.05), as statistically verified using the t-test. In terms of NMSE, however, the t-test verifies that the Monte Carlo SE reconstruction is statistically better than the Kalman filter PP for all kinematic variables.

Exponential vs. linear vs. LNP in encoding

We have shown two tuning models in implementing the adaptive filtering on point processes. Comparing the decoding performance of these two different encoding (tuning) models with the Gaussian distributed posterior density shows the importance of choosing an appropriate tuning model for the decoding methodology.
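For the exponential tuning model, \lambda_j = \exp(\mu_j + \beta_j^T x), the log intensity is linear in the state, so the first derivative in the adaptive point-process filter is the constant \beta_j and the second derivative vanishes, which makes the update especially simple. The sketch below is our own condensed form of the filter recursion (Equations 2-7a-d); the variable names are ours, not the dissertation's.

```python
import numpy as np

def exp_pp_filter_step(x, P, F, Q, beta, mu, dN, dt):
    """One adaptive point-process filter step with exponential tuning.
    x, P : previous state mean (d,) and covariance (d, d)
    F, Q : kinematic transition matrix and process-noise covariance
    beta : (m, d) tuning slopes, mu : (m,) log baseline rates
    dN   : (m,) observed spike counts in the current interval dt
    """
    # Prediction through the linear kinematic model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    lam = np.exp(mu + beta @ x_pred)  # predicted conditional intensities
    # Posterior information: each neuron adds lam_j*dt * beta_j beta_j^T;
    # the second-derivative term is zero for exponential tuning.
    info = np.linalg.inv(P_pred) + (beta.T * (lam * dt)) @ beta
    P_new = np.linalg.inv(info)
    # State update driven by the spike innovation dN - lam*dt.
    x_new = x_pred + P_new @ (beta.T @ (dN - lam * dt))
    return x_new, P_new
```

A spike from a neuron whose \beta points along a kinematic direction moves the state estimate along that direction, scaled by the posterior covariance, which is the point-process analogue of the Kalman innovation step.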


Both tuning models were implemented as BMI decoders in the spike domain. The point process generation is the same as described for the Kalman PP in the previous section. After data preprocessing, the parameter estimates of both algorithms were obtained from the training data sets. For the exponential filter PP, the noise in the kinematics model is the same as in the Kalman PP. We set the initial state x_0 to the zero vector, and the state variance P_{0|0} is estimated from the training data. Once the initial condition and parameters are set, the state estimation is determined uniquely by the spike observations. Table 4-8 shows the statistical reconstruction results on 8000 samples of test neural data. The NMSE between the desired signal and the estimations by exponential PP and Kalman PP is evaluated with an 8 sec window with 50% overlap, together with the performance of the Monte Carlo SE. The Kalman filter PP gives better performance in position y but worse performance in position x compared to the exponential PP. For all other kinematic variables, both encodings give similar performance. We can infer that the proper tuning function for decoding the kinematics on-line lies somewhere between the linear and exponential curves. The comparison with the Monte Carlo SE shows that the instantaneous tuning curves we evaluate directly from the data capture more information than both the linear and exponential curves, which provides the best decoding results. However, it is a very time-consuming operation, as described before.

Training vs. testing in different segments: nonstationary observation

As mentioned before, all the parameters of the tuning curves were estimated from the training data and remain the same in the testing segments. The big assumption behind this methodology is stationarity of the tuning properties over time, which may not be true.
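The windowed statistical comparison used throughout this chapter (scores over sliding windows with 50% overlap, compared by a left-tail paired t-test) can be sketched as follows. This is a hypothetical helper pair, assuming the window length is given in samples and that `scipy` supplies the one-sided paired test.

```python
import numpy as np
from scipy import stats

def windowed_cc(desired, estimated, win=800, overlap=0.5):
    """Correlation coefficient over sliding windows with fractional overlap."""
    step = int(win * (1 - overlap))
    return np.array([
        np.corrcoef(desired[s:s + win], estimated[s:s + win])[0, 1]
        for s in range(0, len(desired) - win + 1, step)
    ])

def left_tail_paired_test(scores_a, scores_b, alpha=0.05):
    """Paired t-test of H1: mean(scores_a) < mean(scores_b).
    Returns the p-value and whether H0 is rejected at the given level."""
    t, p = stats.ttest_rel(scores_a, scores_b, alternative='less')
    return p, bool(p < alpha)
```

Pairing the windows removes the shared segment-to-segment variability (easy versus hard stretches of movement), so the test compares the two decoders on exactly the same data.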
One way to test this assumption is to compare the performance among the training data and different


testing data. Here, the time index of the training set runs from 113500 ms to 193500 ms. The time index for testing set 1 runs from 213500 ms to 293500 ms, right after the training data. The second testing set is chosen from 1413500 ms to 1493500 ms, far from the training data. For each data set, statistical reconstruction results were computed on 8000 samples of neural data. Both CC and NMSE between the desired signal and the estimations by exponential PP, Kalman PP and Monte Carlo PP are evaluated with an 8 sec window with 50% overlap. Figures 4-9A and 4-9B show the performance trends between the training and the different test sets in terms of CC and NMSE, respectively. The left and right panels depict the reconstructed kinematics for the x-axis and y-axis, respectively. The three rows of plots, from top to bottom, display the reconstruction performance for position, velocity and acceleration, respectively. In each subplot, the green bar indicates the mean and variance of the estimation performance for the three data sets by Kalman filtering PP, the cyan line the statistical estimation performance by exponential filtering PP, and the blue line the statistical estimation performance by Monte Carlo SE. For both criteria, all the algorithms show clearly similar trends of statistical performance. The reconstruction on test data 1 is slightly worse than the reconstruction on the training data. However, on test data 2, which is quite far from the training data, the performance is much worse. This means the stationarity assumption between training and testing is questionable. It might hold in the testing segment right after training, because the change of the tuning properties is not yet obvious, but it results in poor estimation when the tuning properties change after some time. Therefore, the study of the non-stationary tuning property, and of the corresponding tracking in the decoding algorithm, is necessary.

Spike rates vs. point process

One way to test the decoding difference between continuous variables (spike rates) and point processes is to compare the performance of the Kalman filter and the Kalman PP on the same


segment of test data, because both filters have linear tuning and Gaussian distributed posterior density assumptions. The difference is that Kalman filtering reconstructs the kinematic state from a continuous representation of neural activity (the binned firing rate), while the Kalman PP works directly in the spike domain. For the Kalman filter, a 100 msec binning window is used to process the spike times into continuous firing rates for each neuron in the ensemble. For the Kalman PP, the preprocessing to construct the point process observation sequence remains the same as for the Monte Carlo SE. After data preprocessing, the kinematics model F_k for both algorithms can be estimated using the least squares solution. Notice that carefully choosing the parameters in the noise estimation (covariance Q_k in both Kalman and Kalman PP) could affect the algorithm performance. However, since we have no access to the desired kinematics in the test data set, the parameter estimates of both algorithms were obtained from the training data sets. The noise in the kinematics model is approximated by a Gaussian distribution with covariance Q_k. The Kalman filtering PP algorithm produces stochastic outputs because of the Poisson spike generation model. Both filters have analytical solutions with recursive, closed-form equations. We set the initial state x_0 to the zero vector, and the state variance P_{0|0} is estimated from the training data. Once the initial condition and parameters are set, the state estimation is determined uniquely by the spike observations. Table 4-9 shows the statistical reconstruction results on 8000 samples of training and 8000 samples of test segments of neural data. Since the desired signals of the Kalman filter and the Kalman PP are obtained differently, only the Correlation Coefficients (CC) between the desired signal and the estimations are evaluated here, with an 8 sec window with 50% overlap.
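The 100 msec binning preprocessing for the rate-based Kalman filter can be sketched as follows (a hypothetical helper of our own; spike times are assumed to be in milliseconds):

```python
import numpy as np

def bin_spike_times(spike_times, t_start, t_end, bin_ms=100.0):
    """Convert per-neuron spike-time arrays (ms) into binned firing rates
    (spikes/s), one row per neuron, as used by the rate-based Kalman filter."""
    edges = np.arange(t_start, t_end + bin_ms, bin_ms)
    counts = np.stack([np.histogram(st, bins=edges)[0] for st in spike_times])
    return counts / (bin_ms / 1000.0)
```

This is exactly the step the Kalman PP avoids: the bin counts discard where within each 100 ms window the spikes occurred, which is the timing information the point-process representation preserves.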


Both approaches resulted in reasonable reconstructions of the position and the velocity. The position shows the best correlation coefficient with the true trajectory. This result may be due to the fact that the velocity and the acceleration were derived as differential variables, where the noise in the estimation might be magnified. It is interesting to first notice that the Kalman filter obtains quite good results on the training set, while its performance drops much more from training to testing than the Kalman PP's does. In the Kalman filter for BMI, the firing rates obtained by binning techniques blur the exact timing information of the spike trains. The binning also serves as averaging, which makes the noise terms more Gaussian. This may make the Kalman filter over-fit the training set, while losing generality in testing because of the lack of spike timing information. Comparing the performance difference between training and testing for the Kalman PP, it shows no sign of model over-fitting, with position y even better predicted in testing than in training. This is because the Kalman PP works directly on the spike train, which carries higher-resolution timing information of the neural activity. However, working in the spike domain without averaging makes the assumption of a Gaussian distributed posterior density less well satisfied in the Kalman PP than in the Kalman filter. This is why the Kalman PP does not show better decoding performance, which does not necessarily mean the point process brings less information for decoding. When we compare the Kalman performance to the Monte Carlo SE for point processes in Table 4-10, where we have no Gaussian assumption, the Monte Carlo SE has better decoding results in position and velocity, as expected. The smaller CC of the reconstructed acceleration in the point process might be due to large peaks of the desired acceleration, as explained before, which is different from the desired acceleration of the Kalman filter.
Monte Carlo SE Decoding in Spike Domain Using a Neural Subset

The performance of BMI hinges on the ability to exploit information in chronically recorded neuronal activity. Since during the surgical phase there are no precise techniques to


target the modulated cells, the strategy has been to sample as many cells as possible from multiple cortical areas with known motor associations. In the experiment, we collected the activity of 185 neurons from 5 motor cortical areas and regard them as contributing equally to the current decoding process. Research has shown that different motor cortical areas play different roles in terms of movement planning and execution. Moreover, the time-consuming computation on all the neurons' information would bring a significant computational burden to implementing BMIs in low-power, portable hardware. We hypothesize that groups of neurons have different importance in BMI decoding, as suggested in previous work [Sanchez et al., 2003]. In Chapter 3, we showed that the information-theoretic analysis of the neuron tuning function can be a criterion to evaluate the amount of information between the kinematics and the neural spike trains; it therefore weights the importance among neurons in terms of a certain task or movement. Moreover, if the decoding algorithm operates on only the subset of important neurons associated with the movement behavior, it will improve the efficiency of BMIs on large amounts of brain activity data.

Neural Subset Selection

As shown in Chapter 3, the information-theoretic tuning depth we proposed as a metric for evaluating a neuron's instantaneous receptive properties is based on information theory and captures much more of the neuronal response. Define a tuned cell as a cell that carries more information between the kinematics and its spiking output. The well-established concept of mutual information [Reza 1994] can mathematically provide an information-theoretic metric from the neural spikes for each neuron based on the instantaneous tuning model, which is given by

\[ I(spk^j; \mathbf{x}) = \sum_{spk^j=0}^{1} \int p(\mathbf{x}_{k-lag})\, p(spk^j \mid \mathbf{x}_{k-lag}) \log_2 \frac{p(spk^j \mid \mathbf{x}_{k-lag})}{p(spk^j)}\, d\mathbf{x}_{k-lag} \]  (4-18)


where j is the neuron index, p(\mathbf{x}_{k-lag}) is the probability density of the linearly filtered kinematics evaluated at the optimum time lag, which can be easily estimated by a Parzen window [Parzen 1962], p(spk) can be calculated simply as the percentage of spike counts over the entire spike train, and p(spk \mid \mathbf{x}) is exactly the nonlinear function f in the LNP model. The information-theoretic tuning depth statistically indicates the information between the kinematic direction and the neural spike train. By setting a threshold, as shown in Figure 4-10, it can help determine which subset of tuned neurons to include in the model to reduce the computational complexity. For example, the 30 most tuned neurons could be selected as candidates to decode the movements in the BMI model. The distribution of the selected neurons is shown in Figure 4-11, where the 5 different cortical areas are shown as different colored bars with the corresponding mutual information estimated by Equation 4-18. The selected 30 neurons are labeled as red stars. There are 1 neuron in PMd-contra, 21 neurons in M1-contra, 6 neurons in S1-contra, and 2 neurons in SMA-contra. The most tuned neurons are in M1, as expected.

Neural Subset vs. Full Ensemble

Given the criterion to select the neural subset, we compare the reconstruction performance of different neural subsets, containing the 60, 40, 30, 20 and 10 neurons most strongly associated with the movement, to the decoding results from the full ensemble of 185 neurons. The statistical performance evaluated by both CC and NMSE with an 8 sec window with 50% overlap is shown in Table 4-11. We also plot the statistical decoding performance (mean and standard deviation) by CC and NMSE for the different neuron subsets in Figures 4-12A and 4-12B. The performance difference among the neural subsets evaluated by CC is not as clear as that evaluated by


NMSE. Decoding performance along x evaluated by CC and NMSE increases and converges as the number of neurons in the subset increases. The decoding performance along y reaches the maximum (CC) or the minimum (NMSE) when the neuron subset has 30 neurons. The study of decoding performance by neuron subset shows the possibility of evaluating neuron tuning importance associated with the movement task. With only 30 neurons (bold row in Table 4-11) out of the full ensemble of 185 (italic row in Table 4-11), we achieve similar or even better performance in terms of NMSE. This means that not all neuron activity in motor cortex is closely related to the movement task; some of the neurons' activity may contribute as noise for a given task, which reduces the decoding performance. At the same time, computation with only 30 neurons saves 84% of the running time compared to the computation with 185 neurons.
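The tuning-depth ranking and subset selection can be sketched end-to-end. This is a simplified stand-in for the dissertation's procedure: the densities in Equation 4-18 are approximated here by plain histograms over the filtered kinematics rather than Parzen windows, and the helper names are ours.

```python
import numpy as np

def tuning_depth_mi(filtered_kin, spikes, n_bins=50):
    """Equation 4-18 sketch: mutual information (bits) between a neuron's
    binary spike events and its linearly filtered kinematics at the
    optimum lag, with histogram density estimates."""
    edges = np.histogram_bin_edges(filtered_kin, bins=n_bins)
    idx = np.clip(np.digitize(filtered_kin, edges) - 1, 0, n_bins - 1)
    p_x = np.bincount(idx, minlength=n_bins) / len(filtered_kin)
    p_spk = spikes.mean()                  # marginal p(spk = 1)
    mi = 0.0
    for b in range(n_bins):
        mask = idx == b
        if not mask.any():
            continue
        p_spk_x = spikes[mask].mean()      # p(spk = 1 | x in bin b)
        for ps, psx in ((p_spk, p_spk_x), (1 - p_spk, 1 - p_spk_x)):
            if psx > 0 and ps > 0:
                mi += p_x[b] * psx * np.log2(psx / ps)
    return mi

def select_neuron_subset(depths, k=30):
    """Keep the k most tuned neurons, as in the 30-neuron subset decoder."""
    return np.sort(np.argsort(depths)[::-1][:k])
```

An untuned neuron, whose spiking is independent of the kinematics, yields a tuning depth near zero (up to histogram estimation bias), so ranking by this quantity naturally pushes task-irrelevant cells to the bottom of the list.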


Figure 4-1. Schematic of the relationship between encoding and decoding processes for Monte Carlo sequential estimation of point processes. [The schematic relates the state model and state space, the observation model (tuning function), and the spike train observations through the encoding and decoding directions.]

Table 4-1. The kinematics reconstructions by Monte Carlo SE for a segment of test data

  Method: Monte Carlo SE      Position              Velocity              Acceleration
  Criterion                   x          y          x          y          x          y
  CC    Mean±Std              0.81±0.01  0.83±0.01  0.79±0.01  0.74±0.01  0.45±0.01  0.25±0.01
        Best                  0.83       0.84       0.80       0.74       0.47       0.26
        Worst                 0.79       0.83       0.78       0.73       0.44       0.25
  NMSE  Mean±Std              0.44±0.03  0.98±0.14  0.45±0.02  0.55±0.01  0.82±0.01  1.03±0.01
        Best                  0.40       0.74       0.45       0.54       0.81       1.04
        Worst                 0.43       1.30       0.44       0.54       0.81       1.02


Figure 4-2. The posterior density of the reconstructed kinematics by Monte Carlo SE


Figure 4-3. The reconstructed kinematics for the 2-D reaching task. [Six panels show the desired and Monte Carlo SE reconstructed Px, Py, Vx, Vy, Ax and Ay traces; the per-panel CC and NMSE values match Table 4-2.]

Table 4-2. Averaged performance by Monte Carlo SE of the kinematics reconstructions for a segment of test data

              Position        Velocity        Acceleration
  Criterion   x      y        x      y        x      y
  CC          0.81   0.84     0.80   0.74     0.46   0.26
  NMSE        0.43   0.93     0.44   0.54     0.82   1.01

Table 4-3. Statistical performance of the kinematics reconstructions using 2 criteria

              Position                  Velocity                  Acceleration
  Criterion   x            y            x            y            x            y
  CC          0.762±0.078  0.757±0.128  0.751±0.075  0.734±0.063  0.520±0.055  0.370±0.076
  NMSE        0.563±0.186  0.964±0.322  0.515±0.126  0.510±0.126  0.748±0.160  1.017±0.353


124 1 2 3 4 5 0 2 4 6 x 10-5 errorPX 1 2 3 4 5 0 0.5 1 x 10-4 Py 1 2 3 4 5 0 0.02 0.04 0.06 errorVx 1 2 3 4 5 0 0.1 0.2 Vy 1 2 3 4 5 0 0.5 1 alphaerrorAx 1 2 3 4 5 0 0.5 1 alpha Ay Figure 4-4. Linear mode l error using different


Figure 4-5. Cdf of the noise distribution using different density parameters (density = 20, 50, 100, 200, 500)

Figure 4-6. Nonlinearity of neuron 72 using different kernel sizes (0.005, 0.01, 0.02, 0.05, 0.1)


Figure 4-7. Decoding performance for different numbers of particles n_x. [Panels show CC and NMSE for Px, Py, Vx, Vy, Ax and Ay as n_x ranges from 200 to 2000.]

Table 4-4. Results comparing the kinematics reconstructions averaged among Monte Carlo trials and synthetic averaging

  Criterion  Method                               Px     Py     Vx     Vy     Ax     Ay
  CC     Average among 20 Monte Carlo trials      0.811  0.837  0.799  0.741  0.456  0.255
         Average among 20 synthetic spikes,
         20 Monte Carlo trials each               0.843  0.852  0.822  0.737  0.443  0.233
  NMSE   Average among 20 Monte Carlo trials      0.429  0.933  0.439  0.538  0.817  1.025
         Average among 20 synthetic spikes,
         20 Monte Carlo trials each               0.319  0.768  0.330  0.484  0.808  0.990


Table 4-5. Statistical performance of the kinematics reconstructions by Monte Carlo SE and synthetic averaging

                                     Position                  Velocity                  Acceleration
  Method                             x            y            x            y            x            y
  CC  Monte Carlo SE                 0.762±0.078  0.757±0.128  0.751±0.075  0.734±0.063  0.520±0.055  0.370±0.076
      Monte Carlo SE
      (synthetic averaging)          0.777±0.089  0.755±0.154  0.753±0.083  0.750±0.058  0.496±0.073  0.346±0.082
  t-test H1: CC_MCSE < CC_SA,
  NMSE_MCSE > NMSE_SA (p-value)      1 (0)        1 (0)        1 (0)        1 (0.001)    0 (0.954)    1 (0)

Table 4-6. Results comparing the kinematics reconstruction by Kalman PP and Monte Carlo SE for a segment of data

                             Position        Velocity        Acceleration
  Criterion  Method          x      y        x      y        x      y
  CC    Kalman filter PP     0.74   0.83     0.74   0.68     0.42   0.18
        Monte Carlo SE       0.81   0.84     0.80   0.74     0.46   0.26
  NMSE  Kalman filter PP     0.81   1.51     0.50   0.77     0.95   1.13
        Monte Carlo SE       0.43   0.93     0.44   0.54     0.82   1.01


Figure 4-8. The reconstructed kinematics for a 2-D reaching task. A) Plotted individually: each panel shows the desired signal with the Monte Carlo SE and Kalman filtering PP estimates; the per-panel CC and NMSE values for both methods match Table 4-6. B) Position reconstruction in 2-D.


Figure 4-8. Continued. [2-D position trajectory: desired, Kalman PP and Monte Carlo PP.]


Table 4-7. Statistical performance of the kinematics reconstructions by Kalman PP and Monte Carlo SE (synthetic averaging)

                                       Position                  Velocity                  Acceleration
  Method                               x            y            x            y            x            y
  CC  Kalman filter PP                 0.763±0.073  0.717±0.133  0.702±0.114  0.694±0.066  0.471±0.065  0.345±0.089
      Monte Carlo SE
      (synthetic averaging)            0.777±0.089  0.755±0.154  0.753±0.083  0.750±0.058  0.496±0.073  0.346±0.082
  t-test H1: CC_Kalman < CC_MCSE,
  NMSE_Kalman > NMSE_MCSE (p-value)    1 (0)        1 (0.019)    1 (0)        1 (0)        1 (0)        1 (0)

Table 4-8. Statistical performance of the kinematics reconstructions by different encoding models

                               Position                      Velocity                      Acceleration
  Criterion  Method            x              y              x              y              x              y
  NMSE  Exponential PP         0.6673±0.2024  1.4976±0.6547  0.6690±0.2090  0.6922±0.1180  0.8731±0.1718  1.1178±0.3731
        Kalman filter PP       0.897±0.305    1.043±0.245    0.673±0.271    0.686±0.172    0.891±0.187    1.085±0.385
        MCSE PP                0.563±0.186    0.964±0.322    0.515±0.126    0.510±0.126    0.748±0.160    1.017±0.353


Figure 4-9. The decoding performance by algorithms in PP for different data sets (training, test 1, test 2). A) CC. B) NMSE. [Panels show Px, Py, Vx, Vy, Ax and Ay for the exponential PP, Kalman PP and MCSE PP.]


Figure 4-9. Continued.


Table 4-9. Statistical performance of the kinematics reconstructions by Kalman filter and Kalman PP

                             Position                  Velocity                  Acceleration
  Method                     x            y            x            y            x            y
  CC  Kalman filter
        Training             0.874±0.039  0.859±0.061  0.851±0.043  0.809±0.064  0.748±0.057  0.676±0.068
        Test                 0.746±0.070  0.740±0.100  0.738±0.060  0.732±0.064  0.585±0.081  0.483±0.112
      Kalman filter PP
        Training             0.794±0.061  0.641±0.182  0.759±0.090  0.696±0.105  0.479±0.087  0.361±0.113
        Test                 0.763±0.073  0.717±0.133  0.702±0.114  0.694±0.066  0.471±0.065  0.345±0.089

Table 4-10. Statistical performance of the kinematics reconstructions by spike rates and by point process

                        Position                      Velocity                      Acceleration
  Method                x              y              x              y              x              y
  CC  Kalman filter     0.7463±0.0703  0.7397±0.1003  0.7379±0.0601  0.7318±0.0643  0.5853±0.0806  0.4834±0.1123
      Monte Carlo SE    0.7776±0.0886  0.7545±0.1543  0.7530±0.0830  0.7505±0.0583  0.4958±0.0726  0.3459±0.0824

Figure 4-10. Threshold setting for the sorted information-theoretic tuning depths of 185 neurons. [The sorted tuning depths I(spk; V) decay with neuron rank; the threshold is set at the 30th neuron (tuning depth 0.00158).]


Figure 4-11. Selected neuron subset (30 neurons) distribution. [Mutual information per neuron, grouped by cortical area: PMd, M1, S1, SMA and M1-ipsi; the 30 selected neurons are marked.]


Table 4-11. Statistical performance of the kinematics reconstructions by neuron subset and full ensemble

                     Position                      Velocity                      Acceleration
  Neuron subset      x              y              x              y              x              y
  CC
  Full ensemble      0.7619±0.0784  0.7574±0.1275  0.7511±0.0749  0.7342±0.0633  0.5199±0.0545  0.3703±0.0764
  60                 0.7554±0.0942  0.7721±0.1105  0.7473±0.0787  0.7279±0.0633  0.5145±0.0510  0.3650±0.0830
  40                 0.7449±0.0954  0.7782±0.1058  0.7373±0.0848  0.7315±0.0646  0.5102±0.0608  0.3644±0.0850
  30                 0.7456±0.1027  0.7730±0.1084  0.7420±0.0823  0.7327±0.0613  0.5084±0.0650  0.3682±0.0857
  20                 0.7213±0.1036  0.7568±0.1238  0.7227±0.0884  0.7354±0.0566  0.4927±0.0681  0.3669±0.0763
  10                 0.7181±0.1141  0.6487±0.1752  0.6824±0.0924  0.6931±0.0774  0.4661±0.0804  0.3515±0.0702
  NMSE
  Full ensemble      0.5628±0.1861  0.9643±0.3222  0.5145±0.1259  0.5097±0.1261  0.7481±0.1598  1.0165±0.3526
  60                 0.5330±0.1908  0.8925±0.2256  0.5031±0.1345  0.5098±0.1098  0.7505±0.1634  1.0156±0.3557
  40                 0.5335±0.1818  0.8003±0.1585  0.5173±0.1385  0.5044±0.1039  0.7541±0.1678  1.0106±0.3593
  30                 0.5339±0.2047  0.8022±0.2555  0.4985±0.1440  0.4985±0.0940  0.7536±0.1697  0.9993±0.3562
  20                 0.5828±0.1858  0.7273±0.2674  0.5334±0.1538  0.4915±0.1124  0.7711±0.1680  1.0005±0.3600
  10                 0.5770±0.2167  0.7304±0.3408  0.5697±0.1550  0.5348±0.1502  0.7940±0.1684  0.9595±0.3779


Figure 4-12. Statistical performance of reconstructed kinematics by different neuron subsets. A) CC. B) NMSE.


Figure 4-12. Continued.


CHAPTER 5
CONCLUSIONS AND FUTURE WORK

Conclusions

Brain-Machine Interfaces (BMIs) are an emerging field inspired by the need to restore motor function and control in individuals who have lost the ability to control the movement of their limbs. Researchers seek to design a neuro-motor system that exploits the spatial and temporal structure of neural activity in the brain to bypass spinal cord lesions and directly control a prosthetic device by intended movement. In human and animal experiments, neuronal activity has been collected synchronously from microelectrode arrays implanted into multiple cortical areas while subjects performed 3-D or 2-D target-tracking tasks. Several signal processing approaches have been applied to extract the functional relationship between the neural recordings and the animal's kinematic trajectories. The resulting models can predict movements and control a prosthetic robot arm or computer to implement them. Many decoding methodologies, including the Wiener filter and neural networks, use binned spike trains to predict movement based on standard linear or nonlinear regression. Alternative methodologies, such as the Kalman filter or the particle filter, were derived using a state model within a Bayesian formulation. From a sequence of noisy observations of neural activity, the probabilistic approach analyzes and infers the kinematics as a state variable of the neural dynamical system. The neural tuning property relates the measurement of the noisy neural activity to the animal's behaviors, and builds up the observation measurement model. Consequently, a recursive algorithm based on all available statistical information can be used to construct the posterior probability density function of each kinematic state given the neuron activity at each time step from the prior density of that state. The prior density in turn is the posterior density of the previous time step updated with the discrepancy between an observation model and the neuron


firings. Movements are then recovered probabilistically from the multi-channel neural recordings by estimating the expectation of the posterior density or by maximum a posteriori estimation.

The differences among the above approaches reflect the following challenges in BMI modeling.

Linear or nonlinear? The Wiener and Kalman filters are both linear fitting methods that can be used to capture the functional relationship between neural firing and movements. A linear model is intuitive and computationally inexpensive, so it is simple to calculate. However, the assumption of linearity is very strict, and although it may be valid for binned data due to averaging effects, most neuroscientists do not accept it at the neural level. Adding to this concern, neuron behavior exhibits saturation, thresholding, and refractory attributes, all of which reflect nonlinearity. To improve the performance of these models, neural networks and particle filters were introduced to build nonlinear relationships, but this also increases the computational complexity. On the other hand, a standard method to accurately estimate or model the neural nonlinearity is still in development, since the ground truth is not fully understood even by neuroscientists. Evaluating model nonlinearity by comparing the BMI reconstruction accuracy of several algorithms is one feasible way to rate different hypotheses. At issue is whether or not the performance will improve enough to justify the complicated nonlinear modeling and computation.

Gaussian or non-Gaussian? Gaussianity is one of engineering's most preferred assumptions to describe the error distribution when building models for stochastic signals. In the Bayesian approach, the assumption of Gaussianity is also present in the Kalman filter to describe the posterior density.
However, if we accept that the neuron's tuning to preferred movement is nonlinear, the Gaussian assumption is questionable at every time step, because the pdf is reshaped by the nonlinear tuning. An algorithm that is not bound to this


assumption (i.e., one that utilizes the full information in the pdf) is necessary to help us understand how much of the performance hit is tied to the Gaussian assumption. Particle filtering is a general sequential estimation method that works with continuous observations through a nonlinear observation model without Gaussian assumptions. However, in terms of practical application, we should not over-claim the benefit of dropping the Gaussian assumption, because the computational complexity of the two methods (particle and Kalman filters) is drastically different. The proper framework is to recognize that Gaussianity is a simplifying assumption and then ask how much improvement over the Kalman filter the particle filter can provide. For instance, if the local pdf can be approximated very well by a Gaussian distribution for a certain segment of experimental data, the algorithm without the Gaussian assumption would show equivalent performance without any advantage, while still incurring more computational complexity.

Black box or gray box? Wiener filters and neural networks are black box models that operate without physical insight into the important features of the motor nervous system. In the Bayesian approach, however, the observation model gives us more insight into the neural tuning property that relates the measurement of the noisy neural activity to the animal's behaviors. Although the Kalman filter is still (and controversially) linear, it is an excellent entry point for incorporating knowledge of neural tuning into modeling. Enhancing the black box model to a gray box model is expected to increase performance and, in turn, to test the knowledge we incorporate into the model. Notice, however, that both the particle filter and the Kalman filter still assume fixed and known state and observation models. In actuality, these remain unknown for BMI data.


All of the computational models described above are intended to efficiently and accurately translate neural activity into the intention of movement. Depending on the animal and the task, well-established adaptive signal processing algorithms have achieved reasonable kinematic predictions (average correlation coefficient of around 0.8 [Sanchez, 2004]). These algorithms provide an attractive engineering solution for evaluating and characterizing the temporal aspects of a system. However, a successful realization of BMI cannot depend entirely on improved methodologies. We must develop a better understanding of brain signal properties. Brain signals are believed to be very complicated: they contain a huge amount of data, they are noisy and non-stationary, and they interact with each other in ways not fully understood. When designing the computational model, the following should be carefully considered.

What is the proper signal scale for BMIs? To fit traditional signal processing algorithms, which work with continuous values, early BMI research frequently employed a binning process on action potentials to obtain the neural firing rate as a continuous neural signal. However, single unit activity is completely specified by the spike times. The weakness of the binning technique, as a coarse approach, is finding the optimal window size. The loss of spike timing resolution might exclude rich neural dynamics from the model. How to effectively extract the information hidden in the spike timing brings challenges not only in signal-processing algorithm development but also in the accurate modeling of the neuron's physiologic properties. Moreover, if signal-processing techniques enable us to look closer into the neural spike train, we will have to face another challenge not encountered in BMIs for spike rates.

Time resolution gap between neural activity and movement.
Although spike trains are a very good indicator of neuronal function, they are also far removed from the time and macroscopic scales of behavior. Therefore, a central question in modeling brain function in


behavior experiments is how to optimally bridge the gap between the time scale of spike events (milliseconds) and the time scale of behavior (seconds). Most often, the relatively rudimentary method of time averaging (binning spikes) is used to bridge the gap, but much of the resolution of the spike representation is wasted. Therefore, to model the hierarchy of scales present in the nervous system, a model-based methodology must link the firing times to movement in a principled way. It remains to be seen under what conditions spike timing is relevant for motor BMIs because, as stated, the kinematics exist at a much longer time scale, which may indicate that the exact timing of spikes is not important.

Non-stationary neuron behavior. Studies show that the response of individual neurons to the same stimulus changes frequently. Even the cortical areas used in BMI experiments can vary considerably from day to day. Neuroscientists have averaged peri-event neuron spiking patterns across trials/times in order to eliminate noise contamination and observe the same stationary neuron behaviors. However, this statistical analysis is not feasible for the reconstruction of trajectory time series in motor BMIs. Current signal processing modeling still assumes that neuron behaviors are stationary between the training and testing data. This assumption is questionable and affects the performance on the test data.

Association among the neurons. Evidence shows that neuron spikes are synchronized in groups over time. Some researchers even claim that, in order to understand brain function, signals should be recorded from areas all over the brain, since they are dynamically correlated as a network. Imagine the computational complexity when about 200 neurons interact with each other. Researchers have applied statistics and data mining techniques to evaluate the synchronization of multi-channel spikes in terms of accuracy and efficiency.
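As a toy illustration of measuring pairwise association, the sketch below bins two spike-time lists and computes the Pearson correlation of the counts. The bin size, duration, and spike trains are hypothetical, and real synchrony analyses are far more sophisticated; note that this measure inherits the bin-size problem discussed above.

```python
import numpy as np

def spike_count_correlation(times_a, times_b, bin_ms, duration_ms):
    """Bin two spike-time lists (ms) and correlate the counts -- a crude
    pairwise-association measure; bin size is the usual free parameter."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    a, _ = np.histogram(times_a, bins=edges)
    b, _ = np.histogram(times_b, bins=edges)
    return float(np.corrcoef(a, b)[0, 1])

# Two hypothetical neurons that tend to fire in the same 100 ms bins:
n1 = [5.0, 15.0, 205.0, 215.0, 405.0]
n2 = [8.0, 212.0, 218.0]
print(spike_count_correlation(n1, n2, 100.0, 600.0))
```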
A better understanding of neuron recordings, especially the causal correlation between the different


recording areas, would be achieved by dynamically modeling the neural dependence across channels in the probability domain. It is therefore very important to bring a study of dependence among the neurons into our BMI research. Unfortunately, the sequential estimation models for point processes assume independence among neurons to avoid estimating the joint distribution, and this is one of their most important shortcomings.

Computational complexity. BMI performance hinges on the ability to exploit information in chronically recorded neuronal activity. Since there are no precise techniques to target the modulated cells during the surgical phase, the strategy has been to sample as many cells as possible from multiple cortical areas with known motor associations. The resulting computational burden would significantly impair the use of BMIs in low-power, portable hardware. Therefore, channel selection methodologies should be applied to the neural vector to estimate the channels that are most relevant for the task.

With all of these issues in mind, we proposed and validated a Monte Carlo sequential estimation framework to reconstruct the kinematics directly from the neural spike trains. There are two main steps in applying this idea to neural data from BMI experiments. First, we must validate our physiologic knowledge of neural tuning properties by analysis and modeling using statistical signal processing. Second, based on the knowledge gained, we must implement the adaptive signal filtering algorithm to derive the kinematics directly from the neuron spike trains. Our intention is to reduce the randomness of the neuron spiking in probabilistic models.

Faced with a tremendous amount of neural recording data, we proposed using the mutual information between the neuron spike and kinematic direction as a new metric to evaluate how much information the neuron spike encodes. This well-established concept in information theory


provides a statistical measure to gauge neuron tuning depth. As a unit-free measure, the proposed metric provides a means to compare information in terms of tuning, not only among different kinematics (positions, velocities, and accelerations) but also among neurons in different cortical areas. The primary motor cortex contained most of the tuned neurons and therefore is a potential location from which to elicit a neuron subset for movement reconstruction.

In addition to its informative value for importance ranking, the tuning function was also mathematically estimated by a parametric Linear-Nonlinear-Poisson model. The traditional criterion of estimating tuning depth from windows of data does not seem the most appropriate in the design of BMIs using sequential estimation algorithms on spike trains. Here we presented instead an information theoretic tuning analysis of instantaneous neural encoding properties that relates the instantaneous value of the kinematic vector to neural spiking. The proposed methodology is still based on the Linear-Nonlinear-Poisson model of Paninski. Using a spike-triggered averaging technique, the linear filter finds the preferred direction of a high-dimensional kinematic vector, which could involve both spatial (2-D) and temporal information if evaluated in a window. The nonlinear filter captures the neuron's nonlinear properties, such as saturation, thresholding, or the refractory period. As a function of the filtered kinematic vectors, the neuron's nonlinear property is approximated by the conditional probability density of the spikes according to Bayes' rule. Although most of the statistical nonlinear neuron properties are expressed as exponentially increasing curves, we also found diversity among these properties. This might indicate varying functional tuning roles among neurons.
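As a sketch of the linear stage only, a spike-triggered average over a hypothetical 2-D kinematic vector can be computed as below; the nonlinear and Poisson stages of the full model are omitted, and the toy data are made up for illustration.

```python
import numpy as np

def spike_triggered_average(kinematics, spikes):
    """Linear stage of an LNP-style model: average the kinematic vectors
    observed at spike times to estimate the preferred direction.
    kinematics: (T, d) array; spikes: (T,) spike counts per bin."""
    kinematics = np.asarray(kinematics, dtype=float)
    spikes = np.asarray(spikes, dtype=float)
    return (kinematics * spikes[:, None]).sum(axis=0) / spikes.sum()

# Toy data: the unit fires only when velocity points along +x.
kin = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [-1.0, 0.0]])
spk = np.array([1, 0, 1, 0])
print(spike_triggered_average(kin, spk))  # -> [1. 0.]
```

The nonlinearity would then be estimated, as in the text, from the conditional densities of the filtered kinematics with and without a spike.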
The prescribed inhomogeneous model embodies the randomness and non-stationary aspects of neural behaviors, finally connecting the continuous kinematics to the point process. An information theoretic formulation provides a more detailed perspective than the conventional tuning


curve because it statistically quantifies the amount of information between the kinematic vectors triggered by the spike train. As a direct consequence, it can estimate the optimum time delay between motor cortex neurons and behavior, caused by the propagation of signals through the motor and peripheral nervous systems.

The similarities and differences between the windowed and instantaneously evaluated tuning functions were also analyzed. The instantaneous tuning function displayed over 0.9 correlation with the windowed tuning function in the central region. The differences in the high tuning region of the curves, both in the dynamic range and in the estimated value, were much larger and resulted from the window method's overestimation of tuning at the high firing rate part of the curve. The instantaneous model works directly in the dynamic range of the kinematics; therefore it estimates the firing probability more accurately, without distortions from the temporal neighborhood information, and produces equivalent or better encoding results compared with existing techniques. This outcome builds confidence to implement the instantaneous tuning function directly in future online decoding models for Brain-Machine Interfaces.

The instantaneous tuning function based on the Linear-Nonlinear-Poisson model builds a nonlinear functional relationship from the kinematics to the neuron activity, estimating neural physiologic tuning directly from the spike timing information. This solution works to a certain extent, but it might not fully describe how the neuron actually fires in response to given kinematics. For example, it assumes a stationary linear filter and nonlinear tuning curve, and the current modeling is done independently for each neuron without considering interactions. Since the accuracy of the encoding model will impact the performance of the


kinematic decoding from the neural activity, further development and validation of the encoding model is an important aspect to consider.

With the knowledge gained from this signal processing analysis of neuron physiology, we proposed a Monte Carlo sequential estimation for point processes (PP) to convert the Brain-Machine Interface decoding problem into state sequential estimation. We reconstruct the kinematics as the state directly from the neural spike trains. Traditional adaptive filtering algorithms, such as the Kalman filter, least squares solutions, and gradient descent search, are well established for representing the temporal evolution of a system with continuous measurements. They are of limited use for BMI decoding in the spike domain, where only the recorded neural spiking times matter and the amplitude information of the signals is absent. A recently proposed point process adaptive filtering algorithm uses the probability of a spike occurrence (which is a continuous variable) and the Chapman-Kolmogorov equation to estimate parameters from discretely observed events. As a two-step Bayesian approach, it assumes the posterior density of the state given the observation to be Gaussian distributed, with a corresponding loss of accuracy. We presented a Monte Carlo sequential estimation that replaces the amplitude of the observed discrete events with the probabilistic measurement posterior density. We generated a sequence of samples to estimate the posterior density more precisely, avoiding the numerical computation of the integral in the Chapman-Kolmogorov equation through sequential estimation and weighted Parzen windowing. Due to the smoothing of the posterior density with the Gaussian kernel from Parzen windowing, we used a collapse operation to easily obtain the expectation of the posterior density, which leads to a better state estimate than the noisy maximum a posteriori estimate.
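A minimal sketch of this collapse step, with made-up samples, weights, and bandwidth, is shown below. Because each Parzen kernel is centered on a sample, the mean of the smoothed density reduces to the weighted sample mean; the bandwidth shapes the full pdf, not its mean.

```python
import numpy as np

def collapse(samples, weights, bandwidth=0.1):
    """'Collapse': smooth weighted samples with Gaussian Parzen kernels
    and return the mean of the resulting density plus the density itself."""
    samples = np.asarray(samples, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean = np.sum(w * samples)          # mean of the kernel mixture

    def pdf(x):                         # the smoothed posterior estimate
        k = np.exp(-0.5 * ((x[:, None] - samples[None, :]) / bandwidth) ** 2)
        return (k * w).sum(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

    return mean, pdf
```

The returned `pdf` can be evaluated on a grid to inspect multi-modal or skewed posteriors that a Gaussian approximation would miss.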
In a simulation of a one-neuron encoding experiment, the Monte Carlo estimation showed better


capability to probabilistically estimate the state, approximating the posterior density better than the point process adaptive filtering algorithm with the Gaussian assumption.

The Monte Carlo sequential estimation PP algorithm enables us to use signal-processing techniques to draw information directly from the timing of discrete events without a Gaussian assumption. Although it is proposed here for the BMI application on motor cortical neurons, it is theoretically a general non-parametric approach that can infer continuous signals from point processes without constraints, and it can be utilized in many other neuroscience applications (e.g., visual cortex processing), in communications (network traffic), and in process optimization. We must point out that implementing this algorithm will not always bring better performance; it depends on how the user assigns the state and builds the models. In addition, the advantage of the approach will only show when the posterior density of the state given the observation cannot be well approximated by a Gaussian distribution, for example when it is multi-modal or highly skewed. On the other hand, since the pdf information is fully stored and propagated at each time index, the computational complexity is a trade-off that the user must weigh.

Moreover, we were able to pinpoint and quantify for motor BMIs the performance cost of the Gaussian assumption. Towards this goal, we compared performance with the Kalman filter PP applied to a cursor control task and concluded that the Monte Carlo PP framework showed statistically better results between the desired and estimated trajectories (all p-values of the pair-wise t-test on NMSE are smaller than 0.02). We should mention that this improvement in performance is paid for by much more demanding computation and also by much more detailed information about the decoding model for each neuron.
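For reference, the NMSE criterion used in these comparisons can be written as the mean-square error normalized by the power (variance) of the desired trajectory; the exact normalization used in the dissertation may differ slightly, so the form below is an assumption for illustration.

```python
import numpy as np

def nmse(desired, estimate):
    """Normalized mean-square error: MSE divided by the variance of the
    desired signal, so that predicting the mean gives NMSE = 1."""
    desired = np.asarray(desired, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return float(np.mean((desired - estimate) ** 2) / np.var(desired))

d = np.array([1.0, -1.0, 1.0, -1.0])
print(nmse(d, d))            # -> 0.0 (perfect reconstruction)
print(nmse(d, np.zeros(4)))  # -> 1.0 (no better than the mean)
```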
Although spike trains are very telling of neuronal function, they are also far removed from the macroscopic time scales of behavior. Therefore, a central question in modeling brain


function in behavior experiments is how to optimally bridge the gap between the time scale of spike events (milliseconds) and the time scale of behavior (seconds). Most often, the relatively rudimentary method of time averaging (binning spikes) is used to bridge the gap, but it excludes the rich information embedded in the high resolution of the spike representation. Model-based methodologies that include an encoding model linking the firing times to state variables, such as the ones presented here, seem to be a much more principled way to model the hierarchy of scales present in the nervous system. However, these models are intrinsically stochastic with the encoding models in use today, so they pose difficulties for real-time operation of BMI models.

Although the results are interesting, the signal processing methodologies for spike train modeling need to be developed further. Many parameters are assumed and need to be estimated with significant design expertise, as we studied in terms of decoding performance, and they are substantially more complex than the methods for random processes. Therefore, we chose the averaged kinematics estimation among many Monte Carlo trials as the algorithm performance. Still, the results are intrinsically stochastic due to the randomness of the generated spike trains. In order to achieve more reliable results, we proposed a synthetic averaging idea: generate several sets of spike trains from the estimated firing intensity probability to simulate population effects in the cortex. Instead of coarse binning of the neural activity, the model is run several times on regenerated spike observations to reconstruct the kinematics. The performance is averaged among the decoding results in the movement domain to bypass the possible distortion by the nonlinear tuning function that binning in the spike domain would introduce.
The synthetic averaging idea provided smoother kinematics reconstruction, which is a promising result for improved performance.
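The procedure can be sketched as follows; the Bernoulli spike generation per bin, the identity "decoder," and the parameter values are placeholders chosen only to show the mechanics and the variance reduction, not the dissertation's actual decoder.

```python
import numpy as np

def synthetic_average(intensity, decode, n_realizations=20, seed=0):
    """Synthetic averaging sketch: draw several spike-train realizations
    from the estimated per-bin firing probability, decode each one, and
    average the results in the movement domain (i.e., after any nonlinear
    tuning, not on the spike counts themselves)."""
    rng = np.random.default_rng(seed)
    outs = [decode((rng.random(intensity.shape) < intensity).astype(float))
            for _ in range(n_realizations)]
    return np.mean(outs, axis=0)

# Placeholder identity decoder, just to show the variance reduction:
intensity = np.full(1000, 0.5)
avg = synthetic_average(intensity, decode=lambda s: s)
single = (np.random.default_rng(1).random(1000) < 0.5).astype(float)
print(avg.var(), single.var())  # averaged output has much lower variance
```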


However, synthetic averaging effectively averages the very timing information that one seeks in this class of methods in the first place. Therefore, the interesting observation is that it seems to indicate that spike timing has no effect on performance; otherwise, performance should decrease when we use the synthetic examples. This issue is hard to quantify due to the many factors at play and the lack of ground truth against which to compare absolute performance. We briefly explain the issues below, but this is an open problem that deserves much more research.

First, the way we generate the synthetic spike trains is to obtain an estimate of the intensity function (firing probability) of a single neuron by kernel smoothing. This will always produce a biased estimate of the intensity function that is present in all the realizations. However, the averaging of kinematic responses will decrease the variance of the estimated kinematics, as we have seen in the results: NMSE is reduced by 26% for position along x, 18% for position along y, and on average 15% for all 6 kinematic variables. But this process of averaging effectively puts us back into the realm of rate models if we look at the input side (spike trains). We think that further analysis is necessary to distinguish the linear and the nonlinear models. If we apply synthetic averaging in the Kalman PP, where the neuron tuning function is linear, the synthetic averaging would indeed be equivalent to inputting the continuous firing rates when the number of realizations is infinite. However, since the neuron tuning function is developed based on the LNP model, averaging the neuron activity (binned or smoothed spike rates) is conceptually different from averaging the nonlinear outputs of the tuning. As a simple example, in general f(E[x]) != E[f(x)], where E[.] is the expectation operator.
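This inequality can be checked numerically; the Gaussian input and exponential nonlinearity below are arbitrary choices used only to show that the two orders of operation disagree.

```python
import numpy as np

# For a nonlinear tuning curve f, averaging inputs (rates) and averaging
# outputs are not interchangeable: f(E[x]) != E[f(x)] in general.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)
print(np.exp(x).mean())   # E[f(x)]: near exp(0.5) ~ 1.65 for f = exp
print(np.exp(x.mean()))   # f(E[x]): near exp(0) = 1.0
```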
Besides, synthetic averaging is coupled with the LNP encoding model designed specifically for the spike train, which models the kinematics as simply triggered by the spike timing. This quantity cannot currently be estimated


on continuous firing rate inputs, since no corresponding encoding modeling method is available. Synthetic averaging is an attempt not only to bridge the time-resolution difference between neuron activity and the kinematics, but also to reduce the variance of the spike timing introduced by a single realization of the neuron recordings. Alternative methods that can reduce the variance of the estimate without reducing temporal resolution need to be investigated, but none are known to us at this moment.

In addition to comparing our Monte Carlo SE to the Kalman PP to evaluate the effects of linear/nonlinear tuning and Gaussian/non-Gaussian posterior densities, we further investigated the decoding performance by comparison with other decoding methods. The difference between the statistical reconstruction results of the Kalman PP and adaptive filtering on point processes with an exponential tuning function shows the importance of an accurate encoding model. The linear tuning curve works better for kinematics along y (e.g., NMSE of position y, linear vs. exponential: 1.043±0.245 vs. 1.498±0.655), while the exponential tuning curve works better for kinematics along x (e.g., NMSE of position x, exponential vs. linear: 0.667±0.202 vs. 0.897±0.305). However, neither encoding model could capture more information than the Monte Carlo SE, which provides the best decoding results (e.g., NMSE of position x and y: 0.563±0.186 vs. 0.964±0.322). This is because the Monte Carlo SE uses the instantaneous encoding estimated directly from the data without closed-form assumptions.

Let us come back to the motivation for developing signal processing techniques on the point process, where we asked whether spike timing contains richer information than the conventional spike rates. One straightforward way to answer is to compare decoding performance between the spike rate and point process domains. Since our algorithm is developed based on the state-observation


model, it is most comparable to the Kalman filter and the Kalman PP. Both methods have linear tuning and assume a Gaussian-distributed posterior density. The big performance drop between training and test for the Kalman filter shows over-fitting of the tuning model parameters because of the blurred time information of the neural activity. The Kalman PP works directly on the point process, which overcomes this problem with a smaller performance difference between training and testing sets. However, the finer resolution on the neural activity results in a poor Gaussian approximation of the posterior density, which does not necessarily produce better results. Compared with the Monte Carlo SE, which estimates the posterior density more accurately, the performance in the spike domain is slightly better (CC of 2-D position: 0.7776±0.0886, 0.7545±0.1543) than in the continuous spike rate domain (CC of 2-D position: 0.7463±0.0703, 0.7397±0.1003). This slightly better performance is not as strong as we expected in order to corroborate the hypothesis that richer dynamic information from spike timing is needed in motor BMIs. Judging only by the values of the performance criterion, it would be too quick to conclude that the spike trains contain no more information than spike rates. We should look carefully into how the two different methods are implemented and under what circumstances each shows an advantage. The Kalman filter infers the kinematics from continuous spike rates simply and analytically in closed form, with a linear model and a Gaussian assumption on the posterior. Our proposed Monte Carlo sequential estimation enables filtering on the point process, but it shows clearly better performance only if the pdf of the state given the experimental observation is multi-modal or highly skewed most of the time. One possible reason for the only slightly better performance here could be the state variable we are modeling.
Currently we build the probabilistic approach to infer the 2-D position, velocity, and acceleration, which are a final


representation of a combination of complicated muscle movements that are initiated by motor neuron spiking. Those combinations can be regarded as low-pass filtering or weighted averaging operations on the neural activities, which might make the linear function and Gaussian assumptions of the Kalman filter easily satisfied. In addition, the larger time-resolution gap from spike timing, compared with spike rates, makes the decoding job more difficult for the Monte Carlo SE. If we had access to synchronous EMG (electromyographic) signals, which have a much higher time resolution than the kinematics because they respond to motor neuron firing with little averaging and a smaller time-resolution gap, it might be a better case for the Monte Carlo sequential estimation to show its decoding advantages.

Compared with the Kalman filter and its fixed linear model, our proposed approach, as a non-parametric method without constraints, enables us to build neuron physiologic tuning knowledge, estimated simply from spike timing, into the decoding framework. The instantaneous LNP model we currently use may not be optimal, which could also explain the slight, but not dramatic, improvement in performance. A better encoding model should bring the potential to improve the BMI decoding performance and therefore to evaluate more fairly whether the spike timing contains more information than the spike rates.

In an effort to reduce the computational complexity for multi-channel BMIs, we proposed mutual information based on the instantaneous tuning function to select a neuron subset in terms of importance to the movement task. Among the 30 selected neurons, 70% are distributed in M1. The decoding performance has close, or even lower, NMSE compared with the full neuron ensemble, at much lower computational complexity.
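A plug-in version of such a tuning score is sketched below; the equal-frequency binning, bin count, and toy data are assumptions for illustration, and the dissertation's instantaneous estimator is more elaborate. Neurons can then be ranked by this score and the top subset kept for decoding.

```python
import numpy as np

def mutual_information(spike, kin, n_bins=8):
    """Plug-in estimate of I(spike; kinematics) in bits, used here as a
    tuning-depth score for ranking neurons. The kinematic variable is
    discretized into equal-frequency bins."""
    edges = np.quantile(kin, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    kin_d = np.digitize(kin, edges)            # bin index 0..n_bins-1
    joint = np.zeros((2, n_bins))
    for s, k in zip(spike, kin_d):
        joint[int(s), k] += 1.0
    joint /= joint.sum()                       # joint pmf p(s, k)
    ps = joint.sum(axis=1, keepdims=True)      # p(s)
    pk = joint.sum(axis=0, keepdims=True)      # p(k)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pk)[nz])).sum())

rng = np.random.default_rng(0)
kin = rng.normal(size=5000)
tuned = (kin > 1.0).astype(int)          # fires only for large kinematic values
untuned = rng.integers(0, 2, size=5000)  # ignores the kinematics
print(mutual_information(tuned, kin), mutual_information(untuned, kin))
```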
Future Work

As we have described the challenges in BMI, the Monte Carlo SE is designed to derive kinematics directly from the spike domain without linear and Gaussian assumptions. The


instantaneous encoding model tries to evaluate the tuning property directly from the data without a closed-form assumption such as linear or exponential. We have also developed the synthetic averaging idea in an effort to bridge the time gap between the neural activities and the movement. The information theoretic criterion is proposed to reduce the computational complexity by decoding with only a subset of the neurons. There are still some aspects we could work on in the future: 1) the association among neurons, and 2) the non-stationary tracking of the neuron tuning properties during the decoding process.

In our current approach, the posterior density of the kinematics given the multi-channel spike observations is obtained under the assumption of conditional independence among all the neurons. This runs counter to the concerns about neuron associations. One solution might be to modify the neuron tuning function so that it takes into account not only the kinematics but also the neurons with synchronized behavior. In this way, we would also build the functional structure between the neurons' firing information and improve our approach in a more realistic way.

In our preliminary BMI decoding results, we used a statistically fixed tuning function to reconstruct the monkey's movements from the multi-channel neuron spike trains. The preferred kinematic direction, which is represented by the linear filter in the tuning function model, is constant for each neuron, and the nonlinearity of the neuron tuning curve also remains constant throughout the decoding. When we analyzed the decoding performance of training and testing data in different segments, it clearly showed that the reconstruction in a testing segment far away from the training set is poor. This is because the stationarity assumptions can conflict with non-stationary neuron firing patterns. If we can analyze the amount of information that a neuron conveys through firing changes, could we deal with it in the decoding?


Awareness of the non-stationary properties of neuron firing behaviors should alter the parameters of the tuning function model over time. The preferred kinematic direction could deviate slightly from the direction at the previous time iteration. Approximating both movements and linear filter weights is a dual estimation problem, which was addressed with differing solutions in the dual extended Kalman filter [Wan & Nelson, 1997] and the joint extended Kalman filter [Matthews, 1990]. In the dual extended Kalman filter, a separate state-space representation is used for the signal and for the weights; at every time step, the current estimate of the weights is used as a fixed parameter in the signal filter, and vice versa. The joint extended Kalman filter combines the signal and the weights into a single joint state vector and runs the estimation simultaneously. Since 185 neurons are recorded simultaneously with the movement task, exploring the joint state vector of both signal and weights in such a high-dimensional space could require a huge number of samples. We therefore apply the dual method to our BMI decoding to deal with the non-stationary neuron tuning function.

We started with the simplest case, a Kalman filter working on continuously binned spike rates, to show preliminary results of the dual idea. To apply the Kalman filter to our BMI data, the state dynamics remain the same as

x_t = F_t x_{t-1} + \epsilon_t    (5-1)

where F_t establishes the dependence on the previous state and \epsilon_t is zero-mean Gaussian noise with covariance Q1_t. F_t is estimated from training data by the least squares solution, and Q1_t is estimated as the variance of the error between the linear model output and the desired signal. The tuning function is linearly defined as

\lambda_t = H_t x_{t-lag} + n1_t + n2_t    (5-2)


where \lambda_t is the firing rate obtained by 100 ms window binning and x_t is the instantaneous kinematics vector, defined as x = [p_x v_x a_x p_y v_y a_y 1]^T with 2-dimensional information of position, velocity, acceleration and a bias term. The variable lag refers to the causal time delay between motor cortical neuron activity and kinematics due to the propagation of signals through the motor and peripheral nervous systems. Here it is experimentally set to 200 ms [Wu et al., 2006; Wang et al., 2007c]. In the traditional Kalman filter, the weight estimate of the linear tuning function H_t is given from training data by

H_t = E[\lambda_t x_{t+lag}^T] (E[x_{t+lag} x_{t+lag}^T])^{-1}    (5-3)

Unlike in the traditional Kalman filter, the linear filter weights of the tuning function, which represent the preferred kinematic direction, are modeled in the dual Kalman filter as a slowly changing random walk. In this way, the dual estimation of the tuning function parameters can follow the transformation of the neuron encoding:

(H^j_t)^T = (H^j_{t-1})^T + u^j_t    (5-4)

where H^j_t represents the linear tuning parameters of neuron j at time index t, and (\cdot)^T denotes the transpose operation. Here we only model the tuning parameters of the 10 most important neurons selected in Chapter 4 by the information theoretic criterion. The tuning parameters of the 10 neurons change over time with dependence on the previous tuning parameters. u^j_t is zero-mean Gaussian distributed noise with covariance Q2_k. n1_k is zero-mean Gaussian distributed noise with covariance R1_k, contributed by the noisy kinematics states. n2_k is zero-mean Gaussian distributed noise with covariance R2_k,


which is contributed by the changing tuning parameters. At each time index, the kinematics vector is first derived as the state from the observation of the firing rate in the test set by Equations 5-5 a-e:

x_{k|k-1} = F_k x_{k-1|k-1}    (5-5 a)

P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q1_k    (5-5 b)

K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R1_k)^{-1}    (5-5 c)

P_{k|k} = P_{k|k-1} - K_k H_k P_{k|k-1}    (5-5 d)

x_{k|k} = x_{k|k-1} + K_k (\lambda_k - H_k x_{k|k-1})    (5-5 e)

After the kinematics state is estimated from the observation, the tuning parameters for each neuron are then estimated by another Kalman filter by Equations 5-6 a-d:

Ph_{k|k-1} = Ph_{k-1|k-1} + Q2_k    (5-6 a)

Kh_k = Ph_{k|k-1} x_{k|k} (x_{k|k}^T Ph_{k|k-1} x_{k|k} + R2_k)^{-1}    (5-6 b)

Ph_{k|k} = Ph_{k|k-1} - Kh_k x_{k|k}^T Ph_{k|k-1}    (5-6 c)

H_k^T = H_{k-1}^T + Kh_k (\lambda_k - H_{k-1} x_{k|k})    (5-6 d)

Notice that the choice of the noise parameters (covariance Q1_k in the state dynamic model and covariance Q2_k in the tuning dynamic model) can affect the algorithm performance. However, since we have no access to the desired kinematics in the test data set, the parameter estimates of both algorithms were obtained from the training data sets. For the Kalman filter, the noise in the kinematics model (Equation 5-1) is approximated by a Gaussian distribution with covariance Q1_k. We set the initial state x_0 to the zero vector, and the state variance P_{0|0} is estimated as the state variance from the training data.
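The two coupled updates above can be sketched in code. The following is a minimal single-neuron sketch of one time index of Equations 5-5 and 5-6, using synthetic stand-in matrices (the dissertation stacks 10 neuron rows in H and runs the tuning filter per neuron; all values here are illustrative assumptions, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 7  # kinematic state dimension [px, vx, ax, py, vy, ay, 1]

# Synthetic stand-ins for the trained model (assumed values).
F = 0.9 * np.eye(d)                 # state transition of Eq 5-1
Q1 = 0.01 * np.eye(d)               # state noise covariance
H = rng.standard_normal((1, d))     # one neuron's tuning row, Eq 5-2
R1 = np.array([[0.1]])              # observation noise, kinematics branch
Q2 = 1e-3 * np.eye(d)               # tuning random-walk noise of Eq 5-4
R2 = np.array([[0.1]])              # observation noise, tuning branch

def dual_kalman_step(lam, x, P, H, Ph):
    """One time index: Eq 5-5 a-e for the kinematics, then Eq 5-6 a-d
    for the tuning parameters, with the fresh state as the regressor."""
    # Kinematics filter (Eq 5-5 a-e)
    x_pred = F @ x                                            # 5-5a
    P_pred = F @ P @ F.T + Q1                                 # 5-5b
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R1)   # 5-5c
    P_new = P_pred - K @ H @ P_pred                           # 5-5d
    x_new = x_pred + K @ (lam - H @ x_pred)                   # 5-5e
    # Tuning-parameter filter (Eq 5-6 a-d); observation matrix is x^T
    Ph_pred = Ph + Q2                                                     # 5-6a
    Kh = Ph_pred @ x_new @ np.linalg.inv(x_new.T @ Ph_pred @ x_new + R2)  # 5-6b
    Ph_new = Ph_pred - Kh @ x_new.T @ Ph_pred                             # 5-6c
    H_new = (H.T + Kh @ (lam - H @ x_new)).T                              # 5-6d
    return x_new, P_new, H_new, Ph_new

# Run a few steps on synthetic firing-rate observations.
x, P, Ph = np.zeros((d, 1)), np.eye(d), np.eye(d)
for lam in rng.standard_normal(50):
    x, P, H, Ph = dual_kalman_step(np.array([[lam]]), x, P, H, Ph)
```

Note the ordering: the tuning update (5-6) only runs after the kinematics update (5-5) has produced x_{k|k}, which is the defining alternation of the dual filter.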


The initial tuning parameter H_0 can be set to the one estimated from training by least squares. Setting the variance parameters Q2_k and Ph_{0|0} of the tuning dynamic model is somewhat different, because we have access to a series of the stochastic kinematics signals in the training set but only a single deterministic least squares solution for the tuning parameters. To obtain a series of tuning parameters changing over time, we run the dual Kalman filter (Equations 5-6 a-d) to estimate the tuning parameters over time in the training set, with the kinematics state set directly to its true value. Since in the testing set the noise term is contributed by two sources, the noisy kinematics state and the noisy tuning parameters, we set the covariance Q2_k of the noise term u_k in the tuning dynamics to only 20% of the noise variance approximated by (H_t - H_{t-1}) from the time series of the tuning parameters. The variance Ph_{0|0} is likewise set to 20% of the variance of the time series of tuning parameters estimated from the training data. Table 5-1 shows reconstruction results on an 800-sample segment (time index from 213.5 m to 293.5 m) of test neural data by the Kalman filter and by the dual Kalman filter with tuning parameter modification on the 10 most important neurons, using the Normalized Mean Square Error (MSE normalized by the power of the desired signal) between the desired signal and the estimates as the criterion. Table 5-1 shows that the dual Kalman filter obtained a lower NMSE than the Kalman filter with fixed tuning parameters for all the kinematic variables. Figure 5-1 shows the reconstruction performance by the Kalman filter and the dual Kalman filter on the 10 most important neurons for 1000 test samples. The left and right columns display the reconstructed kinematics for the x-axis and y-axis. The three rows of plots illustrate, from top to bottom, the reconstructed position, velocity and acceleration.
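The initialization recipe above can be made concrete with a short sketch: H_0 by the least squares solution of Equation 5-3, then Q2 and Ph_{0|0} from 20% of the variances of a training-set tuning-parameter time series. All data here are synthetic stand-ins, and the variable names, dimensions and lag are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, lag = 400, 7, 2   # samples, state dimension, lag in bins (assumed)

# Synthetic training kinematics [px, vx, ax, py, vy, ay, 1] and one
# neuron's binned firing rate generated from a known tuning row.
X = rng.standard_normal((T, d))
X[:, -1] = 1.0
true_H = rng.standard_normal(d)
lam = X[lag:] @ true_H + 0.1 * rng.standard_normal(T - lag)  # lam_t ~ H x_{t+lag}

# H_0 by the least squares solution of Equation 5-3 (sample version;
# the expectation normalizations cancel).
Xl = X[lag:]
H0 = (lam @ Xl) @ np.linalg.inv(Xl.T @ Xl)

# Hypothetical time series of tuning parameters H_t, standing in for the
# output of running Equations 5-6 over the training set with the true
# kinematics plugged in as the state.
H_series = H0 + np.cumsum(0.01 * rng.standard_normal((T, d)), axis=0)

# The text's heuristic: Q2 and Ph_{0|0} are 20% of the corresponding
# variances, because test-time innovations also contain kinematics noise.
increments = np.diff(H_series, axis=0)          # H_t - H_{t-1}
Q2 = np.diag(0.2 * np.var(increments, axis=0))
Ph0 = np.diag(0.2 * np.var(H_series, axis=0))
```

The 20% scaling is the dissertation's empirical choice, not a derived quantity; a systematic procedure for choosing it is left as future work in the text.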


In each subplot, the red line indicates the desired signal, the green line the estimate by the Kalman filter, and the blue line the estimate by the dual Kalman filter. We zoom in on the position reconstruction in the plots. The dual Kalman filter provides a better estimate at the peaks of the desired signal than the Kalman filter, because the tuning parameters are slowly adjusted over time. Figure 5-2 shows the tracking of the tuning parameters of the 10 neurons estimated by the dual Kalman filter in the test set. As expected, we see a slow change of the parameters over time. Neuron 72 and neuron 158 show divergence of the parameter change, which only appears when a pair or pairs of the parameters change fast over time. We could infer that, after the linear projection, a pair of fast-changing weights can still result in a slowly changing linear output. The preliminary results of the dual Kalman filter show the possibility of tracking the nonstationary tuning properties of the motor neurons. As we know from the experiment, the results are very sensitive to the parameter settings; a systematic way to decide the optimal parameters remains to be studied. The algorithm should also be tested on longer data in the future.
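For reference, the NMSE criterion used in Table 5-1 (MSE normalized by the power of the desired signal) amounts to the following, shown here as a small self-contained sketch on made-up signals rather than the dissertation's data:

```python
import numpy as np

def nmse(desired, estimate):
    """Mean squared error normalized by the power of the desired signal."""
    desired = np.asarray(desired, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return np.mean((desired - estimate) ** 2) / np.mean(desired ** 2)

# Toy check with made-up signals (not the dissertation's data):
t = np.linspace(0.0, 1.0, 200)
desired = np.sin(2.0 * np.pi * t)
noisy = desired + 0.1 * np.random.default_rng(4).standard_normal(t.size)
print(nmse(desired, noisy))  # small positive value
```

A perfect reconstruction gives NMSE 0, and predicting all zeros gives NMSE 1, which is why values well below 1 in Table 5-1 indicate meaningful decoding.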


Table 5-1. Results of the kinematics reconstructions by Kalman and dual Kalman for a segment of test data

NMSE            Position           Velocity           Acceleration
                x        y         x        y         x        y
Kalman          0.5706   0.5222    0.4747   0.4733    0.6752   0.8153
Dual Kalman     0.5574   0.5170    0.4740   0.4725    0.6698   0.8100

Figure 5-1. The reconstructed kinematics for the 2-D reaching task by Kalman and dual Kalman filter


[Figure: ten weight-versus-time panels, one per neuron (67, 72, 76, 77, 80, 81, 85, 98, 107, 158), each spanning 800 time samples.]

Figure 5-2. The tracking of the tuning parameters for the 10 most important neurons in dual Kalman filter


LIST OF REFERENCES

Ashe, J., & Georgopoulos, A. P. (1994). Movement parameters and neural activity in motor cortex and area 5. Cereb. Cortex. 6, 590

Abeles, M. (1982). Quantification, smoothing, and confidence limits for single-unit histograms. J. Neurosci. Methods. 5, 317-325

Arieli, A., Shoham, D., Hildesheim, R., & Grinvald, A. (1995). Coherent spatiotemporal patterns of ongoing activity revealed by real-time optical imaging coupled with single-unit recording in the cat visual cortex. J Neurophysiol. 73, 2072

Arulampalam, M. S., Maskell, S., Gordon, N., & Clapp, T. (2002). A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. on Signal Processing. 50(2), 174

Bergman, N. (1999). Recursive Bayesian estimation: Navigation and tracking applications. Ph.D. dissertation, Linkoping University, Sweden

Borst, A., & Theunissen, F. E. (1999). Information theory and neural coding. Nat. Neurosci. 2, 947-957

Bourien, J., Bartolomei, F., Bellanger, J. J., Gavaret, M., Chauvel, P., & Wendling, F. (2005). A method to identify reproducible subsets of co-activated structures during interictal spikes. Application to intracerebral EEG in temporal lobe epilepsy. Clin Neurophysiol. 116(2), 443-55

Brillinger, D. R. (1992). Nerve cell spike train data analysis: a progression of techniques. J. Amer. Stat. Assoc. 87, 260-271

Brockwell, A. E., Rojas, A. L., & Kass, R. E. (2004). Recursive Bayesian decoding of motor cortical signals by particle filtering. J Neurophy. 91, 1899-1907

Brody, C. D. (1999). Correlations without synchrony. Neural Comput. 11, 1537-1551

Brown, E. N., Frank, L., & Wilson, M. (1996). Statistical approaches to place field estimation and neuronal population decoding. Soc. of Neurosci. Abstr. 26, 910

Brown, E. N., Frank, L. M., Tang, D., Quirk, M. C., & Wilson, M. A. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J. Neurosci. 18, 7411

Brown, E. N., Nguyen, D. P., Frank, L. M., Wilson, M. A., & Solo, V. (2001). An analysis of neural receptive field plasticity by point process adaptive filtering. PNAS. 98, 12261

Brown, E. N., Barbieri, R., Ventura, V., Kass, R. E., & Frank, L. M. (2002). The time-rescaling theorem and its application to neural spike train data analysis. Neural Computation. 14, 325


Brown, E., Kass, R., & Mitra, P. P. (2004). Multiple neural spike train data analysis: state-of-the-art and future challenges. Nature Neurosci. 7(5), 456-461

Carmena, J. M., Lebedev, M. A., Crist, R. E., O'Doherty, J. E., Santucci, D. M., Dimitrov, D. F., Patil, P. G., Henriquez, C. S., & Nicolelis, M. A. L. (2003). Learning to control a brain machine interface for reaching and grasping by primates. PLoS Biology. 1(2), 193-208

Carpenter, J., Clifford, P., & Fearnhead, P. (1999). Improved particle filter for non-linear problems. IEE Proc. on Radar and Sonar Navigation. 136(1), 2-7

Chan, K. S., & Ledolter, J. (1995). Monte Carlo estimation for time series models involving counts. J. Am. Stat. Assoc. 90, 242

Chandra, R., & Optican, L. M. (1997). Detection, classification, and superposition resolution of action-potentials in multiunit single-channel recordings by an online real-time neural network. IEEE Trans. Biomed. Eng. 44, 403

Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network: Comput. Neural Syst. 12, 199-213

DeAngelis, G. C., Ohzawa, I., & Freeman, R. D. (1993). The spatiotemporal organization of simple cell receptive fields in the cat's striate cortex. II. Linearity of temporal and spatial summation. Journal of Neurophysiology. 69, 1118-1135

DeBoer, E., & Kuyper, P. (1968). Triggered correlation. IEEE Trans Biomed Eng. 15, 169-179

Diggle, P. J., Liang, K-Y., & Zeger, S. L. (1995). Analysis of longitudinal data. Oxford: Clarendon

Doucet, A. (1998). On sequential Monte Carlo sampling methods for Bayesian filtering. Department of Engineering, University of Cambridge, UK, Tech. Rep.

Eden, U. T., Frank, L. M., Barbieri, R., Solo, V., & Brown, E. N. (2004). Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput. 16(5), 971-998

Eggermont, J. J., Johannesma, P. I. M., & Aertsen, A. M. H. J. (1983). Reverse-correlation methods in auditory research. Q. Rev. Biophysics. 16, 341-414

Fee, M. S., Mitra, P. P., & Kleinfeld, D. (1996). Automatic sorting of multiple-unit neuronal signals in the presence of anisotropic and non-Gaussian variability. J. Neurosci. Meth. 69, 175

Frank, L. M., Eden, U. T., Solo, V., Wilson, M. A., & Brown, E. N. (2002). Contrasting patterns of receptive field plasticity in the hippocampus and the entorhinal cortex: An adaptive filtering approach. Journal of Neuroscience. 22, 3817-3830

Frank, L. M., Stanley, G. B., & Brown, E. N. (2004). Hippocampal plasticity across multiple days of exposure to novel environments. Journal of Neuroscience. 24, 7681-7689


Fritsch, G., & Hitzig, E. (1870). Ueber die elektrische Erregbarkeit des Grosshirns. Arch. Anat. Physiol. Lpz. 37, 300-332

Gabbiani, F., & Koch, C. (1998). Principles of spike train analysis. In: Koch, C., Segev, I., editors. Methods in Neuronal Modeling: From Ions to Networks, 2nd edition. Cambridge, MA: MIT. 313

Georgopoulos, A. P., Kalaska, J. F., Caminiti, R., & Massey, J. T. (1982). On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J. Neurosci. 2, 1527-1537

Georgopoulos, A. P., Schwartz, A. B., & Kettner, R. E. (1986). Neuronal population coding of movement direction. Science. 233, 1416-1419

Georgopoulos, A. P., Lurito, J. T., Petrides, M., Schwartz, A. B., & Massey, J. T. (1989). Mental rotation of the neuronal population vector. Science. 243, 234-236

Gerstein, G. L., & Perkel, D. H. (1969). Simultaneously recorded trains of action potentials: analysis and functional interpretation. Science. 164, 828-830

Gozani, S. N., & Miller, J. P. (1994). Optimal discrimination and classification of neuronal action-potential wave-forms from multiunit, multichannel recordings using software-based linear filters. IEEE Trans. Biomed. Eng. 41, 358

Gordon, N., Salmond, D., & Smith, A. F. M. (1993). Novel approach to nonlinear and non-Gaussian Bayesian state estimation. IEE Proceedings-F. 140, 107

Haykin, S. (2002). Adaptive filter theory. Prentice-Hall

Hensel, H., & Witt, I. (1959). Spatial temperature gradient and thermoreceptor stimulation. J. Physiol. 148(1), 180

Jammalamadaka, S. R., & SenGupta, A. (1999). Topics in Circular Statistics. River Edge, NJ: World Scientific Publishing Company

Jones, J. P., & Palmer, L. A. (1987). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology. 58(6), 1187-1211

Kass, R. E., & Ventura, V. (2001). A spike train probability model. Neural Comput. 13, 1713-1720

Kim, S. P., Sanchez, J. C., Erdogmus, D., Rao, Y. N., Wessberg, J., Principe, J. C., & Nicolelis, M. A. (2003). Divide-and-conquer approach for brain machine interfaces: nonlinear mixture of competitive linear models. Neural Networks. 16, 865-871

Kim, S. P. (2005). Design and analysis of optimal encoding models for brain machine interfaces. PhD. Dissertation. University of Florida


Lewicki, M. S. (1998). A review of methods for spike sorting: the detection and classification of neural action potentials. Network Comput. Neural Syst. 9, R53-R78

Leyton, A. S. F., & Sherrington, C. S. (1917). Observations on the excitable cortex of the chimpanzee, orang-utan and gorilla. Q. J. Exp. Physiol. 11, 135-222

Makeig, S., Jung, T-P., Bell, A. J., Ghahremani, D., & Sejnowski, T. J. (1997). Blind separation of auditory event-related brain responses into independent components. Proc. Natl Acad. Sci. USA. 94, 10979

Marmarelis, P. Z., & Naka, K. (1972). White-noise analysis of a neuron chain: An application of the Wiener theory. Science. 175, 1276-1278

Martignon, L. G., Laskey, K., Diamond, M., Freiwald, W., & Vaadia, E. (2000). Neural coding: higher-order temporal patterns in the neurostatistics of cell assemblies. Neural Comput. 12, 2621-2653

Matthews, M. B. (1990). A state-space approach to adaptive nonlinear filtering using recurrent neural networks. Proceedings IASTED Internat. Symp. Artificial Intelligence Application and Neural Networks. 197-200

McKeown, M. J., Jung, T-P., Makeig, S., Brown, G., Kindermann, S. S., Lee, T-W., & Sejnowski, T. J. (1998). Spatially independent activity patterns in functional magnetic resonance imaging data during the Stroop color-naming task. Proc. Natl Acad. Sci. USA. 95, 803

McLean, J., & Palmer, L. A. (1989). Contribution of linear spatiotemporal receptive field structure to velocity selectivity of simple cells in area 17 of cat. Vision Research. 29, 675-679

Mehta, M. R., Quirk, M. C., & Wilson, M. (2000). Experience-dependent asymmetric shape of hippocampal receptive fields. Neuron. 25, 707

Mehring, C., Rickert, J., Vaadia, E., de Oliveira, S. C., Aertsen, A., & Rotter, S. (2003). Inference of hand movements from local field potentials in monkey motor cortex. Nature Neuroscience. 6(12), 1253-1254

Meister, M., Pine, J., & Baylor, D. A. (1994). Multi-neuronal signals from the retina: acquisition and analysis. J. Neurosci. Meth. 51, 95-106

Moran, D. W., & Schwartz, A. B. (1999). Motor cortical representation of speed and direction during reaching. J. Neurophysiol. 82, 2676-2692

Nicolelis, M. A. L., Ghazanfar, A. A., Faggin, B., Votaw, S., & Oliveira, L. M. O. (1997). Reconstructing the engram: simultaneous, multiple site, many single neuron recordings. Neuron. 18, 529-537

Nirenberg, S., Carcieri, S. M., Jacobs, A. L., & Latham, P. E. (2001). Retinal ganglion cells act largely as independent encoders. Nature. 411, 698-701


Okatan, M., Wilson, M. A., & Brown, E. N. (2005). Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Comput. 17, 1927-1961

O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely moving rat. Brain Res. 34, 171

Paninski, L. (2003). Convergence properties of some spike-triggered analysis techniques. Network: Computation in Neural Systems. 14, 437-464

Paninski, L., Fellows, M. R., Hatsopoulos, N. G., & Donoghue, J. P. (2004a). Spatiotemporal tuning of motor cortical neurons for hand position and velocity. J. Neurophysiol. 91, 515-532

Paninski, L., Shoham, S., Fellows, M. R., Hatsopoulos, N. G., & Donoghue, J. P. (2004b). Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J. Neurosci. 24(39), 8551-8561

Parzen, E. (1962). On the estimation of a probability function and the mode. Annals of Mathematical Statistics. 33(14), 1065

Rieke, F., Warland, D., de Ruyter van Steveninck, R. R., & Bialek, W. (1997). Spikes: Exploring the Neural Code. Cambridge, MA: MIT

Reich, D. S., Victor, J. D., & Knight, B. W. (1998). The power ratio and the interval map: spiking models and extracellular recordings. J Neurosci. 18, 10090

Reich, D. S., Melcher, F., & Victor, J. D. (2001). Independent and redundant information in nearby cortical neurons. Science. 294, 2566-2568

Reid, R. C., & Alonso, J. M. (1995). Specificity of monosynaptic connections from thalamus to visual cortex. Nature. 378(6554), 281-284

Reza, F. M. (1994). An Introduction to Information Theory. New York: Dover

Roitman, A. V., Pasalar, S., Johnson, M. T. V., & Ebner, T. J. (2005). Position, direction of movement, and speed tuning of cerebellar Purkinje cells during circular manual tracking in monkey. J. Neurosci. 25(40), 9244-9257

Sakai, H. M., & Naka, K. (1987). Signal transmission in the catfish retina. V. Sensitivity and circuit. J. Neurophysiol. 58, 1329-1350


Sanchez, J. C., Erdogmus, D., Principe, J. C., Wessberg, J., & Nicolelis, M. A. L. (2002a). A comparison between nonlinear mappings and linear state estimation to model the relation from motor cortical neuronal firing to hand movements. Proc. of SAB Workshop on Motor Control of Humans and Robots: On the Interplay of Real Brains and Artificial Devices. 59-65

Sanchez, J. C., Kim, S. P., Erdogmus, D., Rao, Y. N., Principe, J. C., Wessberg, J., & Nicolelis, M. A. (2002b). Input-output mapping performance of linear and nonlinear models for estimating hand trajectories from cortical neuronal firing patterns. Proc. of Neural Net. Sig. Proc. 139-148

Sanchez, J. C., Carmena, J. M., Erdogmus, D., Lebedev, M. A., Hild, K. E., Nicolelis, M. A., Harris, J. G., & Principe, J. C. (2003). Ascertaining the importance of neurons to develop better brain machine interfaces. IEEE Transactions on Biomedical Engineering. 61, 943-953

Sanchez, J. C. (2004). From cortical neural spike trains to behavior: modeling and analysis. PhD. Dissertation. University of Florida

Sanchez, J. C., Principe, J. C., & Carney, P. R. (2005). Is neuron discrimination preprocessing necessary for linear and nonlinear brain machine interface models? Accepted to 11th International Conference on Human-Computer Interaction

Sanchez, J. C., & Principe, J. C. (2007). Brain-Machine Interface Engineering. New York: Morgan and Claypool

Schafer, E. A. (1900). The cerebral cortex. Textbook of Physiology, edited by Schafer, E. A., London: Young J. Pentland. 697-782

Schwartz, A. B., Kettner, R. E., & Georgopoulos, A. P. (1988). Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement. J. Neurosci. 8, 2913

Schwartz, A. B. (1992). Motor cortical activity during drawing movements: Single-unit activity during sinusoid tracing. J Neurophysiol. 68, 528

Schwartz, A. B., Taylor, D. M., & Tillery, S. I. H. (2001). Extraction algorithms for cortical control of arm prosthetics. Current Opinion in Neurobiology. 11(6), 701-708

Schmidt, E. M. (1980). Single neuron recording from motor cortex as a possible source of signals for control of external devices. Ann. Biomed. Eng. 339-349

Serruya, M. D., Hatsopoulos, N. G., Paninski, L., Fellows, M. R., & Donoghue, J. P. (2002). Brain-machine interface: Instant neural control of a movement signal. Nature. 416, 141-142

Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci. 18, 3870-3896


Sharpee, T., Rust, N. C., & Bialek, W. (2002). Maximally informative dimensions: Analyzing neural responses to natural signals. Neural Information Processing Systems (NIPS02). 15, Cambridge, MA: MIT Press

Silverman, B. W. (1981). Using kernel density estimates to investigate multimodality. J. Roy. Stat. Soc., Ser. B. 43, 97-99

Simoncelli, E. P., Paninski, L., Pillow, J., & Schwartz, O. (2004). Characterization of neural responses with stochastic stimuli. The New Cognitive Neurosci., 3rd edition, MIT Press

Smith, A. C., & Brown, E. N. (2003). State-space estimation from point process observations. Neural Computation. 15, 965-991

Strong, S. P., Koberle, R., de Ruyter van Steveninck, R. R., & Bialek, W. (1998). Entropy and information in neural spike trains. Phys. Rev. Lett. 80, 197-200

Suzuki, W. A., & Brown, E. N. (2005). Behavioral and neurophysiological analyses of dynamic learning processes. Behavioral and Cognitive Neuroscience Reviews. 4(2), 67-97

Taylor, D. M., Tillery, S. I. H., & Schwartz, A. B. (2002). Direct cortical control of 3D neuroprosthetic devices. Science. 296, 1829-1832

Todorov, E. (2000). Direct cortical control of muscle activation in voluntary arm movements: a model. Nature Neuroscience. 3, 391-398

Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P., & Brown, E. N. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J. Neurophy. 93, 1074-1089

Tuckwell, H. (1988). Introduction to Theoretical Neurobiology, 2. New York: Cambridge University Press

Wan, E. A., & Nelson, A. T. (1997). Neural dual extended Kalman filtering: applications in speech enhancement and monaural blind signal separation. Proc. Neural Networks for Signal Processing Workshop. IEEE

Wan, E. A., & Van Der Merwe, R. (2000). The unscented Kalman filter for nonlinear estimation. Adaptive Systems for Signal Processing, Communications, and Control Symposium 2000 (AS-SPCC). IEEE, 153-158

Wang, Y., Sanchez, J. C., Principe, J. C., Mitzelfelt, J. D., & Gunduz, A. (2006a). Analysis of the correlation between local field potentials and neuronal firing rate in the motor cortex. Intl. Conf. of Engineering in Medicine and Biology Society 2006. 6186-6188

Wang, Y., Paiva, A. R. C., & Principe, J. C. (2006b). A Monte Carlo sequential estimation for point process optimum filtering. IJCNN 2006. 1846-1850


Wang, Y., Paiva, A. R. C., & Principe, J. C. (2007a). A Monte Carlo sequential estimation of point process optimum filtering for brain machine interfaces. International Joint Conference on Neural Networks (IJCNN '07). 2250-2255

Wang, Y., Sanchez, J., & Principe, J. C. (2007b). Information theoretical estimators of tuning depth and time delay for motor cortex neurons. 3rd International IEEE/EMBS Conference on Neural Engineering (CNE '07). 502-505

Wang, Y., Sanchez, J., & Principe, J. C. (2007c). Information theoretical analysis of instantaneous motor cortical neuron encoding for brain-machine interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, under review

Wise, S. P., Moody, S. L., Blomstrom, K. J., & Mitz, A. R. (1998). Changes in motor cortical activity during visuomotor adaptation. Exp Brain Res. 121(3), 285-99

Wessberg, J., Stambaugh, C. R., Kralik, J. D., Beck, P. D., Laubach, M., Chapin, J. K., Kim, J., Biggs, S. J., Srinivasan, M. A., & Nicolelis, M. A. (2000). Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature. 408, 361-365

Wu, W., Black, M. J., Mumford, D., Gao, Y., Bienenstock, E., & Donoghue, J. P. (2004). Modeling and decoding motor cortical activity using a switching Kalman filter. IEEE Trans. on Biomedical Engineering. 51(6), 933

Wu, W., Gao, Y., Bienenstock, E., Donoghue, J. P., & Black, M. J. (2006). Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Comput. 18, 80-118

Zhang, K. C., Ginzburg, I., McNaughton, B. L., & Sejnowski, T. J. (1998). Interpreting neuronal population activity by reconstruction: a unified framework with application to hippocampal place cells. J Neurophys. 79, 1017


BIOGRAPHICAL SKETCH

Yiwen Wang received a B.S. in engineering science with a minor in automatic control from the University of Science and Technology of China (USTC, Hefei, Anhui, China) in 2001. In 2004, she received a master's degree in engineering science with a minor in pattern recognition and intelligent systems from the same university. She then joined the Department of Electrical and Computer Engineering at the University of Florida, Gainesville, FL, USA, and received a Ph.D. in 2008. Under the guidance of Dr. Jose C. Principe in the Computational NeuroEngineering Laboratory, she investigated the application of advanced signal processing and control methods to neural data for brain machine interfaces (BMIs). Her research interests are in brain machine interfaces, statistical modeling of biomedical signals, adaptive signal processing, pattern recognition, and information theoretic learning.