Using domain-specific knowledge in support vector machines

University of Florida Institutional Repository
Permanent Link: http://ufdc.ufl.edu/UFE0011358/00001

Material Information

Title: Using domain-specific knowledge in support vector machines
Physical Description: xiii, 109 p.
Language: English
Creator: Eryarsoy, Enes ( Dissertant )
Koehler, Gary J. ( Thesis advisor )
Aytug, Haldun ( Thesis advisor )
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2005

Subjects

Subjects / Keywords: Decision and Information Sciences thesis, Ph.D   ( local )
Dissertations, Academic -- UF -- Decision and Information Sciences   ( local )

Notes

Abstract: Our goal was to develop a methodology that incorporates domain-specific knowledge into pattern-recognition problems. Because different classifier algorithms capture pertinent domain knowledge in different ways, we specifically examined methodology for incorporating such knowledge into Support Vector Machines, a novel learning algorithm for pattern recognition. We began with a detailed literature review on learning theory and Support Vector Machines (SVM), an efficient and mathematically elegant technique for machine learning. We then incorporated prior knowledge to tighten generalization error bounds for learning with SVM. First, we considered prior knowledge about the domain in the form of upper and lower bounds on the attributes. Next, we considered a more general framework that allowed us to encode prior knowledge as linear constraints on the attributes. Finally, through a comprehensive numerical analysis we compared the error bounds obtained by incorporating domain knowledge against the pure SVM error bounds obtained without it. (A brief illustrative sketch of the attribute-bound idea follows these notes.)
Subject: data, ellipsoid, machines, method, mining, support, SVM, vector
General Note: Title from title page of source document.
General Note: Document formatted into pages; contains 122 pages.
General Note: Includes vita.
Thesis: Thesis (Ph.D.)--University of Florida, 2005.
Bibliography: Includes bibliographical references.
General Note: Text (Electronic thesis) in PDF format.
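The abstract above mentions using known upper and lower bounds on the attributes to tighten SVM generalization error bounds. As a hedged illustration of that general idea only, and not of the dissertation's actual derivation, the Python sketch below contrasts an a-priori radius estimate obtained from an attribute box with a purely data-driven estimate, then plugs both into the classical radius-margin ratio R^2/gamma^2 on which many SVM generalization bounds depend. The toy data, attribute bounds, and all variable names are assumptions introduced for illustration.

```python
# Illustrative sketch (assumptions throughout, not the dissertation's method):
# compare a radius bound derived from known attribute bounds with a radius
# estimated from the sample, and report the radius-margin ratio for each.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical domain knowledge: attribute j is known to lie in [lower[j], upper[j]].
lower = np.array([0.0, 0.0, -1.0])
upper = np.array([1.0, 2.0, 1.0])

# Toy sample drawn from that domain, labeled by a linear rule so it is separable.
X = lower + (upper - lower) * rng.random((200, 3))
y = np.where(X @ np.array([1.0, -0.5, 0.3]) > 0.4, 1, -1)

# A-priori radius: half the diagonal of the attribute box encloses every point
# the domain can produce, seen or unseen.
R_box = 0.5 * np.linalg.norm(upper - lower)

# Data-driven radius: distance from the sample mean to the farthest observed
# point (a crude stand-in for the minimum enclosing ball of the sample).
center = X.mean(axis=0)
R_data = np.max(np.linalg.norm(X - center, axis=1))

# Nearly hard-margin linear SVM; for the canonical hyperplane the geometric
# margin is 1 / ||w||.
clf = SVC(kernel="linear", C=1e6).fit(X, y)
gamma = 1.0 / np.linalg.norm(clf.coef_)

print(f"R from attribute bounds : {R_box:.3f}")
print(f"R from the sample       : {R_data:.3f}")
print(f"margin gamma            : {gamma:.3f}")
print(f"R^2/gamma^2 (box bound) : {(R_box / gamma) ** 2:.1f}")
print(f"R^2/gamma^2 (sample)    : {(R_data / gamma) ** 2:.1f}")
```

When the attributes genuinely lie in the stated box, the box-based radius is valid before any data are observed, which is the sense in which domain knowledge of attribute bounds can constrain an SVM generalization bound a priori; the sample-based radius, by contrast, is only an empirical estimate.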

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0011358:00001

This item has the following downloads:


Full Text
26195820993dbe660681f046d30ddc8e
b4c6f2e8bec19cc6cc365d6ae7f921f2fcc5dae5
5740 F20101124_AADALL eryarsoy_e_Page_102thm.jpg
39ddefb957800dc97c89385da3069493
c01ebc2a6d793903dfc71f37ecb345c246634419
6505 F20101124_AADAGO eryarsoy_e_Page_019thm.jpg
d985e5d0a96c836364f465ffb2c87f45
13234fbafa210907a77e6c76a19cdb5e3d3a20a5
F20101124_AADABQ eryarsoy_e_Page_009.tif
d122b3efd74451ccad57b97f32103cbf
4f5ede1297014712781bd658400ac513a0201769
51912 F20101124_AACZVO eryarsoy_e_Page_052.jpg
99927c77d52cef1862cfdf66a7321cfe
f7f19215dea40814bbcc93e615d18bfe309857bf
55685 F20101124_AACZQR eryarsoy_e_Page_071.jpg
a1afda9998088a9b2a468d660806b5c9
50fc2c8ce1418074a4580bd5da7ce4d104149b91
6675 F20101124_AADALM eryarsoy_e_Page_103thm.jpg
b3c2eaa7461a36892f70ee941c6999c3
25b3582136b88cbc30f20c064194223ddde5448e
F20101124_AADABR eryarsoy_e_Page_010.tif
1c1efbcd680b50610576629aaf0236b3
581974ea8e388869ccce9647c6db407a96a7808f
52186 F20101124_AACZVP eryarsoy_e_Page_053.jpg
b7e68ee428b16c05c900a671f905c295
cb8fc9d31dea5e4fcda14f3034cf4e86e2cade80
54699 F20101124_AACZQS eryarsoy_e_Page_084.jp2
ff7afc0b9bb79494542b0cb9164715d9
3a521ebcc9bb2ad69f97c87100ef32e447c3f4f8
21926 F20101124_AADALN eryarsoy_e_Page_105.QC.jpg
3138075e559f6333e3aeec9cc1a3e6c1
7fc2d488847baf33fde54d3a61aa6bf29856eee1
6945 F20101124_AADAGP eryarsoy_e_Page_118thm.jpg
e09c08a4f1102c22dfefa8da3d90a742
a7b06825f7796ec3f2b1e5bc89a7bed4c65c2c4a
F20101124_AADABS eryarsoy_e_Page_011.tif
a4bf362715145c8a815dd474066596ed
19fe7f263dc1848f7957ee895dcbdca27e6c5a51
57832 F20101124_AACZVQ eryarsoy_e_Page_054.jpg
35576acbf14a7049acb9da173438f558
b7dbc0863698153947f286d4c9f4b394a5cdd280
14597 F20101124_AACZQT eryarsoy_e_Page_011.QC.jpg
b9bdce05f51637fc49a7ec872fd414c7
13106d145a36b5a03ea1bae9c5956dec2a393c0a
6173 F20101124_AADALO eryarsoy_e_Page_105thm.jpg
ed27b5d6ef4227bd01916776aedf6524
63073f2f6ae19ae00bf77754acbc1e0144be5bd5
6315 F20101124_AADAGQ eryarsoy_e_Page_042thm.jpg
b86b05fa054ee5c53b094da62d143e9b
2fdbc330d1d5a79262ea45d4be5d366e3bb0512e
F20101124_AADABT eryarsoy_e_Page_012.tif
bf896efd34eb1c77c1307220c48bbacc
89b97a7f6c7f44f4cfba97909644aa343324a894
51202 F20101124_AACZVR eryarsoy_e_Page_055.jpg
cb6ad498e4748ba888aefc3e4602df4b
b7d36178b542f98f74f7d8ad978dcc0c8b06ce3e
F20101124_AACZQU eryarsoy_e_Page_058.tif
3a795e0d510046047fd091d8e80a3f13
f53feb5fd14c8b1916b1961504bb505a750e5442
25358 F20101124_AADALP eryarsoy_e_Page_107.QC.jpg
a9262a73b6406e325e48f12249b395d1
9587d97cb619133229d5b2d0c210cd64c46cebc6
19437 F20101124_AADAGR eryarsoy_e_Page_035.QC.jpg
094eff688c2fe564ba7e1b9e919e7ba3
d71fd884f65c3d4d701831c3dca752c4f9cd1af5
F20101124_AADABU eryarsoy_e_Page_014.tif
cb3fe907f128f07678790e2ce3944c1b
feb1ed4e67be243b73791d1e7f2a8ce52f1422d4
43090 F20101124_AACZVS eryarsoy_e_Page_056.jpg
0643eea57c57b56ce76405dea0bdd81f
731e632cffe79b6eaa7f9a3bd7a0ad5e35f4abd4
5537 F20101124_AACZQV eryarsoy_e_Page_083thm.jpg
3999cbdabd3c922426d73650ec4bc8c5
0b0bbc1654b9ff9e4abe47b587ed901c6fcb83c9
18413 F20101124_AADALQ eryarsoy_e_Page_108.QC.jpg
6bfbab4e61d905dd8b8694cc808e9257
a367f81b187adf1c9682078bf67a40d843e3c4a3
26290 F20101124_AADAGS eryarsoy_e_Page_119.QC.jpg
4d9578ca2a00e32e5ef9edf181eed658
016e823034dc54251ce7d2e193afade14c988bef
F20101124_AADABV eryarsoy_e_Page_015.tif
d13dd9ef87134db93313a5d10499e155
80af1238dd11aaa1695c3c93ad80e33b31cc3360
55523 F20101124_AACZVT eryarsoy_e_Page_058.jpg
0a9b9b3ca2236b305c17a0692d0555eb
e487d8e29087dfb67ad42dc464a38260b11270d8
18001 F20101124_AACZQW eryarsoy_e_Page_027.QC.jpg
9ecff5d43150bdd8d509bf2ec6deb834
abb2661680084371972969a88e895d19237c6770
5434 F20101124_AADALR eryarsoy_e_Page_108thm.jpg
1676c677f45ec77365ce42fd09d595f8
5b62295d5e825b8ae86c01b2d1934dbf8eeb1fe9
5366 F20101124_AADAGT eryarsoy_e_Page_021thm.jpg
c2dc95656936c3a497814a7c09be6e48
d54a24bb8bc7249abb4584e7807de9de1a97d448
F20101124_AADABW eryarsoy_e_Page_016.tif
0f5475ae05f227b79e1d00c124bcbed0
59adc871466f3b4ef7bc25ed6871e0862cfe280b
44855 F20101124_AACZVU eryarsoy_e_Page_060.jpg
9ee74f77dc208d09fcd9572aa0f48262
787cdbee9056b8f7aa3936da7a69168e04db9bb1
67878 F20101124_AACZQX eryarsoy_e_Page_098.jpg
a75c2b44cb26b18b796d90098c38dd19
9a11103a03ede551675340ea82bf70a5cb101b27
17668 F20101124_AADALS eryarsoy_e_Page_109.QC.jpg
fa90e3996167708b446384ba44560ef9
89a48dd00abf5095840944a326549fdb3f4cc06c
24158 F20101124_AADAGU eryarsoy_e_Page_022.QC.jpg
f8247a6fdace631f93002a8d38fa3a16
29fb187883614ed6a1937f73d8fc722f9eef9b09
F20101124_AADABX eryarsoy_e_Page_017.tif
531bf9ef0e20c5a0ee8446b8b79b6d9a
0c573c5436738c0e764515353f67b615f2c04efc
13702 F20101124_AACZVV eryarsoy_e_Page_063.jpg
e7b55afec8ebce30c5a20de130c7993d
187d25bf51435da9a79e0e745a6a641259a23033
F20101124_AACZQY eryarsoy_e_Page_038.tif
f3ab9a5a12e387e13b3894e53a6afde7
00350fd5682c0502a66c8ea58ef81606035853d9
22557 F20101124_AADALT eryarsoy_e_Page_110.QC.jpg
301b03b179dfda9500c3602a8d7ef57b
05c435403cae07393268b4335aeb97343cfe9bdf
5815 F20101124_AADAGV eryarsoy_e_Page_091thm.jpg
316ee42d61387db91ba090d695ca5dbd
ea957f67eb430a7ac9a5cb1b4470cf89058fbf64
F20101124_AADABY eryarsoy_e_Page_018.tif
0311ecbdb551207e53062ec5168ff839
9eb250cc85bfe530c513d26547a76644aa5878fe
63893 F20101124_AACZVW eryarsoy_e_Page_064.jpg
6c7fd179051441940ad2797487b649e5
bffa9f7422fd8f83fe9c4b82e6fd6dff6de705f5
F20101124_AACZQZ eryarsoy_e_Page_117.tif
cf8cec199e19e67468f5a037392e682d
ee125088d700458b1a2c832b11debcb23d95363b
5242 F20101124_AADAGW eryarsoy_e_Page_033thm.jpg
ff01568c2e7c397a7074142ae35ad911
153bdcd1a59a6ba3c95d7659c35d152bf54167be
F20101124_AADABZ eryarsoy_e_Page_020.tif
374d9ba41a64027f3ba64bff296a0209
1d77c81e0c696849f578d1fa3c9805bf69adc6cd
66889 F20101124_AACZVX eryarsoy_e_Page_065.jpg
ab5c0dff54d30cf0f2a64997b37bc083
f3051ccaa695f53fbe0230cc4f6e78485adf07fc
6334 F20101124_AADALU eryarsoy_e_Page_110thm.jpg
11720d72b6f16a404de6ad0cd9db8284
55b69253c9c1cefe008ec8ff657b3cf15c392ed6
15106 F20101124_AADAGX eryarsoy_e_Page_061.QC.jpg
525c5c5ec5a82d700043f44e79980ea9
7247ec13a977fee33c64061570483d7eda640671
19616 F20101124_AACZTA eryarsoy_e_Page_076.QC.jpg
b8a31568f54bbe8826d91841d6592563
606409103b77767223f7a620cd72d4902908664b
72857 F20101124_AACZVY eryarsoy_e_Page_067.jpg
31c54e0b9ed9093531790c46f62d767d
b0edbb54da46f39927fb1a116f49010b9e4b867a
5987 F20101124_AADALV eryarsoy_e_Page_111thm.jpg
a9e25246fa7652f1d5b320d94724f39c
5f70820f4756597bf26f922a89bda73aa11b38de
16493 F20101124_AADAGY eryarsoy_e_Page_090.QC.jpg
c387f2acf9555b0ba371106fb3c24221
66093298cffe5b4e6b786c305dfc807476754e7e
44763 F20101124_AACZTB eryarsoy_e_Page_062.jpg
9462ea59f39734ec5f2c040351321eb4
5692e1662e57616f408110e76107772566c8cf8c
56634 F20101124_AACZVZ eryarsoy_e_Page_068.jpg
5a487152df1ad0e76b62548b4a2a7f29
217e07dfd5abaf6b545866c9100dac9ce85b0a2b
F20101124_AADAEA eryarsoy_e_Page_090.tif
04fb7543ee5f63db0ba27deacb2d4d00
c3c808df3c2b7cb57932020a7b5cc012c4eed648
23201 F20101124_AADALW eryarsoy_e_Page_113.QC.jpg
1c61d7b70c8e2c285ab88cc6a8b8f147
bd8db2fb17d07f3dff269755b0546d4fed07b267
5878 F20101124_AADAGZ eryarsoy_e_Page_112thm.jpg
660560fddb4dbff708db3ea85f71f71e
1f261759d2d3ea5995608be9c188e2a2e3f3799c
F20101124_AADAEB eryarsoy_e_Page_091.tif
5916d23d271777d9ce7319ff93a2055a
1c31e61e8bf779bf800d999ab4332dfa0a1e480a
6452 F20101124_AADALX eryarsoy_e_Page_113thm.jpg
6430ac4bb2a37044777650f2b38ea8eb
e580c8539bf4bc2ae3a1361f946bf1aa3ec9cf19
5576 F20101124_AACZTC eryarsoy_e_Page_026thm.jpg
c747048824cb6816d84fd9bf045ed08a
220709edae16d85c3b6a260e6e004752edc5d25c
F20101124_AADAEC eryarsoy_e_Page_092.tif
f805aa3e1fb33338f5bb297608d061e3
89d7764265d3888910f8b63c7adc336f6b53b5b1
12175 F20101124_AACZYA eryarsoy_e_Page_013.jp2
0e16c928e4fdec44cbf7c3a4fca8d3f9
8d7e915307504cb0b4c4ffc5b3922565faaa28c1
22544 F20101124_AADALY eryarsoy_e_Page_114.QC.jpg
20aa32159d5d7485604806ecc921640f
896e7bd80bc2092420f12080c61c2498ce2c3670
5822 F20101124_AADAJA eryarsoy_e_Page_040thm.jpg
960e2f1e8c3156e210d0b15194ef26b9
32d41fcddddd73cc514fa6f7d21810deeac0c56b
5284 F20101124_AACZTD eryarsoy_e_Page_053thm.jpg
36d348bd4d20841ee84ea1b56fec5d81
08efb625378ec33863395c89c160580d7046b3e1
F20101124_AADAED eryarsoy_e_Page_093.tif
ec44c868dfcfacc98843aec9ba61b700
56c92b23d48c2f22f1db4431f2daefad3661cd81
100823 F20101124_AACZYB eryarsoy_e_Page_016.jp2
08f918058f95e0065a9ce0027b134f58
cbc11a83bed50d495c93696b972af62a0b1c47f2
6080 F20101124_AADALZ eryarsoy_e_Page_115.QC.jpg
c5c6271abb6623783f141a21eb42fbc4
a1de242eee9c0e1a9f91bffd2e85676f6b5c538f
6018 F20101124_AADAJB eryarsoy_e_Page_041thm.jpg
63c75afe46a236dee80f5ffcad3c3b11
d3db136264e7cb929426569f020fa62a07047997
F20101124_AADAEE eryarsoy_e_Page_094.tif
0de793da34d9695c08d8bbf2ed0f72b6
23b17db2f0ccd498cdd5d289575cea43c48e9be1
F20101124_AACZTE eryarsoy_e_Page_058.QC.jpg
f844cd3abfbf7f58536bedb53fc8c96c
c087eaab2c3b7e01d4d2aa3a7fb24738b2fac66a
113300 F20101124_AACZYC eryarsoy_e_Page_017.jp2
b984ff394016c729c1c13ccc2cc2b31c
f474b59ae8a81019ca83edf32066e5db41fd6cf4
21245 F20101124_AADAJC eryarsoy_e_Page_042.QC.jpg
5b9b540f2a5b54110c1cd5abaf09e97e
3ffaeee47f87010e8b71009ec802c8d8d7a459ec
F20101124_AADAEF eryarsoy_e_Page_095.tif
8831bf687f3fb14288000ad91d8d7f96
8fc945aab21300897807abdefc15a7d62baa0773
60712 F20101124_AACZTF eryarsoy_e_Page_077.jpg
528f8df2f9698f763cb1140b2b378c81
a3d7053da978a9eadff68cbcdedf181d29fc14c4
107663 F20101124_AACZYD eryarsoy_e_Page_018.jp2
568ea5091bdfb1444c8755a427d3d748
5d3b9d7600df5904b475b2d256d0e5286c45a692
6016 F20101124_AADAJD eryarsoy_e_Page_043thm.jpg
8d712f841c1aa1e01eff7c15bae8d24d
d0cdff4fd010bc5839cfa0e255ddfb48a3101cc8
F20101124_AADAEG eryarsoy_e_Page_096.tif
8b4f4204528fecddca3ffc024e0f48bc
b2b816d1daeb2dbdfb5dfeebea1a78cc3ab4cb15
F20101124_AACZTG eryarsoy_e_Page_052.tif
4ce82bf55a2c06f6b624c291f204cdbb
9a0a709e1203021bb6d1e711f14a5763f9f2d3da
109145 F20101124_AACZYE eryarsoy_e_Page_019.jp2
b7d2c83b85effe28c5defc0562e78691
62ae6fddb3b99600d877088a2d20f81db405a5a7
17085 F20101124_AADAJE eryarsoy_e_Page_044.QC.jpg
a9809c7edfd94c2cd6b28fe511345fee
15a8cc079ece4e11cbf5215ca09cdba87efe9162
F20101124_AADAEH eryarsoy_e_Page_097.tif
c2ceee531d7591a2c28840b034e09e4d
7fab9fb5578ec2a816331bb6f548acf469737261
5258 F20101124_AACZTH eryarsoy_e_Page_028thm.jpg
a666f1c9057dc82ebb73a87326bbd029
3fbc56b1d8cb2b3a63904d7cc8b5cb30b0593f9d
116101 F20101124_AACZYF eryarsoy_e_Page_022.jp2
8c197c0c146c30e26ba6d2d2b772d5a8
22a7908f965369fa2a19b55961455f97f9bb3dd6
5492 F20101124_AADAJF eryarsoy_e_Page_044thm.jpg
921760ee28e0d0560a4635f7ff5c2e98
294ff8ed94ec8923e37e361beafda1679ba34186
F20101124_AADAEI eryarsoy_e_Page_098.tif
245e4e871a424cbaca6205f23783b1e1
9896e7519efcbf47949cd4c84c61f91797a5ab87
F20101124_AACZTI eryarsoy_e_Page_045.tif
ef5e2f3fbd61dcb03d5c7c54ed31ef2a
bcc1fc8043f8547e880efa362dc353b4e5a9c3af
79533 F20101124_AACZYG eryarsoy_e_Page_025.jp2
1bd833a08811682f969ccd8e569a3c46
b4aee6df8cb41db3ac9978dbb84ba0bed425aca7
18214 F20101124_AADAJG eryarsoy_e_Page_045.QC.jpg
79982152e603c0d6f3782302c0583a70
e8d45f0333bac1870df48495256a1ffd46269ab1
F20101124_AADAEJ eryarsoy_e_Page_099.tif
f1c4292f6181ec6acbad35b964eee2ea
abb0c96d47e662f5b4a0ec0a5b88273a3220d07a
57579 F20101124_AACZTJ eryarsoy_e_Page_031.jpg
c0e1420ebcee3682baba8cc54a5811f8
751e2f754434af08a08e24d2bf2e1002f7f33f48
18352 F20101124_AADAJH eryarsoy_e_Page_049.QC.jpg
f709008fcb8674b7f8f6d03ffaa54b0e
33b93d7b8aea5e14f21300c8dc2e1a61dfa951ac
F20101124_AADAEK eryarsoy_e_Page_100.tif
02f8ca217fdd2471ed47e59244264f0c
f63d8667b1ac883b69074491011149421f7524d4
17900 F20101124_AACZTK eryarsoy_e_Page_006.QC.jpg
e14d0074f98be53b276a6b96440ad729
0be909332ed82389329b0603392e23f40089dcc7
86737 F20101124_AACZYH eryarsoy_e_Page_026.jp2
4b92609620f1baec11f6194740ae404e
534c9c022bd3b98ab0e8e826d8e2179e7d1a8210
18697 F20101124_AADAJI eryarsoy_e_Page_050.QC.jpg
c19c792d84b4b3040f1e658b1061a26d
ef57feb4679cc4f70c9efb6b4f5e238f3e0df6ad
F20101124_AADAEL eryarsoy_e_Page_101.tif
73779d4b9fe675ae836e81a9990d5596
0b841a58b46794eccd8c56c675455708272b27fe
6252 F20101124_AACZTL eryarsoy_e_Page_059thm.jpg
f406c5e224e9088c3ea418f2940bf54f
4822d7e83596285e18b13ea09ec5fbb8a3607f76
80045 F20101124_AACZYI eryarsoy_e_Page_027.jp2
cf558a5e8a248a5d6bbd207c7894232c
5936433acb8518e4a11e2c2bb0dcda8a2e1ac257
5322 F20101124_AADAJJ eryarsoy_e_Page_050thm.jpg
6f85fb9d4e6d512bf6b4d139586b9c31
0067378b407e959813245f886b5ea12407b8ce80
F20101124_AADAEM eryarsoy_e_Page_102.tif
d143a278b6814895e1406176d7ed99c9
ae7ec64abf6125705cf50b24009b9e541f8cf1c6
54224 F20101124_AACZTM eryarsoy_e_Page_074.jpg
5607d7f8939db7b8b1b7e72798902a11
f2269a4d5c72555689c766811079e7ba3447cb62
73652 F20101124_AACZYJ eryarsoy_e_Page_028.jp2
b11dde209b37463f29f32fe7dde71e31
767c293a2f0885f34e38700a5a31d02c7023fbde
6249 F20101124_AADAJK eryarsoy_e_Page_051thm.jpg
7abad93100dc930b3e76cb051bb92de5
763520e06d1baf5d5e5e650dfdba4261d23541d2
76484 F20101124_AACZYK eryarsoy_e_Page_029.jp2
eb4461176a8369dd8c7a77c6b2e9af2c
a869a86117ab77e2d02f42ef2ce10b1e459f6191
6211 F20101124_AACZTN eryarsoy_e_Page_080thm.jpg
cc9cbe63d90dd4028ae8b54ecd3539fe
2a3fef94494b76ed4371fde93a87c68cd4adbd34
17493 F20101124_AADAJL eryarsoy_e_Page_052.QC.jpg
bd23231ac34c0a8f0662dbbdec0bcb7e
20965b375c1b11ebc298fd2d956e4cc32473344b
F20101124_AADAEN eryarsoy_e_Page_104.tif
e360fdbe1d75c52eef9d68780af40da6
d2d59639137a2ab4d77c393cb21b6527ef0d73ae
82396 F20101124_AACZYL eryarsoy_e_Page_031.jp2
05ea97366bd88316e48ec4306e23ff39
e2d722380b55e0200b0af70661e869511bce121b
64084 F20101124_AACZTO eryarsoy_e_Page_059.jpg
e63e1328f74f427a02aa1227e7bdf238
e3525a106f2f9833c83412d6faaaba3778b4bc8c
5553 F20101124_AADAJM eryarsoy_e_Page_052thm.jpg
b0ce9bc636337389fe13456e510f481f
e661225f5a06772f54eab5260b1ae87697e212f1
F20101124_AADAEO eryarsoy_e_Page_106.tif
290fb3c3fc537304d35ab5f0d7f4da7f
56a34e8f22bf9cdc700be81ac70e1372f3c436f8
19199 F20101124_AACZOS eryarsoy_e_Page_029.QC.jpg
5e9d6c468fc95c94f28615f4479fcc8b
ab41be68a730c1d5063b5d15dbc9f263ff8898e3
89026 F20101124_AACZYM eryarsoy_e_Page_032.jp2
f887af69e1b44e3ccdb6e8b07c0947f6
e8873ec0d08c11d041269d74920746498768459b
54350 F20101124_AACZTP eryarsoy_e_Page_047.jpg
25dc2eb6907aa4bf0cd6f7ebbd5d8b9a
3d47a0815f9098e29eef606f4285322de13ea02c
18677 F20101124_AADAJN eryarsoy_e_Page_054.QC.jpg
56a1cf2919a6b7444708b89894e9f3a8
a74f51db5b8c2c273a9aa1f545d9ead981c6c9d4
F20101124_AADAEP eryarsoy_e_Page_108.tif
6b6ce32a9d0fc7db00282cd09cdfd638
a3f97fa9fcad0f67ea4c6ee25305dbcb6ad64857
1051981 F20101124_AACZOT eryarsoy_e_Page_005.jp2
eeaffc1cc211a6d116124e3e818e36b7
3ffa5a480caf2c95de85ec6ff56c81178938c085
77371 F20101124_AACZYN eryarsoy_e_Page_033.jp2
8ed375a81400527e37501213e7333596
a18b89baa8686eaea22da1b864166e05d2452672
81423 F20101124_AACZTQ eryarsoy_e_Page_014.jp2
2a1afb6adab31022c3a3c4274e122f5c
295039049e77a455251d947a6e979a33753d533a
17286 F20101124_AADAJO eryarsoy_e_Page_055.QC.jpg
17594c16537bac5425ec1aba16fc0ab1
1efa48c365588200fd10998fec6538d006879938
F20101124_AADAEQ eryarsoy_e_Page_109.tif
8126b7187b695cf94362cc8e982bf3d6
3a8f26e2518165a66a12b9b932b1762a018951d0
93487 F20101124_AACZOU eryarsoy_e_Page_112.jp2
0eb03199c770fa9c2ec314072775e062
6da7219dc5bc615e968ce25582e2fee383ed8eb4
70740 F20101124_AACZYO eryarsoy_e_Page_034.jp2
166eecf009f6e11581ca4920c1fde0c7
816e18a1ac3a270e0a47907723b87e7419582304
12515 F20101124_AACZTR eryarsoy_e_Page_063.jp2
3b4f394294c6d6a9f40a2b36ce50db68
65aeac1edfc0d3ff21d6b2a1f5e24a20586b726d
14513 F20101124_AADAJP eryarsoy_e_Page_056.QC.jpg
55950b6138cae48d411875fb8eaf6e18
b57d50f2ff25740b16ea39e0a308c595628c71dc
F20101124_AADAER eryarsoy_e_Page_110.tif
27b603e534e6a2e793b1267bdc00dfde
21990e03c7588c2d26a34215f170fdc65179765a
F20101124_AACZOV eryarsoy_e_Page_111.tif
3c83da7cdd7dba5acc008d7e60aaf46d
fd1f6af23e6033a974acfafaaf053d84af6bc84c
86273 F20101124_AACZYP eryarsoy_e_Page_035.jp2
2945743cc0fd5dc87fc558c09fb07162
d17b795a40e233296e41b08dae916edc2318f246
F20101124_AACZTS eryarsoy_e_Page_067.tif
62c346a408f58b22fe4467f1526782ed
9904362da580fcfa6eae5d9f6c7636e16111df89
5328 F20101124_AADAJQ eryarsoy_e_Page_058thm.jpg
87eb31c9130d1b7080ad551b60e652d5
48daaa6be53eaccecb9be1d4821cf5e64fb99943
F20101124_AADAES eryarsoy_e_Page_112.tif
ba4af7e37ccc6615477c5e34d8de2905
c3d04f194f9d9cd15c2116e2eb4ab080afeceedf
F20101124_AACZOW eryarsoy_e_Page_006.tif
b30bf38ddb06192f108918815dc1d8db
835aa825aa845a15269b90c978ba323a1e360db7
100617 F20101124_AACZYQ eryarsoy_e_Page_036.jp2
cd62bc4be2f5a1e4bf2237ac200ee074
64a614f7f3ef2b756de1197101e4302473ebcbb4
23550 F20101124_AACZTT eryarsoy_e_Page_015.QC.jpg
a6fb1996c5e6e9affc2138e085262dda
9d740197ab09d77e3b5446ada9d6c6943d27817e
21487 F20101124_AADAJR eryarsoy_e_Page_059.QC.jpg
f03bb28c8f7c027cb1e08dd18e415052
9f1512dcd9fd57be21d18b06eb29b5d2f06c5d3f
F20101124_AADAET eryarsoy_e_Page_114.tif
5644b49506df1aa1d95e05aae794b568
a26abec05bf9fa39561dfcbab9d537721551b97e
93782 F20101124_AACZOX eryarsoy_e_Page_024.jp2
e0b93e299d69c60dff9fdaf32adf7027
433dedece183e203d96f615325caf39e4cea149f
73733 F20101124_AACZYR eryarsoy_e_Page_037.jp2
80e1e4f5e2a3bebd4cd33a575c66cbf5
27e18891aca1c047a4e28beebe3dccfdf8f56ddb
62540 F20101124_AACZTU eryarsoy_e_Page_086.jpg
d7b7f744e0aa5be6dd9f62fdc7dbf310
292b4c04c14a7c9d0260e8797c33bd3845d4422e
F20101124_AADAEU eryarsoy_e_Page_115.tif
2f6d53f7004c02394332de2c69e83f49
077f37db090ffa2b670c3ed5c26184dd399ec89e
62887 F20101124_AACZOY eryarsoy_e_Page_043.jpg
8526f9a02660f958bde626a02cfcc18b
2571e84d1a391335903f4e83095504441c3f4ca8
104573 F20101124_AACZYS eryarsoy_e_Page_038.jp2
1180087b77b45a4af6835e51dc3cafe2
2e70e30dc50deb6edfd74f55ed3a5c41e3b2bc4f
6163 F20101124_AACZTV eryarsoy_e_Page_017thm.jpg
682ab8023d7987a29cfe490e0355d962
5ca3d7cdca653dbb9abc824dd17c3be84cbd9031
4765 F20101124_AADAJS eryarsoy_e_Page_060thm.jpg
e39606f84d98aa2faae132fcc54e9f4a
1cf6b0e910b8a6ff0981fa49786b754d816ded0b
F20101124_AADAEV eryarsoy_e_Page_118.tif
22833c128633220b5afb14edee701d04
d564c8e02918f9b7d3b5053f1691182c16e0f1a0
74984 F20101124_AACZOZ eryarsoy_e_Page_103.jpg
c4ea0285f355818cb44c71c5dfe664ad
2878968a70c4e92bc8c414e2485a9facc67c28b9
574924 F20101124_AACZYT eryarsoy_e_Page_039.jp2
56b3a93e76c61c4abbede04f26cc8b04
1b62b63afb2f3bff1e18e9eba261e3f160e4d7f7
20294 F20101124_AACZTW eryarsoy_e_Page_043.QC.jpg
54e829abeafbb15b4aba8b3e7e14944e
bdb9408f2b5a34f502bf86297ddbd31edee5eb70
4860 F20101124_AADAJT eryarsoy_e_Page_061thm.jpg
cbed36ead0eeed2b175a96b901c3ae0f
e78eb0791ab689ff0375a86a87185602bbfd7abc
F20101124_AADAEW eryarsoy_e_Page_119.tif
8688cb88508aca8a107a3f321a182e2c
0f6dac09e324fbb9a30bfd0113bea22e5f138f38
82949 F20101124_AACZYU eryarsoy_e_Page_040.jp2
6992bde4000157c7c6801a59e91da8d7
88aa1f107d7bcf9ff960aebbf2cf40de35bca19b
95875 F20101124_AACZTX eryarsoy_e_Page_073.jp2
dc4c621a6b888152e534d16a27f149d5
a3129e77b1a72df9fd2cf77e6037c35e815c9756



PAGE 1

USING DOMAIN-SPECIFIC KNOWLEDGE IN SUPPORT VECTOR MACHINES By ENES ERYARSOY A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2005

PAGE 2

Copyright 2005 by Enes Eryarsoy

PAGE 3

to my beloved wife and both of our families….

PAGE 4

ACKNOWLEDGMENTS

It is a rarity that an author can claim the entire work for a dissertation. I would like to thank several great people who made me feel special to be around them throughout the isolating experience of writing this dissertation. Without them this work would have been impossible. Inadvertent misinterpretations and mistakes are solely due to my negligence and possibly stubbornness. I am most appreciative of the support of my supervisory committee chair Gary J. Koehler and cochair Haldun Aytug (a.k.a. the two amigos). They have been truly helpful and kind, especially whenever I barged into their offices. I extend my thanks and sincere appreciation to Gary. I have enjoyed his support, friendship, guidance, and understanding since the day I met him. He has been my hero in academia. I thank Haldun Aytug for his brilliant ideas, his eagerness to pursue them, and his patience and support. I also thank Dr. Selwyn Piramuthu, Dr. Richard E. Newman, Dr. Anurag Agarwal, Dr. Praveen Pathak, Dr. Selcuk Erenguc, and all other Decision and Information Sciences (D.I.S.) faculty for their collaboration and support during my Ph.D. study. I would also like to thank my parents for their support. Most of all, I would like to thank my beloved wife Meziyet, for her love and support; and my parents, my brother and my sister for always being there for me. Last but not least, I thank my dearest friends Enes Calik, and Avni and Evrim Argun, for their invaluable friendship.

PAGE 5

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS AND ABBREVIATIONS
ABSTRACT

CHAPTER

1 INTRODUCTION
    Background
    No Free Lunch (NFL) Theorems and Domain Knowledge
    Motivation and Summary

2 LEARNING THEORY AND SVM CLASSIFIERS
    Basics of Learning Theory
        Learning Theory
            Probably Approximately Correct (PAC) Learning
            Vapnik-Chervonenkis (VC) Theory
            Structural and Empirical Risk Minimization
        Generalization Theory for Linear Classifiers
        Effective VC-Dimension
        Covering Numbers
        Fat Shattering
        Luckiness Framework
        Luckiness Framework for Maximal Margin Hyperplanes
    Support Vector Machines Classification

3 DOMAIN SPECIFIC KNOWLEDGE WITH SUPPORT VECTOR MACHINES
    Review on Relevant SVM Literature with Domain Specific Knowledge
    An Alternative Approach to Incorporate Domain Specific Knowledge in Support Vector Machines

PAGE 6

        Characterizing the Input Space
        Characterizing the Input Space with Box Constraints
        Characterizing the Input Space with a Polytope

4 ELLIPSOID METHOD
    Ellipsoid Method Introduction
    The Löwner-John Ellipsoid
    The Ellipsoid Method
        Optimizing Linear Functions over Ellipsoids
        Different Ellipsoid Methods
        Different Ellipsoid Methods' Formulation
        Shallow Cut Ellipsoid Method

5 COMPUTATIONAL ANALYSIS FOR DOMAIN SPECIFIC KNOWLEDGE WITH SUPPORT VECTOR MACHINES
    Overview
    Comparative Numerical Analysis for Box-Constraints and Polytopes
        Polytopes versus Hyper-rectangles
        Generating Polytopes
    Using Ellipsoid Method to Upper Bound the Polytope Diameter
        Central Cut Ellipsoid Method
        Maximum Violated Constraint
        Shallow/Deep Cuts
        Proceeding After a Feasible Point is Found
    Fat Shattering Dimension and Luckiness Framework

6 SUMMARY AND CONCLUSIONS

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

PAGE 7

LIST OF TABLES

3-1 Generalization error bound performances under different settings. Numbers in bold show that the bound is too loose to provide any information.
5-1 Summary of random polytopes generated in 2, 3 and 5 dimensions.
5-2 The central cut ellipsoid method application on 2 and 3 dimensional datasets.
5-3 The central cut ellipsoid method application on 2 and 3 dimensional datasets according to maximum violated constraint selection.
5-4 The ellipsoid method with deep cuts for 2 and 3 dimensional datasets.
5-5 With deep cuts the method converges faster and the generated ellipsoids are of smaller size.
5-6 Proceeding after a feasible point is found by randomly choosing a violated constraint and assigning the maximum value that can be assigned.
5-7 Proceeding after a feasible point is found by choosing the most violated constraint and assigning the maximum value that can be assigned.
5-8 The performance comparison for a 3-dimensional input space with 5 constraints.

PAGE 8

LIST OF FIGURES

2.1 Consistency of the learning process.
2.2 Risk functional for structure.
2.3 Two-dimensional separating hyperplane (w, b), w ∈ ℝ², where w' = (w_1 w_2).
3-1 Using box constraints for characterizing the input space.
4-1 The L-J ellipsoid for a polytope.
4-2 When maximizing a linear function c'x over an ellipsoid E(D, d), the center of the ellipsoid lies between z_min and z_max.
4-3 In every iteration the polytope is contained in a smaller ellipsoid.
5-1 The L-J approximation.
5-2 The central-cut ellipsoid method illustrated on a 3-dimensional polytope which has a diameter of 3.77.
5-3 Volume reduction does not necessarily reduce the diameter.
5-4 The ellipsoid generated with the cut has a diameter of 330 and its volume is about 56.9% of the final ellipsoid in A.
5-5 The shallow cut method can continue even after the feasible region is found.
5-6 The -approximate L-J ellipsoid for the polytope.

PAGE 9

LIST OF SYMBOLS AND ABBREVIATIONS

X: Input space
x ∈ X: Input vector
x⁺, x⁻: Input vector associated with a positive or negative label, respectively
Y: Output space
y: Output label, y ∈ {-1, 1}. Note that y_i denotes the output label of a specific input vector x_i
ℝ: Real numbers
ℝⁿ: n-dimensional Euclidean space, or real vector space
H: Hypothesis space or hypothesis class. Note that H_i denotes a specific hypothesis space
S: Training sample set
h ∈ H: Classification function that maps h : X → Y in general
h_s ∈ H: Selected hypothesis for the classification task
𝒟: Distribution
ε: Error probability
1 - δ: Confidence level
l: Sample size
‖·‖: 2-norm
|A|: Cardinality of a set A, or absolute value of A if A ∈ ℝ
P(H): Cardinality of the largest set shattered by H

PAGE 10

d: VC-dimension of H. Note that d_i denotes the VC-dimension of H_i
n: Dimension of the input space
B_H: Growth function
C_m^n: The number of ways of taking n things m at a time
e: Constant, base of the natural logarithm
P, Q: Different events used in context
Pr: Probability function
p_i: Probability of event i
k: Number of errors made on training sample S
W: Parameter set for a classifier function; e.g., in support vector machines, W ⊆ ℝⁿ
w ∈ W: Parameter of a classifier function
L_F: Loss function
R_F: Risk function
F: Distribution function
log: Logarithm to base 2
ln: Natural logarithm
𝓛: Class of all linear learning machines over ℝⁿ
F, B: Classes of linear functions
f ∈ F, g ∈ B: Linear classification functions
b: Bias, b ∈ ℝ
𝒩: Covering number
γ: Margin. Note that γ_i denotes the margin of a specific input vector x_i

PAGE 11

m_S: Margin of a function on sample set S
R: Radius of a ball containing all input points
H_1 ⊆ H_2 ⊆ ... ⊆ H_n ⊆ ...: Hierarchically nested sequence of countable hypothesis classes
HI (Hilbert space): A Hilbert space is a vector space with an inner product ⟨x, y⟩ such that the norm defined by ‖x‖ = √⟨x, x⟩ transforms HI into a metric space
A: An arbitrary set
α: A nonnegative number (used both in the luckiness framework and as a Lagrangian multiplier in the support vector machines formulation)
le: Level of a function
U: Unluckiness function
L: Luckiness function
L_a: Lagrangian function
S, Ŝ: Two sets of vectors of the form S = (x'_1, ..., x'_l) and Ŝ = (x'_{l+1}, ..., x'_{2l}), respectively
η, φ: Two functions of the form η(le, L, δ) and φ(le, L, δ)
{f_R | R ∈ ℝ}: A class of functions f_R with VC-dimension of 1
sv: Set of support vectors
v: A non-negative integer
P_n: An n-dimensional polytope
∅: Empty set
SD: Space diagonal
HR: Hyper-rectangle
B: An n-dimensional ball

PAGE 12

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

USING DOMAIN-SPECIFIC KNOWLEDGE IN SUPPORT VECTOR MACHINES

By Enes Eryarsoy

August 2005

Chair: Gary J. Koehler
Cochair: Haldun Aytug
Major Department: Decision and Information Sciences

Our goal was to develop a methodology that incorporates domain-specific knowledge for pattern-recognition problems. As different classifier algorithms capture pertinent domain knowledge in different ways, we specifically examined methodology for incorporating Support Vector Machines, a novel learning algorithm in pattern-recognition. We began with a detailed literature review on learning theory and Support Vector Machines (SVM), an efficient and mathematically elegant technique for machine learning. We incorporated prior knowledge to enhance generalization error bounds on learning with SVM. First, we considered prior knowledge about the domain by incorporating upper and lower bounds of attributes. Then, we considered a more general framework that allowed us to encode prior knowledge to linear constraints formed by attributes. Finally, by comprehensive comparative numerical analysis we compared the

PAGE 13

effectiveness of incorporating domain knowledge versus using pure SVM error bounds obtained without incorporating domain knowledge.

PAGE 14

CHAPTER 1
INTRODUCTION

Background

"Intellect is the part of human soul which knows, as distinguished from the power to feel" (The American Heritage Dictionary, 2000). The roots of creating a human-made intelligence can be traced back to Greek mythology, where the gods made artificial persons to carry out certain tasks. In the middle of the 20th century a new stream of research emerged, aimed at comprehending human intellect. The new research paradigm, called artificial intelligence (AI), received much attention. Starting with Turing machines, many real-world problems have been addressed using functions not provided by the human brain. As the first game-playing program, 'Checkers', led to the famous Deep Blue that won the chess re-match against world champion Kasparov [1], the field of artificial intelligence has evolved.

Looking for a mechanistic way to mimic the human brain's reasoning, AI research bifurcated into two main streams. The first stream (computational neuroscience, or neuroinformatics) focuses on unraveling the truly complex relationship between the structure and functions of the brain by trying to decipher a huge black box that starts from structures as small as molecules and ends with functions and behaviors that the human brain exhibits.

[1] The original Checkers program was written in 1956. Deep Blue won the re-match against Kasparov in 1997.

PAGE 15

The second stream focused on mimicking the functions of the human brain and replicating these rather than understanding how the brain operates. Machine learning, which is a sub-area of AI, studies computer algorithms that can accomplish tasks requiring intelligence. More specifically, it studies developing algorithms that can learn from experience to make inferences about future cases. A large variety of real-world problems (face detection, voice recognition, automatic target recognition, cheminformatics, bioinformatics, and 'pattern recognition' problems in general) fall into this category. An algorithm that carries out a learning task is called a "learning machine". In machine learning, the experience is introduced to the learning machine in the form of a set of training examples. Typically, machine learning algorithms use this set to train and calibrate the learning machine so that it can infer future cases or unseen situations. For the pattern-recognition (a.k.a. classification) task, the learning ability of a learning machine corresponds to how well the algorithm is calibrated to classify unseen examples (i.e., examples not included in the training set). This ability is called the "generalization ability of the learning machine". An ideal learning machine would require little effort (i.e., few training points and a fast training process) and would produce results of good quality (i.e., good generalization ability).

No Free Lunch (NFL) Theorems and Domain Knowledge

Many classification algorithms such as neural networks (Hristev, 1998), decision trees (Wagacha, 2003), and support vector machines (Vapnik, 1998) have been used on a number of pattern-recognition problems. However, a classification algorithm that performs extremely well on one task may fail to perform well on another. In the literature, this phenomenon is captured by the "No free lunch" theorems (Wolpert and

PAGE 16

MacReady, 1995). They reflect the idea that for every positive training situation there is a negative training situation; this ideology resembles David Hume's skepticism (Freddoso, 2005): 'our expectation that the future will be similar to the past doesn't have a reasonable basis but is based on belief only'. In skepticism, even from the greatest number of observations we cannot generate a rule, nor can we predict a consequence from any of the known attributes. Using NFL theorems, Wolpert and MacReady (1995) analyzed different algorithms. The NFL theorems investigate the performance of various algorithms in areas such as optimization, search, and learning. Generally, the structure of the solution space is crucial in selecting a good methodology.

Wolpert (1996a, 2001) develops NFL theorems for learning. In these, he claims that a learning algorithm cannot guarantee a good generalization ability of the machine by merely relying on low misclassification rates resulting from errors made on the training set, attributes of the learning machine, and a large training set. No algorithm, when averaged across all problem instances, can outperform random search.

From our viewpoint, the NFL theorems suggest that taking as much information as possible about the problem domain may enhance learning. Therefore, regardless of the classification methodology used, incorporating prior assumptions as well as domain knowledge in machine learning is of importance.

The idea of capturing domain knowledge complements both the NFL theorems and the AI community. The AI community attributes the popular phrase 'knowledge is power' to Dr. Edward A. Feigenbaum, a pioneer in AI (Augier and Vendele, 2002). The phrase

PAGE 17

originated with Francis Bacon, an English philosopher who laid out a complex methodology for scientific inquiry known as the 'Baconian Method'.

'Bacon suggests that you draw up a list of all things in which the phenomena you are trying to explain occurs, as well as a list of things in which it does not occur. Then you rank your lists according to the degree in which the phenomena occurs in each one. Then you should be able to deduce what factors match the occurrence of the phenomena in one list and don't occur in the other list, and also what factors change in accordance with the way the data had been ranked. From this Bacon concludes you should be able to deduce by good inductive reasoning what is the form underlying the phenomena.' (Wikipedia, 2005)

The phrase "knowledge is power" in the AI context has a similar, but distinct meaning. In our context, it means that capturing knowledge about the domain of the pattern-recognition problem and incorporating this into the learning process are the actual sources of power of the learning machines.

There have been many studies addressing the issue of incorporating domain knowledge in various methods, such as decision trees or neural networks. Our study focuses on a machine learning method called Support Vector Machines (SVM). The SVM method uses a novel and mathematically elegant technique that relies on the strong theoretical underpinnings of Statistical Learning Theory and Structural Risk Minimization (SRM):

Statistical Learning Theory (SLT): developed mainly by Vapnik and Chervonenkis (1971) and Vapnik (1982), SLT relates statistics to learning processes. The SVM method is mainly used for classification and regression. Given a classification problem, the SVM method uses SLT to calculate optimal classifiers by maximizing the margin between the classes.

Structural Risk Minimization (SRM): For pattern-recognition one chooses the best learning machine to perform the task. Once the learning machine is chosen, its

PAGE 18

generalization performance can be computed based on the training set and SLT. We use SRM to choose the learning machine that best minimizes the generalization error.

For classification problems, to maximize the generalization ability, SVM minimizes a bound on the generalization error by solving a convex optimization problem that maximizes the margin between the separated classes. Since the problem is a convex minimization problem, optimality is guaranteed; for other learning machines (such as neural nets and decision trees), known approaches do not guarantee optimality.

Our main objective is to develop a methodology that incorporates domain-specific knowledge for learning. We quantify its impact on learning accuracy and the development of SVM. Different classifier algorithms capture pertinent domain knowledge in different ways. In the SVM literature, domain knowledge is usually captured (loosely speaking) via kernel functions. Kernel functions map all the instances into another space (called the feature space) in which the classification task is performed. In the feature space the task can often be accomplished more easily. However, most studies have used generic kernels such as Gaussian, Polynomial, or Radial Basis Function (RBF) kernels to incorporate domain knowledge, but lack any specific reason why they might perform well. The fact that these kernels seem to help supports the optimistic view that they incorporate some sort of domain knowledge.

One problem with using domain knowledge is that kernel choice highly affects learning performance. The problem with using generic kernels is that no a priori knowledge is used during kernel selection (hence our use of 'optimistic' domain knowledge). Generating kernels tailored to the problem at hand often requires trial-and-error methods, and a priori knowledge about the problem domain may be lacking or not directly used.
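As a concrete, hedged illustration of the point about kernel choice, the short sketch below trains the same SVM classifier with three generic kernels on synthetic data; scikit-learn, the dataset, and all parameter values are illustrative assumptions and are not part of the dissertation's experiments.

    # A minimal sketch (assumed setup, not from the dissertation): the same SVM
    # trained with generic kernels, showing how strongly kernel choice can matter.
    from sklearn.datasets import make_circles
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic, non-linearly separable data standing in for a real problem domain.
    X, y = make_circles(n_samples=400, factor=0.4, noise=0.1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for kernel in ("linear", "poly", "rbf"):
        clf = SVC(kernel=kernel, C=1.0)   # generic kernel, no domain knowledge used
        clf.fit(X_train, y_train)
        print(kernel, "test accuracy:", clf.score(X_test, y_test))

On such data the linear kernel typically performs poorly while the RBF kernel does well, even though neither choice was informed by any prior knowledge of the domain.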

PAGE 19

Another problem is with trying to capture domain knowledge through sample points. The limited number of training points available for training may inhibit learning. In learning, an insufficient training set size poses serious problems. Niyogi et al. (1998) have addressed the issue of generating virtual examples to increase the effective training set size. They show that in some cases creating virtual examples is mathematically equivalent to capturing prior knowledge. Fung et al. (2001, 2003) captured prior knowledge (in the form of polyhedral sets) and used it in the formulation of a linear SVM classifier. No studies have yet captured prior knowledge by characterizing the entire input space (i.e., all of the possible instances on which the classification task is performed).

An attribute represents a characteristic of an entity or an instance that we find of interest. In the knowledge discovery process, the basic classification task is to classify entities, or so-called points, according to their attributes' values. For each attribute there is a specific domain from which a value is assigned.

Motivation and Summary

In this study, we propose an alternative approach to incorporate prior knowledge about the input (or instance) space. We investigate the case where the input space is contained in a convex, closed and bounded set, rather than the classical SVM approach that assumes a ball of radius R contains all the training points. Initially we start with box constraints that contain the input space; then we extend our approach to an arbitrary polytope that contains all potential training points. In our setting, prior knowledge is derived from the convex, closed set that contains the input set, whose shape depends on the problem domain.

We believe that our approach makes two major contributions to the literature. Firstly, we incorporate prior knowledge directly from the input space. That is, knowledge

PAGE 20

is derived immediately from the attributes of the input space. The contribution in this approach, as indicated above, is as follows: a sample set not equal to the whole input space is never complete, as there are always unseen cases which can potentially reveal more insights about the domain. However, working with the input space itself rather than a finite set of points drawn from it helps extract knowledge without worrying about unseen data.

Secondly, given a sample size and a level of confidence, by utilizing prior knowledge about the input space we potentially reduce the generalization error. Alternatively, this also means that by incorporating domain-specific knowledge, the sample size needed to accomplish the learning task with a pre-defined level of confidence may be reduced.

The remainder of this dissertation is organized as follows. In Chapter 2, we provide a comprehensive review of learning theory and the current literature on support vector machines. In Chapter 3, a literature review on domain knowledge in learning is provided and an alternative way to incorporate domain knowledge in support vector machines is proposed. Chapter 4 discusses the ellipsoid method and its applicability to the problem. Chapter 5 is dedicated to computational analysis of our approach to incorporating the domain knowledge.

PAGE 21

CHAPTER 2
LEARNING THEORY AND SVM CLASSIFIERS

Detecting regularities and not-so-obvious patterns in data that are beneficial from an organization's or an individual's point of view is called the Knowledge Discovery process. As the process is usually carried out on sizable databases, the "Knowledge Discovery" and "Knowledge Discovery in Databases" (KDD) concepts usually refer to the same activity. In the database literature, the KDD process is composed of several iteratively repeated subprocesses. The knowledge discovery process is defined as "the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data" (Fayyad et al., 1996). In their study, they also include their version of the steps for KDD, which can be outlined as:

Learning the application domain
Creating a target dataset
Data cleaning and pre-processing: noise in the data is reduced, irrelevant data are removed, and missing data are handled.
Data reduction and projection: data representation is realized by carrying out dimensionality reduction via elimination of highly correlated variables.
Choosing the function of data mining: deciding on the kind of operation to be performed on the data, such as classification, regression, summary, clustering, or time series analysis.
Choosing the data mining algorithms: deciding which models and what parameters are appropriate for the KDD process.
Data mining: looking for patterns in the data by using a data mining algorithm for a selected purpose.

PAGE 22

Interpretation: interpreting the results yielded by the data mining process, and revising and remedying previous processes if necessary.
Using discovered knowledge: utilizing the knowledge derived through the previous steps in order to enhance system performance.

Of all the steps above, data mining is probably the most popular. In fact, the term "Data Mining" sometimes amounts to the whole process of Knowledge Discovery (Kohavi and Provost, 1998). However, Data Mining usually refers to quantitative and algorithmic approaches and is also known as Machine Learning and pattern-recognition. Kohavi and Provost (1998) provide their definition for Machine Learning as "Machine Learning is the field of scientific study that concentrates on induction algorithms and on other algorithms that can be said to learn". An algorithm that carries out a learning task is called a "learning machine". In this study we are especially interested in "support vector machines" as learning machines, and as indicated in the introduction, SVM relies on strong theoretical underpinnings in learning theory and optimization theory. In the following section we briefly discuss learning theory and relate it to support vector machines.

Basics of Learning Theory

Valiant (1984, p. 1134) defines learning as follows:

"A program for performing a task has been acquired by learning if it has been acquired by any means other than explicit programming. Among human skills some clearly appear to have genetically preprogrammed elements while some others consist of executing an explicit sequence of instructions that has been memorized. There remains a large area of skill acquisition where no such explicit programming is identifiable. It is this area that we describe here as learning."

In statistics, Learning Theory refers to the computational analysis of machine learning techniques (WordiQ Encyclopedia, 2005). Different learning theories exist in the literature, such as Probably Approximately Correct Learning (PAC) (Valiant, 1984), Bayesian

PAGE 23

Learning, and Statistical Learning theories. These learning theories not only deal with analyzing existing machine learning algorithms, but also inspire the creation of new algorithms. For example, the Boosting algorithm (Schapire, 1990) and Support Vector Machines are machine learning algorithms based on PAC Learning and Statistical Learning Theory, respectively.

Learning Theory

In the first section of this chapter we provided a general introduction to Learning Theory and Machine Learning. As we noted, many machine learning algorithms have been developed and applied to different learning tasks such as the ones we mentioned earlier. However, in this study our focus is Support Vector Machines as a Machine Learning algorithm. Therefore, we will limit ourselves to the aspects of Learning Theory related to Support Vector Machines.

Probably Approximately Correct (PAC) Learning

Valiant's paper "A Theory of the Learnable" (1984) laid the foundation of Computational Learning Theory (COLT), which focuses on the mathematical evaluation and design of algorithms by considering sample data drawn independently and identically distributed (i.i.d.) from an unknown but fixed distribution to study problems of prediction in learning theory.

Let X ⊆ ℝⁿ be the input space and also let every point in the input space have a binary output label, y, assigned. We assume a fixed but unknown sampling distribution 𝒟 defined over the input-output pairs. In PAC Learning, the basic classification task is to choose a function from a specified hypothesis space so that the learning is approximately correct, meaning that the probability of misclassifications made by the classification

PAGE 24

function (or hypothesis) over the input space is bounded by a pre-defined confidence level.

Formally, let (x, y) ∈ X × {-1, 1} be an input-output pair from an input space X; (x, y) is called an example. Let h be a classification function (or hypothesis) from a hypothesis space H. Also let S = ((x_1, y_1), ..., (x_l, y_l)) be a collection of l examples (i.e., the sample) drawn i.i.d. from an unknown but fixed distribution 𝒟. Then, the error of hypothesis h can be defined as:

    err(h) = 𝒟{(x, y) : h(x) ≠ y}.

The error term measures the expected cost of an error and is usually referred to as the risk functional. Here the cost function is an indicator function. When studying problems of prediction in learning theory, PAC Learning considers bounding the error rate as one of the prime tasks. A second goal is to do so in polynomial time. There are several factors affecting the risk functional, such as the selected hypothesis for the classification task (h_s), the richness of the hypothesis space, the desired confidence level 1 - δ, and the number of training examples available (sample complexity). For any h_s ∈ H the pac bound can be expressed as ε(l, H, δ). It is clear that the error bound is a function of sample complexity, hypothesis space complexity and the confidence level 1 - δ. A more formal way of writing a pac bound is as follows:

    𝒟^l{S : err(h_s) > ε(l, H, δ)} < δ.

This states that given a sample size l, the probability that the risk functional is greater than the pac bound ε(l, H, δ) is very small (if δ is small). However, in order to go beyond a definition, a further analysis of the bound on the risk functional is needed. First of

PAGE 25

all, the bound is a function of sample complexity and the hypothesis space. Therefore sample size determination and the characteristics of the hypothesis space have an impact on the bound. Secondly, a distribution-free bound is doomed to be more pessimistic and looser than a bound for a specific benign distribution. These issues must be addressed in order to bound the risk functional.

Let |H| denote the cardinality of the hypothesis space (|H| = ∞ if the space is infinite). During the learning process a sample set is processed by the given learning algorithm to output an h_s ∈ H such that err(h_s) ≤ ε(l, H, δ), if possible. A hypothesis h is called consistent with the sample if no errors are made on the sample set. The probability that a hypothesis h correctly classifies all of S but has error greater than ε can be bounded as:

    𝒟^l{S : h consistent and err(h) > ε(l, H, δ)} ≤ (1 - ε)^l ≤ e^(-εl). [2]

With |H| being finite, if we have at least one consistent hypothesis h_s from hypothesis class H that correctly classifies sample set S, then by using the cardinality of the hypothesis space we can bound the probability as:

    𝒟^l{S : h_s consistent and err(h_s) > ε(l, H, δ)} ≤ |H| e^(-εl).

In turn we can bound this by δ, giving:

    𝒟^l{S : h_s consistent and err(h_s) > ε(l, H, δ)} ≤ |H| e^(-εl) ≤ δ.

[2] e^x = lim_{n→∞}(1 + x/n)^n = 1 + x/1! + x²/2! + ... + xⁿ/n! + ...; since 1 - ε ≤ e^(-ε), it follows that (1 - ε)^l ≤ e^(-εl).
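To make the role of |H|, l, and δ in these bounds tangible, here is a small numerical sketch; it is not part of the dissertation, and the particular values of |H| and δ are arbitrary assumptions chosen only for illustration.

    # A small sketch (assumed values, not from the dissertation): the finite-|H|
    # pac bound. For a consistent hypothesis, Pr[err > eps] <= |H| * exp(-eps * l),
    # and requiring |H| * exp(-eps * l) <= delta gives eps = (1/l) * ln(|H|/delta).
    import math

    H_size = 2 ** 20          # assumed cardinality of the hypothesis space
    delta = 0.05              # allowed failure probability (confidence 1 - delta)

    for l in (100, 1000, 10000):
        eps = math.log(H_size / delta) / l
        print(f"l = {l:6d}  ->  error bound eps = {eps:.4f}")

    # Conversely, the sample size sufficient for a target error bound eps:
    eps_target = 0.05
    l_needed = math.ceil(math.log(H_size / delta) / eps_target)
    print("samples sufficient for eps =", eps_target, ":", l_needed)

The printed bounds shrink roughly like 1/l, which is the behavior the closed-form expression derived next makes explicit.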


In order to satisfy the bound, the error must satisfy $\varepsilon \geq \frac{1}{l}\ln\frac{|H|}{\delta}$, and therefore we set

$\varepsilon(l, H, \delta) = \frac{1}{l}\ln\frac{|H|}{\delta}.$

This expression indicates that, as expected, the error bound shrinks as the sample size grows and as the confidence requirement $1 - \delta$ is relaxed. More importantly, it also indicates that for an overly complex hypothesis space (i.e., if $|H|$ is large), even when the hypothesis is consistent with the sample set, the error bound may be high, and there is therefore a higher overfitting risk. Moreover, the cardinality of the hypothesis space must be finite for the bound to hold. In general, however, the cardinality of the hypothesis space is neither necessarily small nor finite. For instance, Valiant's original paper (Valiant, 1984) considers Boolean function mappings, so the hypothesis space is finite. In many other cases, such as learning with linear classifiers with real weight vectors, the cardinality of the hypothesis class is infinite.

Vapnik-Chervonenkis (VC) Theory

To make the learning process more feasible and to extend pac theory over hypothesis classes with infinite cardinalities, the concept "cardinality of $H$" is extended to the "expressive power of $H$" by VC Theory, first introduced by Vapnik and Chervonenkis (1971). Before we formally state VC Theory, let us introduce the concepts of "shattering", "growth function", and "VC-dimension". Let $H$ be the hypothesis space defined on instance space $X$ and let $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_l, y_l)\}$ be a sample of size $l$. A hypothesis class $H$ is said to shatter a


set of points $S' = \{\mathbf{x}_1, \ldots, \mathbf{x}_l\}$ if and only if, for every possible assignment of output labels to these points, there exists a hypothesis that labels all of them accordingly.

Definition 2.1 (Shattering): A set of instances $S'$ is shattered by hypothesis space $H$ if and only if for every binary classification $y \in \{-1, 1\}$ of the instances in $S'$ there exists some hypothesis in $H$ consistent with it (Mitchell, 1997).

For real-valued functions, Gurvits (2001, p. 82) defines shattering as follows: Let $H$ be a class of real-valued functions on a domain $X$. We say that $H$ shatters a set $S' \subseteq X$ if there exists a function $h' : S' \to \mathbb{R}$ such that for every subset $E \subseteq S'$ there exists some function $h_E \in H$ satisfying

$h_E(\mathbf{x}) \geq h'(\mathbf{x})$ for every $\mathbf{x} \in E$,
$h_E(\mathbf{x}) < h'(\mathbf{x})$ for every $\mathbf{x} \in S' \setminus E$.

The pseudo-dimension of $H$, denoted $P(H)$, is the cardinality of the largest set that is shattered by $H$ (Gurvits, 2001, p. 82).

The VC Dimension of a hypothesis space $H$ ($VC\dim(H) = d$) is the cardinality of the largest set that can be shattered by $H$. If the hypothesis space consists of linear classifiers over $\mathbb{R}^n$, then given any set $S'$ of $n + 1$ points in general position (not lying in an $(n - 1)$-dimensional affine subspace), there exists a function in $H$ that consistently labels $S'$, whatever the labeling of the training points in $S'$. Also, for any set of $l > n + 1$ inputs there is at least one labeling that cannot be realized by any function in $H$. Thus, the VC Dimension for linear classifiers is $n + 1$.
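The claim that $n + 1$ points in general position can be shattered by linear classifiers can be checked numerically for small $n$. The sketch below is our own illustration (not from the cited sources): for each of the $2^{n+1}$ labelings it solves a small linear system for an affine function that reproduces the labels exactly; random Gaussian points are in general position with probability one.

import itertools
import numpy as np

def can_shatter(points):
    """Check that every +/-1 labeling of the given points is realized by some
    affine function f(x) = <w, x> + b (solve <w, x_i> + b = y_i exactly)."""
    l, n = points.shape
    A = np.hstack([points, np.ones((l, 1))])          # unknowns are (w, b)
    for labels in itertools.product([-1.0, 1.0], repeat=l):
        sol, _res, *_ = np.linalg.lstsq(A, np.array(labels), rcond=None)
        if not np.array_equal(np.sign(A @ sol), np.array(labels)):
            return False
    return True

n = 3
rng = np.random.default_rng(0)
pts = rng.standard_normal((n + 1, n))   # n + 1 random points, general position a.s.
print("n + 1 =", n + 1, "points shattered by linear classifiers:", can_shatter(pts))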


The complexity of the hypothesis space, or the expressive power of $H$ over a sample of size $l$, can be measured by a "growth function" $B_H(l)$: the larger $B_H(l)$, the more behaviors $H$ can realize on a set of $l$ points. For $h(\mathbf{x}) \in \{-1, 1\}$ the growth function is bounded as

$B_H(l) = \max_{\{\mathbf{x}_1, \ldots, \mathbf{x}_l\} \subseteq X}\left|\left\{\left(h(\mathbf{x}_1), \ldots, h(\mathbf{x}_l)\right) : h \in H\right\}\right| \leq 2^l.$

The VC-dimension can then be expressed in terms of the growth function as $VC\dim(H) = d = \max\{l : B_H(l) = 2^l\}$, and it is infinite only when this set is unbounded. A tighter bound on the growth function means a smaller $d$, which means less expressive power is available to shatter $l$ points. Although the growth function above can grow exponentially in $l$, VC Theory shows that it can be bounded by a polynomial expression in $l$ of degree $d = VC\dim(H)$:

Lemma 2.2 (Vapnik, 1971): Let $X$ be an (infinite) set of elements $\mathbf{x}$ and consider samples $\{\mathbf{x}_1, \ldots, \mathbf{x}_l\} \subseteq X$ of size $l$. Denote by $B_H(l)$ the number of distinct labelings of such samples induced by $H$. Then either $B_H(l) = 2^l$ for all $l$, or

$B_H(l) \leq \sum_{i=0}^{d}\binom{l}{i} \leq \left(\frac{el}{d}\right)^d,$

where $d$ is the last integer $l$ for which the equality $B_H(l) = 2^l$ holds and $\binom{l}{i}$ is the number of ways of choosing $i$ items from $l$.
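Before turning to the proof, the two bounds in Lemma 2.2 are easy to compare numerically. The short sketch below is ours and only illustrative: it evaluates the exact sum of binomial coefficients and the looser polynomial bound $(el/d)^d$ for a few sample sizes, with an arbitrary VC-dimension.

import math

def growth_bound_exact(l, d):
    """Sum_{i=0}^{d} C(l, i): the combinatorial bound on the growth function B_H(l)."""
    return sum(math.comb(l, i) for i in range(d + 1))

def growth_bound_poly(l, d):
    """The simpler polynomial bound (e*l/d)^d, valid for l >= d."""
    return (math.e * l / d) ** d

d = 10  # arbitrary illustrative VC-dimension
for l in (10, 50, 100, 1000):
    print(f"l = {l:>4}: sum C(l,i) = {growth_bound_exact(l, d):.3e}"
          f" <= (el/d)^d = {growth_bound_poly(l, d):.3e} <= 2^l = {2**l:.3e}")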


Proof of Lemma 2.2: For $l \leq d$, $B_H(l) = 2^l$, and for $l > d$, $B_H(l) \leq \sum_{i=0}^{d}\binom{l}{i}$. In the case $l > d$ we have $d/l < 1$, and knowing that $B_H(l) \leq \sum_{i=0}^{d}\binom{l}{i}$ we may write

$\sum_{i=0}^{d}\binom{l}{i} \leq \left(\frac{l}{d}\right)^d\sum_{i=0}^{d}\binom{l}{i}\left(\frac{d}{l}\right)^i \leq \left(\frac{l}{d}\right)^d\left(1 + \frac{d}{l}\right)^l \leq \left(\frac{l}{d}\right)^d e^d,$

which yields $B_H(l) \leq \left(\frac{el}{d}\right)^d$.

Note that the structure in the lemma above is also known as "Sauer's Lemma" (Sauer, 1972). Also note that Vapnik (1998) uses the term "growth function" for $\ln B_H(l)$, while we follow the notation of Cristianini and Shawe-Taylor (2000). By using the growth function, we can re-write the pac bound as

$\mathcal{D}^l\{S : h_s \text{ consistent and } err(h_s) > \varepsilon(l, H, \delta)\} \leq B_H(l)\, e^{-\varepsilon l}.$

In order to bound the risk functional in terms of the training error we use the "doubling trick", which applies Chernoff bounds in the learning context by introducing a ghost sample $\tilde{S}$. It can be stated as follows:

Lemma 2.3 (Chernoff, 1952; Cristianini and Shawe-Taylor, 2000): Suppose we draw a set $S$ of $l$ random examples from $\mathcal{D}$. Then the probability of having zero error on the training examples but high true error can be bounded by the probability of having zero error on the training examples but many errors on a second random (ghost) sample $\tilde{S}$. That is,

$\mathcal{D}^l\{S : \exists h \in H,\ err_S(h) = 0,\ err(h) \geq \varepsilon\}$

can be bounded by


$2\,\mathcal{D}^{2l}\left\{S\tilde{S} : \exists h \in H,\ err_S(h) = 0,\ err_{\tilde{S}}(h) \geq \tfrac{\varepsilon l}{2}\right\},$ given $\varepsilon l \geq 2$.

Proof of Lemma 2.3: Assume we draw a sample $S'$ of size $2l$ and randomly divide it into two equal sets $S$ and $\tilde{S}$. Assume that the number of errors made on sample $S'$ is known ($k$), and that the true probability of making an error is at least $\varepsilon$. Using $\varepsilon l \geq 2$, the probability of making at least $\varepsilon l / 2$ errors on a sample of size $l$ is then at least $1/2$. Let $P$ be the event that no errors are made on $S$ (while the true error is at least $\varepsilon$), and let $Q$ be the combined event that $P$ occurs and there are at least $\varepsilon l / 2$ errors on $\tilde{S}$. Then we know that $\Pr(Q \mid P) \geq 1/2$ and $\Pr(Q) = \Pr(Q \mid P)\Pr(P)$. Therefore $\Pr(P) \leq 2\Pr(Q)$. Hence, provided $\varepsilon l \geq 2$, we can write

$\mathcal{D}^l\{S : \exists h \in H,\ err_S(h) = 0,\ err(h) \geq \varepsilon\} \leq 2\,\mathcal{D}^{2l}\left\{S\tilde{S} : \exists h \in H,\ err_S(h) = 0,\ err_{\tilde{S}}(h) \geq \tfrac{\varepsilon l}{2}\right\}.$

Given that $k \geq \varepsilon l / 2$ errors are made on the sample $S'$ of size $2l$, we can bound the probability that all of the $k \geq \varepsilon l / 2$ errors fall in $\tilde{S}$ by

$\prod_{i=0}^{k-1}\frac{l - i}{2l - i} \leq 2^{-k} \leq 2^{-\varepsilon l / 2}.$

Therefore we can write


$\mathcal{D}^{2l}\left\{S\tilde{S} : \exists h \in H,\ err_S(h) = 0,\ err_{\tilde{S}}(h) \geq \tfrac{\varepsilon l}{2}\right\} \leq B_H(2l)\, 2^{-\varepsilon l / 2}.$

Combining the results so far yields the following expression:

$\mathcal{D}^l\{S : \exists h \in H,\ err_S(h) = 0,\ err(h) \geq \varepsilon\} \leq 2\, B_H(2l)\, 2^{-\varepsilon l / 2} \leq 2\left(\frac{2el}{d}\right)^d 2^{-\varepsilon l / 2}.$

Therefore a pac bound for any consistent hypothesis $h$ can be shown to be

$err(h) \leq \varepsilon(l, H, \delta) = \frac{2}{l}\left(d\log_2\frac{2el}{d} + \log_2\frac{2}{\delta}\right).$

Theorem 2.4 (VC-Theorem by Vapnik and Chervonenkis, 1998): Let $H$ be a hypothesis space having VC-dimension $d$. For any probability distribution $\mathcal{D}$ on $X \times \{-1, 1\}$, with probability $1 - \delta$ over $l$ random examples $S$, any hypothesis $h \in H$ that is consistent with $S$ has error no more than

$err(h) \leq \varepsilon(l, H, \delta) = \frac{2}{l}\left(d\log_2\frac{2el}{d} + \log_2\frac{2}{\delta}\right),$

provided $d \leq l$ and $l \geq 2/\varepsilon$ (Cristianini and Shawe-Taylor, 2000, p. 56).

Bounds on generalization error are derived by computing the ratio of misclassifications. Therefore a bound on the number of errors over a set of input points can be calculated by multiplying the generalization error by the cardinality of the input point set. This, in a sense, is similar to calculating the cost of misclassification when the loss due to misclassification is simply the count of misclassifications (in which case the generalization error is said to be the risk functional). In the following subsection we discuss loss functions and mention two basic induction principles that relate the selection of an appropriate hypothesis for classification to risk functionals.
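To get a feel for the magnitudes involved in Theorem 2.4, the sketch below evaluates the bound for a consistent hypothesis as a function of the sample size. It is purely illustrative (ours, not from the cited sources), it assumes the base-2 reading of the logarithms in the bound as reconstructed above, and the values of $d$ and $\delta$ are arbitrary.

import math

def vc_bound_consistent(d, l, delta):
    """Theorem 2.4 style bound for a consistent hypothesis:
    err(h) <= (2/l) * (d * log2(2*e*l/d) + log2(2/delta)), for d <= l, l >= 2/eps."""
    return (2.0 / l) * (d * math.log2(2 * math.e * l / d) + math.log2(2 / delta))

d, delta = 20, 0.05
for l in (1_000, 10_000, 100_000):
    print(f"l = {l:>7}: err(h) <= {vc_bound_consistent(d, l, delta):.4f}")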


19 Structural and Empirica l Risk Minimization We previously defined the risk functional as measuring the expected cost of misclassification. For a classifi cation problem, the input vectors x from a subset X of n are classified by a function of the form ,,,hXW xwxw, where w is a parameter of the classifier function of parameter set W. For yx input-output pairs, a loss function that counts number of errors can be defined as: 1, ,, 0, otherwise hy LFh xw xxw For a distribution function Fy, the learning algorithm disc overs a relationship as a joint distribution function between input and output pairs Fyx by investigating the conditional probability distribution | F yx that outlays their stochastic relationships. Hence, the risk functional to be minimized can be stated as: ,,, R FLFhdFy wxxwx Estimating the risk functional depends on the sample set 11,,...,,llSyy xx and the learning problem is to choose a value *w for which *infWRFRFwww. However, as the probability distribution Fyx is unknown, the solution to the above problem cannot be computed explicitly. As such, an induction principle is usually invoked. Two common principl es that are used to find or approximate to the best classifier are the Empirical Risk Minimizati on (ERM) principle and the Structural Risk Minimization (SRM) principle. The Bayes classi fier is the overall best classifier not restricted to any hypothesis class, and is unknown (Devroye et al., 1996). The objective


of these two principles is to find a classifier that is as close as possible to, or the same as, the Bayes classifier.

Empirical Risk Minimization. Empirical risk minimization (ERM) is an induction principle that substitutes minimizing the empirical risk functional

$R_{emp}(F_{\mathbf{w}}) = \frac{1}{l}\sum_{i=1}^{l} L\left(y_i, h(\mathbf{x}_i, \mathbf{w})\right)$

for minimizing $R(F_{\mathbf{w}}) = \int L\left(y, h(\mathbf{x}, \mathbf{w})\right)\, dF(\mathbf{x}, y)$. In pattern recognition one of the most common approaches to minimizing $R_{emp}(F_{\mathbf{w}})$ is to minimize the number of errors made on the training set. By the Glivenko-Cantelli Theorem we know that when a large number of observations is available ($l$ is large), the empirical distribution computed from the sample set $S$ converges to the actual distribution; that is, with probability one,

$\lim_{l \to \infty}\left|F_l(\mathbf{x}) - F(\mathbf{x})\right| = 0.$

Similarly, in ERM, for a given hypothesis class $H$ the bias in $R_{emp}(F_{\mathbf{w}})$ becomes small as $l$ increases, that is, $\lim_{l \to \infty}\left|R_{emp}(F_{\mathbf{w}}) - R(F_{\mathbf{w}})\right| = 0$, and hypothesis selection is done according to this criterion. It is hoped that a hypothesis found by the ERM principle has low true risk or true error. However, ERM chooses a hypothesis $\mathbf{w}_l = \arg\min_{\mathbf{w}} R_{emp}(F_{\mathbf{w}})$ so that the empirical risk is small, but the resulting classifier is not necessarily close to a Bayes classifier. That is, if we let $R(F^*)$ denote the Bayes error, then even with the decomposition

$R(F_{\mathbf{w}_l}) - R(F^*) = \left[R(F_{\mathbf{w}_l}) - \inf_{h \in H} R(F_{\mathbf{w}})\right] + \left[\inf_{h \in H} R(F_{\mathbf{w}}) - R(F^*)\right],$


$\lim_{l \to \infty}\left[R(F_{\mathbf{w}_l}) - R(F^*)\right] = 0$ is not necessarily true (Devroye et al., 1996).

Definition 2.5 (Consistency for ERM): We say that the principle (method) of empirical risk minimization is consistent for the set of functions $L(y, h(\mathbf{x}, \mathbf{w}))$, $\mathbf{w} \in W$, and for the probability distribution function $F(\mathbf{x}, y)$ if the following two sequences converge in probability to the same limit:

$R(F_{\mathbf{w}_l}) \xrightarrow{P} \inf_{\mathbf{w} \in W} R(F_{\mathbf{w}})$ and $R_{emp}(F_{\mathbf{w}_l}) \xrightarrow{P} \inf_{\mathbf{w} \in W} R(F_{\mathbf{w}})$. (Vapnik, 1998)

The idea of this definition can be seen in Figure 2.1. Note that consistency for a classification rule is also known as asymptotic Bayes-risk efficiency (Devroye et al., 1996).

Figure 2.1: Consistency of the learning process (adapted from Vapnik, 1998, p. 81).

Vapnik and Chervonenkis (1998) give a proof of the consistency of ERM in "The key theorem of learning theory" and show that when $l$ is large, $R_{emp}(F_{\mathbf{w}})$ minimization is consistent. Moreover, Theorem 2.4 can be used to establish upper bounds on the generalization error. Usually, when the hypothesis class is not rich enough to capture the concept of interest, some errors may be made on the training data. Therefore, when a


hypothesis space may not be able to capture all the variations, and possibly some noise, in the data, the VC-Theorem can be adapted to tolerate errors on the training set. The following theorem by Vapnik provides the generalization error for such cases (Cristianini and Shawe-Taylor, 2000, p. 58).

Theorem 2.6: Let $H$ be a hypothesis space having VC-dimension $d$. For any probability distribution $\mathcal{D}$ on $X \times \{-1, 1\}$, with probability $1 - \delta$ over $l$ random examples $S$, any hypothesis $h \in H$ that makes $k$ errors on the training set $S$ has error no more than

$err(h) \leq \varepsilon(l, H, \delta) = \frac{2k}{l} + \frac{4}{l}\left(d\log_2\frac{2el}{d} + \log_2\frac{4}{\delta}\right),$

provided $d \leq l$.

Proof: Vapnik (1998, p. 263).

Blumer et al. (1989) and Ehrenfeucht et al. (1989) also provided lower bounds on generalization errors. For a consistent hypothesis the bound is tight, as the following theorem states.

Theorem 2.7: Let $H$ be a hypothesis space with finite VC-dimension $d \geq 1$. Then for any learning algorithm there exist distributions such that, with probability at least $\delta$ over $l$ random examples, the error of the hypothesis $h$ returned by the algorithm is at least

$\max\left(\frac{d - 1}{32l},\ \frac{1}{l}\ln\frac{1}{\delta}\right)$ (Cristianini and Shawe-Taylor, 2000).

Proof: Ehrenfeucht et al. (1989).

The bound in Theorem 2.6 is tight up to the log factors, as there are lower bounds of the same form (Bartlett and Shawe-Taylor, 1999). However, although the upper bounds hold for any distribution, the lower bounds are valid only for particular distributions (Bartlett and Shawe-Taylor, 1999).
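A similar back-of-the-envelope evaluation can be done for Theorem 2.6 when $k$ training errors are allowed. The sketch below simply evaluates the bound as reconstructed above and is illustrative only; the values of $d$, $l$, and $\delta$ are arbitrary.

import math

def vc_bound_with_errors(k, d, l, delta):
    """Theorem 2.6 style bound: err(h) <= 2k/l + (4/l)*(d*log2(2*e*l/d) + log2(4/delta))."""
    return 2.0 * k / l + (4.0 / l) * (d * math.log2(2 * math.e * l / d) + math.log2(4 / delta))

d, delta, l = 20, 0.05, 10_000
for k in (0, 10, 100, 500):
    print(f"k = {k:>3} training errors: err(h) <= {vc_bound_with_errors(k, d, l, delta):.4f}")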


23 Structural Risk Minimization. As we indicated earlier, when l is largel Emp R F minimization is consistent. Therefore, the ge neralization ability fo r the ERM principle on small sample sets cannot benefit fr om the theorems we have above. Structural Risk Minimization (SRM) is an induction principle that looks for the optimal relationship between sample size (l) available, and the quality of learning by using the function s h chosen from a hypothesis class H and the characteristics of the hypothesis class H (Vapnik, 1998). The SRM principle requires a nested se quence of hypothesis classes. Smaller classes may not be rich enough to ca pture the concept of interest. Let 112,..,:...nnHHHHH be a hierarchically nested sequence of countable hypothesis classes. This is also called a “deco mposable concept class” by Linial et al. (1991). Each hypothesis class jH in the hierarchy is composed of a set of functions ,:,1,...,iijjhhHiH xw with same and finite VC-dimension jd (1......jnddd ). The SRM principle says that if jk is the minimum number of training errors for any hypothesis ,ijhH xw, then for a fixed training set S the number of errors jk for each hypothesis class jH satisfies 1......jnkkk The theorem can potentially be utilized to select a hypothesis class that minimizes the risk functional bound. On the one hand, the numbe r of training errors decreases as the selected hypothesis class of structure gets more complex. However according to Theorem 2.6, for overly complex struct ures the confidence level declines ( increases), and the selected hypothesis class mimics the training set rather th an the input space and overfitting occurs. On the othe r hand, if the selected hypothe sis class is not adequetely


complex to capture the input space, the hypothesis class underfits the input space. Namely, in SRM, when deciding which structure is the best one for the learning task, the Occam's Razor principle corresponds to choosing the smallest number of features (the hypothesis class with smallest VC-dimension) sufficient to explain the data. The tradeoff can be seen in Figure 2.2, adapted from Vapnik (1998, p. 81).

Figure 2.2. Risk functional for a structure (axes: model complexity $h$, from $h_1$ to $h_n$, versus risk; curves: empirical risk, confidence interval, and risk functional; underfitting to the left of the optimal complexity $h^*$ and overfitting to its right).

In SVM, the SRM principle is applied over a hierarchically nested sequence of countable and linear hypothesis classes. In order to select the optimal hypothesis class $H_i$ according to Occam's razor, the tradeoff between empirical risk and confidence interval is measured for hypothesis classes of different complexity, and too much complexity is penalized. An alternative approach to this is the Minimum Description Length (MDL) induction principle (Rissanen, 1978). Basically, the MDL principle sees the learning problem


25 as a data compression problem in which a simple r description of data than the data itself is needed for representation. As Grnwald (2004, p15) states: “According to Rissanen, the goal of induc tive inference should be to `squeeze out as much regularity as possible' from the given data. The main task for statistical inference is to distill the m eaningful information present in the data, i.e. to separate structure (interpreted as the regularity, the `meaningful information') from noise (interpreted as the `acc idental information').” In a recent study, Aytug et al. (2003) s uggested addressing th e tradeoff by using genetic algorithm, and computing MDL fo r model selection accordingly. Grnwald (2004) offers a very comprehensive lite rature review on MML (Minimum Message Length) and MDL. In Vapnik (1998)’s framework, a data gene rator generates data i.i.d. and according to a fixed but unknown distribution, and the da ta generating machine can be infinitely complex. Despite the SRM’s algebraic strength, there is a crucial shortcoming of the original principle. Shave-Taylor et al. (1998, p.1927) quotes from Vapnik: “According to the SRM principle the structur e has to be defined a priori before the training data appear (Vapnik, 1995, p.48)”. When using SRM it is assumed that there is a hypothesis clas s in the structure wherein the target function lie s. This may not always be true. In our “Effective VCDimension” section, we will include the appr oaches that address this shortcoming. Generalization Theory for Linear Classifiers A linear classification function :nfX maps an input vector x into a real value. A label is assigned as follows. If 0 f x then the vector x is assigned to the positive class (i.e., 1 ) or the negative class otherwise. A linear classifier f x can be written as:


$f(\mathbf{x}) = \langle \mathbf{w} \cdot \mathbf{x} \rangle + b = \sum_{i=1}^{n} w_i x_i + b,$

where $(\mathbf{w}, b)$ are the parameters characterizing the linear classifier (Figure 2.3). The vector $\mathbf{w}$ is the weight vector and $b$ is called the bias term.

Figure 2.3. Two-dimensional separating hyperplane $(\mathbf{w}, b)$, $\mathbf{w} \in \mathbb{R}^2$, where $\mathbf{w}' = (w_1, w_2)$.

The linear classifier shown in Figure 2.3 has been used frequently in the literature. Neural network researchers have referred to it as a "perceptron", while statisticians preferred the term "linear discriminant" (Cristianini and Shawe-Taylor, 2000). One of the first learning algorithms for linear classifiers was the perceptron algorithm. The details of the perceptron algorithm are not given here but can be found in Cristianini and Shawe-Taylor (2000) or Vapnik (1998). However, the following definitions and theorems are included due to their importance.
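Since the details of the perceptron algorithm are omitted above, a minimal sketch of the classical primal update rule is given here for concreteness. It is our own illustration, not code from the cited sources, and the toy data are arbitrary.

import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classical primal perceptron for labels y in {-1, +1}.
    Returns (w, b, mistakes); mistakes counts the total number of updates."""
    w, b, mistakes = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # functional margin not positive
                w += yi * xi                     # update rule
                b += yi
                mistakes += 1
                errors += 1
        if errors == 0:                          # converged: data are separated
            break
    return w, b, mistakes

# Toy linearly separable data (arbitrary).
X = np.array([[2.0, 2.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -2.5]])
y = np.array([1, 1, -1, -1])
w, b, m = perceptron(X, y)
print("w =", w, " b =", b, " updates =", m)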


Definition 2.8 (Functional Margin): Let $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_l, y_l)\}$ be a linearly separable set. The functional margin of an example $(\mathbf{x}_i, y_i)$ with respect to a hyperplane $(\mathbf{w}, b)$ is the quantity $\gamma_i = y_i\left(\langle \mathbf{w} \cdot \mathbf{x}_i \rangle + b\right)$. Moreover, the functional margin distribution of a hyperplane with respect to a training set is the distribution of the margins of the examples in $S$. The functional margin of a hyperplane is $\gamma = \min_{\mathbf{x}_i \in S}\gamma_i$.

Let $R$ be the radius of the $n$-dimensional ball that contains all elements of the non-trivial set $S$. Given a linearly separable set of points, the perceptron algorithm achieves the classification task within a pre-specified number of iterations, as the following theorem states.

Theorem 2.9 (Novikoff): Let $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_l, y_l)\}$ be a non-trivial and linearly separable set and let $R = \max_{\mathbf{x}_i \in S}\|\mathbf{x}_i\|$ be the radius of the ball that contains the set. Suppose that there exists a vector $\mathbf{w}^*$ in canonical form such that $\|\mathbf{w}^*\| = 1$ and $y_i\left(\langle \mathbf{w}^* \cdot \mathbf{x}_i \rangle + b^*\right) \geq \gamma$. Then the number of mistakes made by the perceptron algorithm on $S$ is at most $\left(\frac{2R}{\gamma}\right)^2$.

Proof: Different proofs are available. Cristianini and Shawe-Taylor (2000, p. 14) give a proof of the theorem.

Theorem 2.9 states two important aspects of linear classification. One, scaling the data does not change the ratio $R/\gamma$; therefore the algorithm is scale insensitive. Two, the learning task is easier when the functional margin of the separating hyperplane is


28 large. The second aspect, due to Navikoff’s theorem, is that the result will be analogous to those obtained in the following section when we attempt to derive more effective estimations for the VC-dimension of linear hypotheses by taking the observed data into account. Effective VC-Dimension In the Vapnik-Chervonenkis (VC) Theo ry section, Theorem 2.6 showed generalization error can be upper bounded by using the VC-dimension as a complexity measure. In fact, Cristianini and Shawe-Tayl or (2000) state that th e bound is tight up to log factors. For higher VC-dimensions a learne r needs a larger training set to achieve low generalization error, and the bound is independent of distribution and data. That is, in a distribution dependent sense, for some benign di stributions it is possi ble to derive tighter bounds by taking a distribution dependent VC -dimension measure as domain knowledge into account. The following proposition by Cristianini and Sh awe-Taylor (2000) characterizes the relationship between VC -dimension and linear learning machines. Proposition 2.10 (Cristianini and Shawe-Taylor, 2000, p.57): A linear learning machine of dimension n denoted byn, has the following characteristics: Given any set S of 1n training examples in gene ral position (not lying in an 1 n dimensional affine subspace), there exists a function in that consistently classifies S whatever the labelling of the training points in S For any set of 1ln inputs there is at least one classification that cannot be realised by any function in


29 The proposition states that in data indepe ndent sense, the linear learning machines cannot accomplish learning task since, for highe r dimensions, there will be some sets that cannot be classified correctly. Therefore, domain knowledge derived from data may be used to enhance learning. Covering Numbers The error bound derivation formulas de pend on measuring the hypothesis class H from which a hypothesis s h will be drawn. In the Probably Approximately Correct Learning section we mentioned that if the expr essive power of a hypothesis class is large, even though the sample set is consistent w ith the hypothesis, the error bound may be high and there is a higher overfitti ng risk. If the hypothesis class is infinite, however, learning is not possible. In the literature, there are different ways of measuring the “size” of an hypothesis class. For example, in earlier sections we noted that the original pac learning paper by Valiant (1984) considered a Boolean function mapping, hence the number of hypothesis in this hypothesis class was finite. However, w ith linear classifiers of real weight vectors the cardinality of H could not be measured. In the VC -theory section, we introduced the VC-dimension as an alternativ e way of measuring the “expre ssive power” or “size” of an hypothesis class. So far, the functional margin in the training set S did not have any effect on generalization bound computation. However, one would expect that for classification problems with large functional margins the classi fication task would be easier. In order to tighten the error bounds further, we need to measure the expressive power of an hypothesis class by taking the functional margin into account.


30 To measure the expressive power of th e hypothesis class, th e notion of “covering numbers” for measuring massiveness of sets in metric spaces is used. The notion dates back to Kolmogorov and Tihomirov’s original work (1961) and is the backbone of the quest to calculate the effective VC-dimen sion of real-valued function classes by upper bounding covering numbers for them. A metric space is a set with a global distance function, the metric, with which a non-negative distance between every two elem ents in the set can be calculated. A covering number can be informally defined as the smallest number of sets of radius with which a set in the metric space can be covered. The concept of covering number for real-valued functions was further im proved by Bartlett et al. (1997). The formal definition for covering number for real-valued function classes is as follows: Definition 2.11 (Covering Number): Let F be a class of real-valued functions on a domain X A -cover of F with respect to a sequence of inputs 12,,...,lS xxx is a finite set of functions B such that for all f F there exists gB such that 1maxii ilfg xx The size of the smallest such cover is denoted by ,, FS while the covering numbers of F are the values ,,max,,lSXFlFS (Cristianini and Shawe-Taylor, 2000, p.59) If the covering number for a metric space is finite, then the metric space is totally bounded (Anthony and Bartlett, 1994). In a sense, covering numb ers enables one to view a class of functions at a certain resolution In the following section we will see how to


31 incorporate covering numbers in measuring expr essive power of classifier functions by relating to functional margin. Fat Shattering In this section, first we explain the deri vation of generalization error if the sample set has a margin of smf in Theroem 2.12. Theroem 2.12: Consider a real-valued function space F and fix For any probability distribution on 1,1 X with probability 1 over l random examples S, any hypothesis f F that has margin smf on S has error no more than 22 ,,,log,,/2log errflFFSS l provided 2/ l (Cristianini and Sh awe-Taylor, 2000, p.60) Proof: Previously, we noted the doubling trick of Chernoff by introducing a ghost sample S according to which the pac bound that contains margin information can be written as follows. 2::0,, 2::0,, 2l Ss l Ss SSfFerrfmferrf l SSfFerrfmferrf Now, we include covering numbers in this formulation. For a /2cover on sample SS that satisfies max/2ii iSSfg xx, we know that function g has zero error on S, and a margin greater than /2 On the ghost sample S where f can make errors however, the margin bound /2 doesn’t hold. Following Cristianini and Shawe-Tayl or (2000)’s notation, if /2Serrg denotes the


32 number of errors f made, or the number of points for which /2smg then we can write the pac-bound as 2 22::0,, 2 2::0,/2,/2 2l Ss S l Ss Sl SSfFerrfmferrf l SSgBerrgmgerrg Previously, we used the growth func tion to measure the complexity of hypothesis space over the sample size and similarly here, 2 22::0,/2,/222 2el l Ss Sl SSgBerrgmgerrgB By the definition of “covering numbers”, the amount of behaviors that the classifier can have is 2max,, 2lSSXBFSS and the result follows. The elegance of the expression lies in in cluding the covering numbers in the error bound. Similar to growth function, the log of ,,/2 FSS is a measure for VCdimension. The next step is to upper bound the quantity ,,/2 FSS by using the fat shattering dimension. As noted earl ier, Kolmogorov and Tikhomirov (1961) provided the initial framework for such a bound. First, le t us give the definiti on of fat shattering. Definition 2.13 (Fat Shattering): Let F be a class of real-valued functions. We say that a set of points X is shattered by F if there are real numbers ir indexed by i x X such that for all binary vectors y indexed by X there is a function y f F satisfying if 1 if 1ii yi iiry fx ry


33 The fat-shattering dimension Ffat of a the set F is a function from the positive real numbers to the integers which maps a value to the size of the largest shattered set, if this is finite or infinite otherwise. (Shawe-Taylor et al., 1998, p.1929). As the dimension depends on value it is also referred as the scale-sensitive VCdimension Earlier in Sauer’s Lemma, we used the VC-Dimension to bound the growth function. Here, similarly, we will show how th e fat-shattering dimension is used to upper bound the covering numbers at precision. Different stud ies for different learning systems have addressed the issue of upper bound ing covering numbers such as Bartlet et al. (1997). Smola (1998) includes some of the ways to upper bound covering numbers by VC-dimension. The following lemma by Shawe-Taylor et al (1998) is originally due Alon et al. (1997) and it provides a bound on the covering numbers in terms of the fat-shattering dimension. Note that th e original paper assumes o= 0 and c= 1 and therefore is slightly different. Lemma 2.14 (by Shawe-Taylor et al, 1998, p.1929): Let F be a class of functions from X into co and a distribution over X Choose 01 and let /4Fdfat. Then the following holds. 2 224 log,,1loglog elocloc Fld d If the bound on covering numbers is applie d to Theroem 2.12, the generalization error for linear classifiers w ith large margin can be bounded as the following corollary states.


Corollary 2.15: Consider thresholding a real-valued function space $F$ with range $[0, 1]$ and fix $\gamma \in \mathbb{R}^+$. For any probability distribution $\mathcal{D}$ on $X \times \{-1, 1\}$, with probability $1 - \delta$ over $l$ random examples $S$, any hypothesis $f \in F$ that has margin $m_S(f) \geq \gamma$ on $S$ has error no more than

$err(f) \leq \varepsilon(l, F, \delta, \gamma) = \frac{2}{l}\left(d\log_2\frac{8el}{d}\log_2(32l) + \log_2\frac{8l}{\delta}\right),$

provided $l \geq 2/\varepsilon$ and $d \leq l$, where $d = fat_F(\gamma/8)$ (Cristianini and Shawe-Taylor, 2000).

The generalization error, therefore, can be bounded once the fat-shattering dimension is bounded. Here we constrain ourselves to linear learning machines. There have been several studies bounding the fat-shattering dimension for linear learning machines, such as Bartlett and Shawe-Taylor (1999), Gurvits (2001), and Hush and Scovel (2003). The following version is due to Bartlett and Shawe-Taylor (1999).

Theorem 2.16: Suppose that $X$ is a ball of radius $R$ in a Hilbert space $\mathcal{H}$, $X = \{\mathbf{x} \in \mathcal{H} : \|\mathbf{x}\| \leq R\}$, and consider the set of functions $F = \{\mathbf{x} \mapsto \langle \mathbf{w} \cdot \mathbf{x} \rangle : \|\mathbf{w}\| \leq 1,\ \mathbf{x} \in X\}$. Then the fat-shattering dimension can be upper bounded as

$fat_F(\gamma) \leq \left(\frac{R}{\gamma}\right)^2.$

Proof: Bartlett and Shawe-Taylor (1999).

Note that the classifier function above does not include the bias term (i.e., the classifier passes through the origin). The fat-shattering dimension function maps the positive real numbers to the integers, taking the geometric margin to the size of the largest shattered set. Finally, we can state an error bound for the real-valued function space $F$ defined in Theorem 2.16 by combining Theorem 2.16 and Corollary 2.15.
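Theorem 2.16 makes the dependence on the observed margin explicit. As a small illustration of our own, the sketch below computes the $(R/\gamma)^2$ bound on the fat-shattering dimension and contrasts it with the data-independent VC dimension $n + 1$ of linear classifiers; the values of $R$, $n$, and $\gamma$ are arbitrary.

def fat_shattering_bound(R, gamma):
    """Upper bound on fat_F(gamma) for unit-norm linear functions on a ball of
    radius R (Theorem 2.16): fat_F(gamma) <= (R / gamma)**2."""
    return (R / gamma) ** 2

R, n = 100.0, 1_000          # arbitrary illustrative values
for gamma in (5.0, 10.0, 25.0, 50.0):
    fat = fat_shattering_bound(R, gamma)
    print(f"gamma = {gamma:>5}: fat_F(gamma) <= {fat:>8.1f}"
          f"   (VC dimension of linear classifiers: n + 1 = {n + 1})")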


35 largest shattered set. Finally, we can state an error bound for real-valued function space F defined in Theorem 2.16, by combining Theorem 2.16 and Corollary 2.15. Corollary 2.17: Consider thresholding a real-valued function space F with a range 0,1 and fix For any probability distribution on 1,1 X, with probability 1 over l random examples S, if R is the radius of the smallest ball containing them, any hypothesis f F that has margin smf on S has error no more than 2 222264324 ,,,logloglog 8 Rell errflF lR provided 2 l 2 264 R l The above bound has important implications. First of all, by incorporating a margin of the linear classifier one can take advantage of tighteni ng generalization error bound for benign distributions with large margins. Secondly, in Theorem 2.4, the bound depended on the VC-dimension. Unlike the bound dependi ng merely on VC-dimension, this bound is dimension free, and it suggests that for be nign distributions even in infinitely large dimensions it could be possible to derive tight gene ralization bounds. Luckiness Framework According to the original SRM framework by Vapnik, a decomposable concept class structure is determined before the data appear and the target function to be learned lies within the struct ure. The most suitable hypothesis class is selected according to criterion defined in Theorem 2.6. De pending on the VC-dimension and number of errors made, the generalization bound, to an extent, fails to incorporate large margin phenomenon, in the sense that patterns that are further apart from each other would


36 intuitively be easier to recognize. Therefor e, in the previous section the error bound depending on fat-shattering dimension potentially gives an advantage of incorporating the easiness of separability for sample if a large margin is observed. A learner calculates the effective VC-dimension after observing data as he or she chooses between VC-dimension and fat-shattering dimension depending on the “easiness” of the sample. This tradeoff was originally given in Vapnik (1995) for a s ubset of the input space that is contained in a ball. Shawe-Taylor (1998) extends this to bound fat-shattering dime nsion of the class of hyperplanes. The following version is valid for linear classifier functions. Lemma 2.18: Let F be the set of linea r functions with unit weight vectors, and input space X is contained in a ball of radius R Let also the bias (or threshold) be bounded from above by R :1, FbXxwxwx, : X HIRxx, and bR then the fat-shattering function can be bounded from above by 29 min,1FR fatn Proof: Shawe-Taylor et al. (1998, p.1932) The difference in fat-shattering dimensi on bound given above and Theorem 2.16 is due to bias term in the hyperplanes. In a recent study by Hush and Scovel (2003), the result of Lemma 2.18 was improved and both upper and lower bounds on fa t shattering dimension were essentially tightened to equality.


37 Lemma 2.19: Let HI denote prehilbert space of dimension nand F denote the class of functions on X defined by :1, FbXxwxwx with bR Then 225 max,1min,1 4FRR fatn Proof: Hush and Scovel (2003) Shawe-Taylor gives generalization b ound based on Lemma 2.18. Here we will include slightly different version of th e bound by including the results of Hush and Scovel (2003). The generalization bound can now be derived by combining Lemma 2.19 and Corollary 2.15. Definition 2.20 (Set Closure): The closure of a set A is the smallest set containing A Definition 2.21 (Support of a Distribution): Support of a dist ribution is the smallest closed set whose set closure has probability zero. Theorem 2.22: Consider thresholding linear f unctions with real-valued unit weight vectors on input space X Assume that all inputs 1,1 X are drawn identically and independently acco rding to an unknown distribution whose support is contained in a ball of radius R in n centered at the origin. With probability 1 over training set S with l random examples, any hypothesis that has a margin Smf of or bigger has error no more than 228324 ,,,logloglog ell errflFd ld


38 provided 2 l 285 4 R l where 285 4 R d The theorem assumes that the entire input space X is included in a ball, denoted by B a, and the generalization error depends on the radius of B a. On the one hand, the radius of the ball containing the input sp ace may not be calculated as we allow to be any distribution function according to which inputs are independently and identically drawn. On the other hand, we do observe the radius of a ball that contains all of the points in the set S. One may expect that a more meaningf ul generalization error must rely on the latter ball, rather than the ball th at contains the entire input space. This important shortcoming was first a ddressed in Shawe-Ta ylor et al. (1998). Intuitively, with R fixed, a potential generalizati on error bound that depends on the assumption “sample points are cont ained in a ball of radius R centered at the origin” would yield a somewhat looser bound than that of the one depending on the assumption “entire input space is contained in a ball of radius R centered at the origin”. They defined a luckiness framework to r ecover the results of Theore m 2.22 by bounding the ratio of unseen data points that may not be contained in the ball of radius R that contains the sample set. Luckiness Framework for M aximal Margin Hyperplanes By using SRM’s decomposable class stru cture above we described how one can approximate the ideal target function by favor ing functions that perform well on a sample set. Theorem 2.6, SRM’s initial framew ork, and Lemma 2.14 by Alon et al. (1997) together allow us quantify some character istics of our observations by mapping them onto a one measure, generaliza tion error bound, for a particular class in the decomposable


39 structure. Luckiness or Unluckiness fr amework works by measuring luckiness (or unluckiness) of a hypothesis based on sample. Luckiness function, : LSH and unluckiness function : USH provide a means to measure performa nce of a specific hypothesis class in a context called “level of a f unction” defined as follows. Definition 2.23 (Level of a Function): The level of a linear function f F relative to L (or U ) and S is the function ,1,1:,,,,lleSfygFgyLgLf xxx or ,1,1:,,,,lleSfygFgyUgUf xxx. In the following, we state definitions that will prove useful later when we attempt to derive error bounds in the luckiness framework. Definition 2.24 ( -subsequence): An -subsequence of a vector n x is a vector 'x that is obtained by deleting a fraction of coordinates of x. A partitioned vector x y is of the form 11,..,,,..,nm x xyy xysuch that 'xx, and ''x y x y for a partitioned vector, corresponds to deletion of coordinates. For instance, for the vectors 1,...,iini x xx and '' 1,...,lSxx defined for the sample set S, an subsequence of the set that is obtaine d by deleting a fraction of coordinates of a subset of S.. Definition 2.25 (Probable Smoothness): Let '' 1,...,lSxx and '' 12ˆ ,...,llSxx. A luckiness (or unluckiness) function is probably smooth with


40 respect to two functions ,, leL and ,, leL if for every distribution the following holds 2''''ˆˆˆˆ ::0,,,,,,lfFerrflefleLf SSSSSSSSSS or 2''''ˆˆˆˆ ::0,,,,,,lfFerrflefleUf SSSSSSSSSS The idea is to measure probable smoothness from the sample and estimate luckiness or unluckiness from the first ha lf of the double sample to bound the growth function by ,,, leLf S. In other words, if the function f is probably smooth, then there exist at least ,,, leLf S many luckier classes with no errors. Theorem 2.26 (Probable Smoothness Theorem): Suppose i p 1,....,2 il, are positive numbers satisfying 2 11l i ip L is a luckiness f unction for a function class H that is probably smooth wi th respect to functions and l and 01/2 For any target function tH and any distribution with probability 1 over l independent examples S chosen according to if for any i a learner finds a hypothesis h in H with 0Serrh and 1,,,2ilLSh, then the generalization error of h satisfies ,, errhli where 24 ,,1log4,,,log4 4i ip liilLShl lp Proof: Shawe-Taylor et al.(1998, p.1934).


41 The original study (Shawe-Taylor et al., 1998) illustrated the luckiness framework by four examples which included one that considers VC-dimen sion unluckiness (big growth function corresponding to being unlucky), and another that concerns maximal margin hyperplanes. We will only focus on th e latter. First, we summarize their study in their context, and then we attempt to provide error bounds depending on new upper bounds on fat shattering dimension as well as tw o simple algebraic errors we were able to detect in the original study. Definition 2.27 (Unluckiness for Maximal Margin Hyperplanes): If f is a linear threshold function in canonica l form with respect to a sample S we define the an unluckiness as 2 2, R Uf S. Proposition 2.27: Consider a class of linear separating hyperplanes F defined on X as :1, FbXSwxwS with bR and 1maxi il R x. The unluckiness function for maximal margin hyperplanes is probably smooth with 92 ,,, 9Uel lUf U S and with 182 ,,loglog32log422log 2 el lUdlm ld Proof: Shawe-Taylor et al.(1998, p1936) Combining Theorem 2.26 with Proposition 2.27, Shawe-Taylor et al.(1998) obtain the generalization error in luckiness framework. Corollary 2.28: Suppose i p 1,....,2 il are positive numbers satisfying 2 11l i ip, and 01/2 For any target function tF and any distribution


42 with probability 1 over l independent examples S chosen according to which is a probability distribution on X if for any i a learner finds a hypothesis f in F with 0Serrf and 1,,,2ilLSf, then the generalization error of f satisfies ,, errfli and 24 9log2log32log 9 2 ,, 8 1297loglog32log168log4 1297iel Ul Up li l el Ulll U where UUSf is the unluckiness function. (Shawe-Taylor et al.1998, p.1936) Now, we attempt to state rather novel ve rsions of Proposition 2.27 and Corollary 2.28 in our framework as Proposition 2.29 and Corollary 2.30. Proposition 2.29: Consider a class of linear separating hyperplanes F defined on X as :1, FbX SwxwS with bR and 1maxi il R x. The unluckiness function for maximal margin hyperplanes is probably smooth with 5 48 ,,, 45Uel lUf U S and with 22183216 ,,logloglog42log ell lUdl ld Proof: We follow Shawe-Taylor’s proof with new fat shattering bounds. The proof is composed of two parts. In the first part, with probability of 1 1,, lU a fraction of points in the double sa mple for which ball of radius R


43 constraint is violat ed is calculated. In the second part, 2,, lU a fraction of of points in the second half of the sample that are either not corre ctly classified or have margin less than /3 is calculated. The result follows by summing up 1,, llU and 2,, llU First part: Consider a class |RfR with 1 if 0 o/wR R f x x. The class has VC dimension 1. The permutation argument requires for ld and 1 d 1 01il B ll i where d is the VC-dimension. We write the pac statement with confidence level of 1 2 as: 1: consistent and ,, 22l l RRSferrflUBle by using the doubling trick of Cherno ff, we can re-write this as, 2 1 ˆ 1ˆ :0,2:0, 22ll SRRSRR Sl SSerrferrfSerrferrf hence we obtain, 124 log21log l l Note that, 1 in Shawe-Taylor et al. (1998) given as 112 log21log l l seems to be due to an algebraic mistake. Second part:


44 Now, for the points of the double sample th at is contained in side of the ball, we estimate how many are closer to a hyperplane than /3 Shawe-Taylor et al. (1998) uses their equivalent version of Theorem 2.22 a bove to derive a bound with /3 First, we will mention their results in their framework, and then we will immediately indicate the bounds in our framework. Combining the fat shattering dimension in Lemma 2.18, and Lemma 3.9 in Shawe-Taylor et al. (1998, p. 1932) with a confidence level of 1 2 and for /3 the second part immediately follows 2184 loglog32log el dl ld for 221297/1297 dRU 12182 loglog32log422log 22 ll el dlm lld Alternatively, in our framework, we make use of Corollary 2.15, Lemma 2.19, and Theorem 2.22 as follows. With 2 25 4FR fat for 3 with 1 2 confidence level, in Theorem 2.22 we get, 2 228328 logloglog ell d ld where 2 21445 4 R d Combining 1 and 2 we have 22183216 logloglog42log ell dl ld


45 Corollary 2.30: Suppose d p for 1,...,2 dl are positive numbers satisfying 2 11l d dp. Suppose 1 0 2 and the target function tF and is a probability distribution on the input space X With confidence level 1 over the sample set S with l elements, if the sample set is consistent with an hypothesis f that is 0Serrf then the generalization error of f is no more than 94 24 log1loglog2 2 ,, 832 2loglog2log428log4il p li l ell dll d Corollary 5, is the generalization error for the classification problem for which the support of the sample set obtained lies in a ball centered at the origin with radius R However, Theorem 2.22 gives the generali zation error for the problem for which the entire input space is included in side of such a ball. It must be noted that above bound in Corollary 6.26 in Shawe-Taylor et al. (1998) that appears as 24 9log2log32log 9 2 ,, 8 1297loglog32log168log4 1297iel Ul Up li l el Ulll U is different than that we give in Corollary 5. The error term in their framework has a “+1” missing due to an algebraic mistake, which ma kes the error bound 24 9log12log32log 9 2 ,, 8 1297loglog32log168log4 1297iel Ul Up li l el Ulll U


46 The difference between two bounds is due to two main reasons. First, we use the generalization bound theorems based on 1,1 classification given in Cristianini and Shawe-Taylor (2000) as our basis. ShaweTaylor et al. (1998) however, consider 0,1 scheme with thresholding and folding argumen ts integrated to their bounds. Second, we also make use of tighter fat shattering bounds due to Hush and Scovel (2003). Support Vector Machines Classification In the previous section we derived the n ecessary expressions to pick a hypothesis class based on generalization error. We also know how to estimate generalization performance of a linear classifier, given th e hypothesis class. We now in a position to devise an efficient way of l earning and computing those hyperp lanes. In this section we will briefly introduce Support Vector Machin es for the hard margin case where the training set S is assubed to be linearly seperable. For the soft margin case, regression and other variations of SVM we recomme nd Cristianini and Taylor (2000). Following our previously defined notation, let w be a weight vector, x and x be two negative and positive points respectively. Assume that a functional margin of 1 is realized. Therefore, with geometri c margin notation one can write 1 1 b b wx wx which implies 11 2 ww xx www In order to find the maximum margin hyperplane, by using 1w one must solve the following “hard margin” problem. Proposition 2.31: Given a linearly separable set 11,,...,,llSyy xx the maximum margin hyperplane is the solution of the following problem


$\min_{\mathbf{w}, b}\ \langle \mathbf{w} \cdot \mathbf{w} \rangle \quad \text{s.t.} \quad y_i\left(\langle \mathbf{w} \cdot \mathbf{x}_i \rangle + b\right) \geq 1 \ \text{for } i = 1, \ldots, l.$

Proof: Replacing $\max_{\mathbf{w}, b}\gamma$ with $\min_{\mathbf{w}, b}\|\mathbf{w}\|$, the result immediately follows.

It turns out that the dual of the above optimization problem has certain characteristics that make it appealing to work with, which we mention after stating the dual problem. The following proposition gives the dual form of the optimal hyperplane formulation.

Proposition 2.32: Given a linearly separable set $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_l, y_l)\}$, consider the following problem with solution $\boldsymbol{\alpha}^*$ realizing the maximum:

$\max_{\boldsymbol{\alpha}}\ W(\boldsymbol{\alpha}) = \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j \langle \mathbf{x}_i \cdot \mathbf{x}_j \rangle$
$\text{s.t.} \quad \sum_{i=1}^{l} y_i \alpha_i = 0, \quad \alpha_i \geq 0,\ i = 1, \ldots, l.$

Then the weight vector $\mathbf{w}^* = \sum_{i=1}^{l}\alpha_i^* y_i \mathbf{x}_i$ solves the maximal margin hyperplane problem stated in Proposition 2.31.

Proof: The Lagrangian for the problem stated in Proposition 2.31 is

$L(\mathbf{w}, b, \boldsymbol{\alpha}) = \frac{1}{2}\langle \mathbf{w} \cdot \mathbf{w} \rangle - \sum_{i=1}^{l}\alpha_i\left[y_i\left(\langle \mathbf{w} \cdot \mathbf{x}_i \rangle + b\right) - 1\right]$

with Lagrangian multipliers $\alpha_i \geq 0$. In order to obtain the dual problem we differentiate the Lagrangian with respect to $\mathbf{w}$ and $b$:

$\frac{\partial L(\mathbf{w}, b, \boldsymbol{\alpha})}{\partial \mathbf{w}} = \mathbf{w} - \sum_{i=1}^{l} y_i \alpha_i \mathbf{x}_i = 0, \quad \text{therefore} \quad \mathbf{w} = \sum_{i=1}^{l} y_i \alpha_i \mathbf{x}_i,$


$\frac{\partial L(\mathbf{w}, b, \boldsymbol{\alpha})}{\partial b} = \sum_{i=1}^{l} y_i \alpha_i = 0, \quad \text{therefore} \quad \sum_{i=1}^{l} y_i \alpha_i = 0.$

Also note that the Karush-Kuhn-Tucker (KKT) conditions are

$\alpha_i^*\left[y_i\left(\langle \mathbf{w}^* \cdot \mathbf{x}_i \rangle + b^*\right) - 1\right] = 0, \quad i = 1, \ldots, l.$

Plugging the expressions above into the original problem statement gives

$L(\mathbf{w}, b, \boldsymbol{\alpha}) = \frac{1}{2}\sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j \langle \mathbf{x}_i \cdot \mathbf{x}_j \rangle - \sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j \langle \mathbf{x}_i \cdot \mathbf{x}_j \rangle + \sum_{i=1}^{l}\alpha_i = \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{l} y_i y_j \alpha_i \alpha_j \langle \mathbf{x}_i \cdot \mathbf{x}_j \rangle.$

Definition 2.33 (Support Vectors): The training points $\mathbf{x}_i \in S$ for which the constraint in Proposition 2.31 is satisfied with equality, that is, the $\mathbf{x}_i$ satisfying $y_i\left(\langle \mathbf{w} \cdot \mathbf{x}_i \rangle + b\right) = 1$, are called support vectors. Denoted by $sv$, the set of support vectors consists of the points that are closest to the maximum margin hyperplane.

The bias does not appear explicitly in the dual formulation; it can, however, be calculated easily from the primal constraints:

$b^* = -\frac{1}{2}\left(\max_{y_i = -1}\langle \mathbf{w}^* \cdot \mathbf{x}_i \rangle + \min_{y_i = +1}\langle \mathbf{w}^* \cdot \mathbf{x}_i \rangle\right).$

Also, the maximal margin hyperplane can be expressed in dual representation as

$f(\mathbf{x}, \boldsymbol{\alpha}^*, b^*) = \sum_{i=1}^{l} y_i \alpha_i^* \langle \mathbf{x}_i \cdot \mathbf{x} \rangle + b^*.$

The KKT conditions indicate that the $\alpha_i$ values are non-zero only for the input vectors that lie closest to the hyperplane, as all other points would


49 automatically have functional margin of greater than one. Therefore, there are at least two closest points, and the number of non-zero i values is equivalent to number of support vectors. Proposition 2.34: Given a linearly separable set 11,,...,,llSyy xx, the maximum margin hyperplane is the solution of the following problem ,min .. 1 for 1,...,b iistybil www wx has a solution of i isv where sv is the set of support vectors. Proof: The result follows from KKT conditions ** 1 **** ,1 **,, l iiiii i l ijijij ij jjiiij jsvisvyfbyyb yy yy xxx wwxx xx *** 1, 1,...,jjj jsvjsvybjl Note above that, according to Proposition 2.31, the maximal margin hyperplane satisfies 1/2 **1i isv ww In summary, in this chapter we revi ewed generalization error bounds. The error bounds we review were in pac-like form, and in a distribution free sens e. We started from a general hypothesis class a nd towards the end of the ch apter we focused on maximal


margin hyperplanes, as they are of greatest importance for support vector machines. We then presented the support vector machine formulation at the end of the chapter.
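To close the chapter with something concrete, the sketch below solves the hard-margin dual of Proposition 2.32 numerically on a toy separable data set and recovers $\mathbf{w}^*$, $b^*$, and the support vectors. It is our own illustration using a general-purpose solver (scipy's SLSQP), not one of the specialized algorithms normally used for SVMs, and the data are arbitrary.

import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (arbitrary).
X = np.array([[2.0, 2.0], [1.5, 3.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -2.0], [-3.0, 0.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
l = len(y)
G = (y[:, None] * X) @ (y[:, None] * X).T      # G[i, j] = y_i y_j <x_i, x_j>

def neg_dual(alpha):
    # Negative of W(alpha) = sum(alpha) - 1/2 alpha' G alpha (we minimize).
    return -(alpha.sum() - 0.5 * alpha @ G @ alpha)

res = minimize(neg_dual, x0=np.zeros(l), method="SLSQP",
               bounds=[(0.0, None)] * l,
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x
w = (alpha * y) @ X                            # w* = sum_i alpha_i y_i x_i
sv = np.where(alpha > 1e-6)[0]                 # support vectors: alpha_i > 0
b = np.mean(y[sv] - X[sv] @ w)                 # b* from active constraints y_i(<w,x_i>+b)=1
print("support vectors:", sv, "\nw* =", w, " b* =", b)
print("functional margins:", y * (X @ w + b))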


CHAPTER 3
DOMAIN SPECIFIC KNOWLEDGE WITH SUPPORT VECTOR MACHINES

Wolpert and Macready (1995) and Wolpert (1996a, 1996b, 2001) analyze different algorithms from a skeptical point of view similar to that of David Hume's "Problem of Induction". The resulting No Free Lunch (NFL) Theorems investigate the performance of various algorithms in the areas of optimization, search, and learning. In general, they indicate that the structure of the solution space is crucial in selecting a good methodology (Wolpert and Macready, 1995).

Wolpert (1996a, 2001) studies the "no free lunch" (NFL) theorems for learning and claims that a learning methodology that depends only on a low empirical misclassification rate, a small VC-dimension, and a large training set cannot guarantee a small generalization error. Wolpert (1996a) analyzes the off-training-set (OTS) generalization error of supervised learning mathematically. In his follow-up paper, Wolpert (1996b) claims that if no assumptions are made about the target, then there is no assurance about generalization performance. In summary, Wolpert (2001) claims two things. One, for any learning algorithm there are as many situations in which it is superior to another algorithm as vice versa. Two, the generalization performance depends on how much the classifier selected on the basis of prior belief, or the sample set, actually coincides with the posterior distribution, or target function. According to NFL, no learning algorithm performs better than random guessing over all possible learning situations. For a detailed survey of the NFL theorems one can refer to Whitley and Watson (2004).


52 From our point of view, NFL theorems indi cate that taking as much information as possible about the input space into account may enhance learning since some prior knowledge about the target may increase a l earning algorithm’s chances of constructing the target concept more accu rately. Therefore, regardless of the classification methodology, incorporating prio r knowledge in machine lear ning is of importance. Review on Relevant SVM Literature with Domain Specific Knowledge On the complementary side of the NFL theorems, in the artificial intelligence community’s “Knowledge is Power” phrase attr ibuted to Dr. Edward A. Feigenbaum, a pioneer in AI (Augier and Vendele, 2002). The thrust of this phrase from our point of view underlines the fact that in the machine learning context the phrase complements the basic idea of the NFL theorems. In the general machine learning framewor k there are many studies about “prior knowledge” incorporation. For example, Niyogi et al. (1998) study “prior information” from the sufficiency of size of the training se t. They show that in the absence of prior information, a larger training set might be needed to learn well. In support vector machines literature however, there have been only several attempts to incorporate domain knowledge. Supp ort vector machine research is relatively young. It is not a surprise that incorporat ing domain knowledge in SVM is even younger and remains a fertile research area. The first successful study on incorp orating domain knowledge was due to Schlkopf et al. (1996). They extract support vectors from the examples, and generate artificial examples by applyi ng invariance transformations to generate so-called virtual support vectors to increase accuracy.
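The virtual support vector idea of Schölkopf et al. (1996) is easy to outline: train once, keep only the support vectors, apply label-preserving invariance transformations to them, and retrain on the enlarged set. The sketch below uses scikit-learn's SVC purely for illustration; the shift transformations stand in for whatever invariance is actually known for the domain, and none of this code comes from the cited paper.

import numpy as np
from sklearn.svm import SVC

def virtual_sv_retrain(X, y, transforms, C=10.0):
    """Train, extract support vectors, generate 'virtual' SVs with the given
    label-preserving transforms, and retrain on the original data plus virtual SVs."""
    base = SVC(kernel="linear", C=C).fit(X, y)
    sv_X, sv_y = X[base.support_], y[base.support_]
    virtual_X = np.vstack([t(sv_X) for t in transforms])
    virtual_y = np.concatenate([sv_y for _ in transforms])
    return SVC(kernel="linear", C=C).fit(np.vstack([X, virtual_X]),
                                         np.concatenate([y, virtual_y]))

# Example invariance: small translations of the input (a stand-in for, e.g., image shifts).
shift_up = lambda A: A + np.array([0.0, 0.1])
shift_down = lambda A: A - np.array([0.0, 0.1])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2, 2], 0.5, (20, 2)), rng.normal([-2, -2], 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
model = virtual_sv_retrain(X, y, [shift_up, shift_down])
print("support vectors after retraining:", len(model.support_))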


According to our review of the SVM literature on domain incorporation, there are mainly two ways of incorporating domain knowledge in SVM:

- incorporating domain knowledge with kernels, and
- incorporating domain knowledge in the SVM formulation.

One might see the "fat-shattering" dimension as a "domain-dependent" measure, and therefore as another way of incorporating domain knowledge. However, it is data-set dependent rather than genuinely domain-dependent.

In the support vector machine literature, kernels represent the main method of capturing domain knowledge. In theory, kernels have great potential to tailor SVM learning to the problem at hand. However, many SVM applications using kernels merely compare different generic kernels in terms of their relative performances. In an earlier study, Schölkopf et al. (1998) incorporated domain knowledge to construct task-specific kernels for an image classification task. Several other studies focused on building task-specific kernels. For example, Joachims (1998, 2001) builds domain-specific kernels for text categorization. Another example of task-specific kernels in the SVM literature is the latent semantic kernels of Cristianini et al. (2002).

To the best of our knowledge, explicitly incorporating encoded domain knowledge in support vector machine formulations was first introduced by Fung et al. (2001). In their study, domain knowledge was explicitly introduced to the formulation of a linear support vector machine classifier in the form of polyhedral sets. Each of the polyhedral sets, called a knowledge set for a specific class, is incorporated into the SVM formulation. The SVM algorithm is slightly different since the constraint set now includes the knowledge sets. As the polyhedral sets do not have to include any training points,


54 in their application, the resulting hyperplan e is found to be significantly different than linear SVM separation. Furthermore the propos ed method has the potential of replacing missing training data with prior knowledge in terms of knowledge sets. However, from their study it can be observed that the shap e of the new formed, knowledge-set-dependent hyperplane is different than a no-knowledge-set -dependent hyperplane only when at least one of the knowledge sets contains a point that could be a support vector i.e. closer to the boundary than other training examples. In a follow-up study, Fung et al. (2003) studied incorporating prior knowledge in the formulation of nonlinear kernels. They illustrated the power of prior knowledge on a checkerboard example. With 16 points, each located at the center of 16 squares, they showed that including prior knowledge about two of the sixteen squares and using a Gaussian kernel, the correctness level drama tically increased to a level which can be achieved by as many as 39,000 training point s and no prior knowledge. One must note that this remarkable improvement is due to and subject to the av ailability of very particular knowledge of the domain. An Alternative Approach to Incorporate Domain Specific Knowledge in Support Vector Machines In the previous section we reviewed two pioneer studies utilizing prior knowledge by adding membership knowledge sets into SMVs formulation. However, in real life the knowledge sets might not be available or might be recondite due to the problem’s nature. For example, for complex input spaces knowledge sets may be difficult to observe. Moreover, even when they are available, they might not effect or improve the results as they may incorporate inessential knowledge (i.e., the knowledge describes examples that lie further from the separating hyperplane than training points). Intuitively, the efficiency


of such fine-drawn knowledge sets requires an abundance of prior information and their encodability. For those reasons, their applicability might be limited in many domains. At the very least, this is a very specific use of a priori knowledge. We offer a different approach requiring less knowledge, where rather general information about the input space is taken into account.

Characterizing the Input Space

In order to introduce our approach, in the following we give some definitions and then characterize the input space.

Definition 3.1 (Bounded Sets, Closed Sets, Compact Sets): A set $X$ is closed if it contains all of its limit points. A set $X$ in $\mathbb{R}^n$ is bounded if it is contained in some ball $x_1^2 + x_2^2 + \cdots + x_n^2 \leq R^2$ of finite radius $R$. A closed and bounded set is called a compact set.

Definition 3.2 (Hyperrectangle, Orthotope): A hyperrectangle, also known as an orthotope, is a generalization of the rectangle and the rectangular parallelepiped.

Definition 3.3 (Diameter, Generalized Diameter): In a closed and bounded set, the greatest distance between two points on the boundary is called the diameter. In Euclidean space, the diameter of a set $S \subseteq \mathbb{R}^n$ is simply the distance between the two points that are furthest apart.

Definition 3.4 (Space Diagonal): The space diagonal is the line segment connecting opposite polyhedral vertices (i.e., two polyhedral vertices that do not share a common face) in a parallelepiped or other similar solid.


56 Definition 3.5 (Convex Hull): The convex hull of a set of points in n is the intersection of all convex se ts containing those points. In other words, the convex hull of a set of points is the smallest convex set that includes those points. Definition 3.6 (Polyhedron, Convex Polytope): A polyhedron is the intersection of a finite nu mber of halfspaces. A convex polyhedron, or a polytope, can be defined as a bounded polyhedron or as the convex hull of a finite set of points in space. We denote a polytope by nP. Characterizing the input space X in terms of domain knowledge available is an important step in encoding prior information. As we noted earlier, Fung et al. (2001 and 2003) incorporated some domain knowledge in terms of polyhedral sets without asserting any specific shape or boundedness on the input space. However, earlier in the luckiness framework we observed that the distance of the furthest point from the origin is one of the determinants of the fat shattering dime nsion and the generalization bound. Because of this we observe that generalization bounds tend to be very loose if the input space is not assumed to be bounded. Table 3-1 shows generalization error bounds’ performances under different scenarios. In Table 3-1, the “Bound based on d ” column represents the bound obtained by using VC-dimension as the data independent VC-dimension and The “Bound based on F f at and X is contained in a ball of radius R” column represents the generalization bound obtained when the input space is assumed to be inside of the ball of radius R The “Luckiness Framework” column re presents the genera lization error bound when the radius R merely represents the radius of the ball containing training sets. We see that fat-shattering dimension dependent bounds provide insights only when the margin between classes is large, and bounds with luckine ss framework are too loose.


Table 3-1. Generalization error bound performance under different settings. Bound values greater than one indicate that the bound is too loose to provide any information.

d  | R   | γ  | l        | δ    | Bound based on d | Bound based on fat_F(γ), X in ball of radius R | Luckiness framework
20 | 100 | 10 | 1000     | 0.10 | 0.332 | 7.4539 | n/a
20 | 100 | 20 | 1000     | 0.10 | 0.332 | 1.7784 | n/a
20 | 100 | 30 | 1000     | 0.10 | 0.332 | 0.7423 | n/a
20 | 100 | 40 | 1000     | 0.10 | 0.332 | 0.3940 | n/a
20 | 100 | 50 | 1000     | 0.10 | 0.332 | 0.2388 | n/a
20 | 100 | 60 | 1000     | 0.10 | 0.332 | 0.1570 | n/a
20 | 100 | 60 | 1000000  | 0.10 | 0.001 | 0.0017 | 4.535
20 | 100 | 60 | 10000000 | 0.10 | 0     | 0.0003 | 0.876

The dramatic difference between the two rightmost columns is simply due to the fact that the values in the rightmost column are derived without assuming that the input space is bounded. Motivated by this example, we conclude that characterizing the input space is likely to be crucial in computing generalization bounds. In the following sections we start by assuming upper and lower bounds on the attributes of the input space. Later, we tighten these box constraints and assume that the input space is contained in a polytope, and finally in a general convex body. However, our main focus is on input spaces contained in a polytope. In order to follow the tradition of pac-like bound generation, we need to examine certain properties of polytopes. In the following section we start with box constraints, which are a special form of polytope.

Characterizing the Input Space with Box Constraints

In the knowledge discovery process, the basic classification task is to classify entities (points) according to the values of their attributes. For each attribute there is a specific domain from which a value is assigned. Often, this domain is not infinite but bounded.


In practice, domain ranges for many attributes (like age, income level, etc.) can naturally be bounded from above as well as from below. Figure 3-1 illustrates the motivation for using box constraints in support vector machines.

Figure 3-1. Using box constraints for characterizing the input space.

In Figure 3-1 A, the larger ball containing most of the points denotes the ball of radius R centered at the origin that contains the training set. The three outlier points in the upper right depict off-sample points that lie outside the ball. The gap between the luckiness framework and the fat-shattering bounds is due precisely to such points. If upper and lower bounds are not known, there could potentially be many points that are not contained in the ball; in fact, in a distribution-free sense, this is the reason the generalization error bounds generated in the luckiness framework are loose. A rectangle is formed by using the upper and lower bounds on the two attributes.


In Figure 3-1 B, using a linear transformation, the rectangle is re-centered at the origin and the entire input space is included inside the box. The benefit of knowing upper and lower bounds for each attribute is potentially twofold:

- The luckiness framework is not needed to generate generalization bounds, since the input space is already contained inside a hyperrectangle.
- By using a simple linear transformation, the entire input space can be centered at the origin, and therefore the diameter of the enclosing ball may be smaller. As the fat-shattering dimension depends on the margin as well as on the diameter of the ball, the fat-shattering dimension and the generalization bound are potentially smaller.

Proposition 3.7: Let $X \subseteq \mathbb{R}^n$ be the input space contained in a hyperrectangle $HR'$, and let $x_i, x_j \in HR'$ be two vectors furthest apart in $HR'$. The fat-shattering dimension at scale $\gamma$ is always smaller than or equal to the fat-shattering dimension at scale $\gamma$ calculated by considering the smallest ball of radius $R$ centered at the origin that contains the input space.

Proof: If the input space is contained in a hyperrectangle, then the space diagonal satisfies $SD = \|x_i - x_j\| \le 2R$, and the result follows. Note that the fat-shattering dimension can be improved further by using the transformation $HR = HR' - \frac{x_i + x_j}{2}$.

Figure 3-1, Part B graphically illustrates the proposition. The space diagonal of a hyperrectangle can be computed easily as follows. Let $(ub_1, \ldots, ub_n)$ and $(lb_1, \ldots, lb_n)$ denote the upper and lower bounds for attributes 1 through $n$, respectively. Then

$SD^2 = \sum_{i=1}^{n} (ub_i - lb_i)^2.$

In this section we demonstrated that, knowing only upper and lower bounds, one can potentially improve the generalization error bound through the fat-shattering dimension.
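As a small illustration of Proposition 3.7, the following sketch computes the space diagonal of a hyperrectangle from attribute bounds and applies the centering transformation. It is written in Python (not the Matlab environment used later in this study), and the attribute bounds in the example are purely hypothetical.

```python
import numpy as np

def space_diagonal(lb, ub):
    """Space diagonal of the hyperrectangle [lb, ub]:
    SD = sqrt(sum_i (ub_i - lb_i)^2)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return float(np.sqrt(np.sum((ub - lb) ** 2)))

def center_at_origin(X, lb, ub):
    """Shift points so that the hyperrectangle [lb, ub] is centered at the
    origin; every shifted point then lies in a ball of radius SD/2."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return np.asarray(X, float) - (lb + ub) / 2.0

# Hypothetical bounds: age in [18, 90], income (in thousands) in [0, 500].
lb, ub = [18.0, 0.0], [90.0, 500.0]
print(space_diagonal(lb, ub))        # space diagonal of the box
print(space_diagonal(lb, ub) / 2.0)  # radius bound after centering
```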


In the following section we incorporate more domain knowledge in the form of linear constraints and analyze the case in which the input space is contained in a polytope.

Characterizing the Input Space with a Polytope

In many pattern-recognition problems there are interdependencies among the attributes. Some are strong correlations that can be observed statistically, while others can even be written in mathematical form. In the machine learning literature, highly correlated attributes may mislead the learning process or contribute little to it. For example, kth-nearest-neighbor and decision trees are sensitive to correlated attributes (Madden et al., 2002). Neural networks are less sensitive to correlated attributes, although they also benefit from feature selection that eliminates correlated attributes (Madden et al., 2002).

We investigate the case where the relationships among the attributes can be expressed as linear constraints, improving on the box constraints of the previous section. We assume that the input space is contained inside a polytope.

In Section 3.2.1 we computed the space diagonal for hyperrectangles. In this section we focus on upper bounding the fat-shattering dimension for the polytope case. In Chapter 2 we included results on the effective VC-dimension for linear classifiers. The effective VC-dimension, called the fat-shattering dimension, is bounded by

$fat_F(\gamma) \le \frac{25 R^2}{4 \gamma^2},$

where $R$ is the radius of the n-dimensional ball, centered at the origin, that contains the data. In our case, the radius of the n-dimensional ball corresponds to half of the metric diameter of the polytope.
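To make the dependence of this bound on $R$ and $\gamma$ concrete, here is a minimal sketch of the fat-shattering bound quoted above. The numerical inputs are illustrative only; they mirror one of the Table 3-1 settings (R = 100, margin 40) rather than any particular dataset.

```python
def fat_shattering_upper_bound(R, gamma):
    """Upper bound on the fat-shattering dimension of thresholded linear
    functions at scale gamma when the data lie in a ball of radius R:
    fat_F(gamma) <= 25 * R^2 / (4 * gamma^2)."""
    return 25.0 * R ** 2 / (4.0 * gamma ** 2)

print(fat_shattering_upper_bound(100.0, 40.0))  # about 39.1
```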


Lemma 3.8: Let $HI$ denote a pre-Hilbert space of dimension $n$, and let $F$ denote the class of functions on $P$ defined by $F = \{\, x \mapsto \langle w, x \rangle + b : \|w\| \le 1,\ x \in P \,\}$ with $b \in \mathbb{R}$. Let $x, y \in P$ be two points furthest apart in $P$. Then, for $0 < \gamma \le \frac{\|x-y\|}{2}$, the fat-shattering dimension of $F$ satisfies

$fat_F(\gamma) \le \min\!\left( \frac{25\,\|x-y\|^2}{16\,\gamma^2},\ n \right) + 1.$

Proof: The metric radius of the polytope is $\frac{\|x-y\|}{2}$. Using Lemma 5 of Hush and Scovel (2003), the lemma follows.

In order to compute the fat-shattering dimension, the metric diameter of the polytope must be calculated or upper bounded by some expression. The problem of finding the two points furthest apart can be stated as

$\max \ \|x - y\| \quad \text{s.t.} \quad Ax \le a, \ Ay \le a.$

Unfortunately, calculating the metric diameter of a polytope is a convex maximization problem and is very difficult to solve. Therefore we narrow our focus to finding an upper bound $f(A, a)$ on the metric diameter,

$f(A, a) \ \ge\ \max \{\, \|x - y\| : Ax \le a,\ Ay \le a \,\}.$

To assert such an upper bound $f(A, a)$, different methodologies may be used. For example, one can find a vertex and then attempt to form a simplex containing the polytope (Zhou and Suri, 2000). However, for practical purposes that will become evident later, we prefer to follow a more general methodology and work with ellipsoids. The idea is to find an ellipsoid that tightly fits the polytope and to use the diameter of the ellipsoid to upper bound the diameter of the polytope.


The diameter of an ellipsoid can be informally defined as the length of its longest axis. The facts that the diameter of an ellipsoid is simple to calculate and that it upper bounds the metric diameter of any polytope the ellipsoid contains make this approach attractive. A minimum-volume ellipsoid that contains the polytope is called a Löwner-John ellipsoid (John, 1948). There is no known way of explicitly computing the Löwner-John ellipsoid of a polytope. However, computational considerations provide compelling reasons to work with ellipsoids, and we therefore dedicate the following chapter to computing such ellipsoids.


CHAPTER 4
ELLIPSOID METHOD

As discussed in Chapter 3, finding an upper bound on the metric diameter of a polytope is a difficult problem, but not an uncommon one. This type of problem also arises in machine vision, robotics, and computer game design. In machine vision and robotics, the distance between a robot and its surrounding environment must be known to avoid collisions. However, object shapes can be complex, and therefore computing the distance between a robot and objects in the environment can be very difficult. To facilitate computation, a minimum-volume ellipsoid is computed for each complex shape; instead of directly gauging distances between objects, distances between ellipsoids are computed as an approximation (Rimon and Boyd, 1992, 1997). The same approach is also used in game design to determine impacts and collisions among game objects.

Ellipsoid Method Introduction

In our context, we would like to upper bound the metric diameter of a given polytope. By definition, any distance between two points of an ellipsoid is bounded by its diameter, i.e., the length of the major axis of the ellipsoid. Hence, one possible bound is the length of the longest axis of a minimal ellipsoid containing the polytope.

The Löwner-John Ellipsoid

Definition 4.1 (The Löwner and John Ellipsoids): For any bounded convex body K with non-empty interior, there exists a unique minimum-volume ellipsoid enclosing the body; this ellipsoid is called the Löwner ellipsoid of K. Also, every compact convex body contains a unique ellipsoid of maximum volume, known as the John ellipsoid of K (Blekherman, 2002).


In general, the Löwner-John ellipsoid (sometimes spelled Loewner-John, or abbreviated as the L-J ellipsoid) of a convex body is a special ellipsoid with the property that the convex body contains the ellipsoid obtained from the L-J ellipsoid by shrinking it n times, where n is the dimension of the convex body. This is illustrated in Figure 4-1.

Figure 4-1. The L-J ellipsoid for a polytope. A) The L-J ellipse for a 2-dimensional polytope, and the ellipse obtained by shrinking the L-J ellipse by a factor of two. B) Illustration of the L-J ellipsoid for a polytope. C) For this 3-dimensional polytope, the ellipsoid formed by shrinking the L-J ellipsoid by a factor of three lies completely inside the polytope.


If the polytope is given in the form of a set of vertices, there are known polynomial-time algorithms (Rimon and Boyd, 1992, 1997; Kumar and Yildirim, 2004) that tightly approximate the Löwner-John ellipsoid containing all the vertices. For example, Kumar and Yildirim propose a (1+ε)-approximation algorithm with complexity $O(nd^3/\epsilon)$, where n is the number of points and d is the dimension. Rimon and Boyd (1997) formulate the problem as a convex optimization problem: they minimize the volume of a containing (n+1)-dimensional ellipsoid centered at the origin, subject to containment of the vertices of the n-dimensional polytope, and show that the n-dimensional minimal ellipsoid can be obtained directly from the solution. The complexity of their approach is linear in the number of vertices.

These algorithms are useful when the vertices are known. This is the typical case in robotics, where polytopes are defined as the convex hull of extreme points that can easily be calculated. Also, in robotics the dimensionality is limited (all objects are three-dimensional), so the computation of vertices is less complex than in the general m-dimensional case. However, when a polytope is defined as the intersection of a finite number of halfspaces (i.e., $P = \{x \in \mathbb{R}^n : Ax \le a\}$, where A is an m x n matrix) rather than by a set of vertices, the above-mentioned algorithms lose their attraction, for two main reasons. First, there is no known algorithm that can compute the vertices of a polytope P in time polynomial in m, n, and the number of vertices (Avis and Devroye, 2001). Second, once the vertices are known, the computation of a minimum ellipsoid is no longer needed, since the metric diameter can be calculated directly.

There are known bounds in the literature on the number of vertices of a polytope, but finding the actual number of vertices is a complex task. For example, McMullen (1970) gives the following upper bound on the number of vertices of a polyhedron with m constraints in n dimensions:

$UB(m, n) = \binom{m - \lfloor (n+1)/2 \rfloor}{m - n} + \binom{m - \lfloor (n+2)/2 \rfloor}{m - n},$

where m and n are the dimensions of the matrix A in $Ax \le a$, which is assumed to have full column rank and no redundant constraints. With 40 constraints in 20 dimensions, for example, the bound exceeds 40 million.
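The combinatorial form given above (reconstructed here from the garbled source) can be checked numerically; the following sketch reproduces the "exceeds 40 million" figure and the upper-bound column of Table 5-1.

```python
from math import comb

def mcmullen_vertex_bound(m, n):
    """McMullen (1970) upper bound on the number of vertices of a polytope
    in R^n defined by m inequality constraints:
    UB(m, n) = C(m - floor((n+1)/2), m - n) + C(m - floor((n+2)/2), m - n)."""
    return comb(m - (n + 1) // 2, m - n) + comb(m - (n + 2) // 2, m - n)

print(mcmullen_vertex_bound(40, 20))  # 40,060,020: "exceeds 40 million"
print(mcmullen_vertex_bound(20, 6))   # 800, matching Table 5-1
```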


Lavallee and Duong (1992) give an algorithm to find all the vertices of a polytope.

The Ellipsoid Method

The algorithm to approximate the L-J ellipsoid can be built on top of the basic ellipsoid method, developed in the 1970s, initially by Shor (1970) and Iudin and Nemirovskii (1976), to solve convex, not necessarily differentiable, optimization problems (Bland et al., 1981). The ellipsoid method was initially used to find a feasible solution to a system of linear inequalities; later, Khachiyan (1979) showed how the ellipsoid method can be implemented in polynomial time to solve linear programming problems.

The method attempts to enclose the feasible region of a linear program in an n-dimensional ellipsoid, then slice it and form another ellipsoid that still contains the feasible region P. The objective of the ellipsoid method is to determine whether the feasible region is nonempty. The ellipsoid method was built on top of Levin's method (1965) for the minimization of convex functions. Levin (1965) used the center of gravity to approximately minimize a convex function over a polytope P: in his method, at each iteration the center of gravity of the polytope is calculated and used to test a convergence criterion.


If the centroid satisfies the convergence criterion, the method stops with an approximate solution. Otherwise, a hyperplane whose normal is the subgradient of the convex function at the centroid cuts the polytope in two, and the method continues to operate on the reduced polytope containing the optimal solution. The main disadvantage of the method is that calculating the centroid at each iteration is very time consuming (Liao and Todd, 1996).

Khachiyan (1979) showed that the ellipsoid method can be modified to check the feasibility of a system of linear inequalities in polynomial time; the resulting method was the first polynomial-time algorithm for solving linear optimization problems. The ellipsoid method starts with a large ellipsoid containing the polytope P and then uses the center of the ellipsoid, rather than the centroid of the polytope as in Levin's method, to reduce the search space. The ellipsoid method has been widely studied: over one thousand journal articles have been published, and many variations of the method have been developed. More comprehensive reviews of ellipsoid methods can be found in Bland et al. (1981) and Grötschel et al. (1988).

Before giving the details of the ellipsoid method, we provide some definitions that will prove useful in studying it.

Definition 4.2 (Eigenvalues, Eigenvectors): For an n x n matrix D, if there is a non-zero vector $x \in \mathbb{R}^n$ such that $Dx = \lambda x$, then $\lambda$ is called an eigenvalue of D and x is an eigenvector corresponding to $\lambda$. An n x n symmetric matrix has n eigenvalues, and they can be found by solving $\det(D - \lambda I) = 0$.


Moreover, an n x n symmetric matrix has n orthogonal eigenvectors, and the number of non-zero eigenvalues of an n x n matrix is exactly equal to the rank of the matrix.

Definition 4.3 (Positive Definite): An n x n matrix D is called positive definite if $x'Dx > 0$ for every non-zero vector $x \in \mathbb{R}^n$. A symmetric matrix is positive definite if and only if all of its eigenvalues are positive.

Definition 4.4 (Ellipsoid): An ellipsoid centered at $d \in \mathbb{R}^n$ is a set of vectors of the form $E = E(D, d) = \{x \in \mathbb{R}^n : (x-d)' D^{-1} (x-d) \le 1\}$, where D is an n x n positive definite symmetric matrix. D is called the characteristic matrix of the ellipsoid.

It turns out that eigenvalues and eigenvectors have a very useful geometric interpretation for ellipsoids: for the ith axis, if $\lambda_i$ is the eigenvalue of D corresponding to that axis, then the radius of the ellipsoid along the axis is $\sqrt{\lambda_i}$.

Definition 4.5 (Spectral Radius): For any n x n matrix Z, the spectral radius of Z, denoted $\rho(Z)$, is $\rho(Z) = \max_{1 \le i \le n} |\lambda_i|$, where $|\lambda_i|$ is the modulus of the eigenvalue $\lambda_i$.

Proposition 4.6: Let $P \subset \mathbb{R}^n$ be a polytope and let E be the L-J ellipsoid that contains P. The metric diameter of the polytope is bounded by $2 \max_{1 \le i \le n} \sqrt{\lambda_i}$, where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of the characteristic matrix of E.

Proof: The two points of the ellipsoid E that are furthest apart from each other lie on the major axis of E. The distance between any two points of P is less than or equal to the length of the major axis of E, since P is contained in E. The radius of the ellipsoid along the major axis is $\max_{1 \le i \le n} \sqrt{\lambda_i}$, and the result follows. Note that finding $\max_i \lambda_i$ amounts to finding the spectral radius of D or, equivalently, the reciprocal of the smallest eigenvalue of $D^{-1}$.
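Proposition 4.6 translates directly into a few lines of code: given the characteristic matrix D of an enclosing ellipsoid, the axis lengths and the diameter bound follow from the eigenvalues of D. The example matrix below is arbitrary.

```python
import numpy as np

def ellipsoid_semi_axes(D):
    """Semi-axis lengths of E(D, d) = {x : (x-d)' D^-1 (x-d) <= 1}:
    the square roots of the eigenvalues of the characteristic matrix D."""
    return np.sqrt(np.linalg.eigvalsh(D))   # D symmetric positive definite

def ellipsoid_diameter(D):
    """Length of the longest axis, 2*sqrt(lambda_max(D)); by Proposition 4.6
    this upper bounds the metric diameter of any polytope inside E(D, d)."""
    return 2.0 * float(np.sqrt(np.max(np.linalg.eigvalsh(D))))

D = np.array([[4.0, 1.0],
              [1.0, 2.0]])
print(ellipsoid_semi_axes(D), ellipsoid_diameter(D))
```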


As we noted earlier, in order to upper bound the metric diameter of the polytope P, it is useful to find the diameter of the L-J ellipsoid of P. In the remainder of this section we review the ellipsoid method, a method that can be used to approximate the L-J ellipsoid. An ellipsoid centered at d with positive definite matrix D is defined as

$E = E(D, d) = \{x \in \mathbb{R}^n : (x-d)' D^{-1} (x-d) \le 1\}.$

Note that, since the characteristic matrix D is positive definite, so too is its inverse.

Definition 4.7 (Affine Transformation): An affine transformation is any transformation that preserves collinearity (i.e., points lying on a line still lie on a line after the transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after the transformation). For any nonsingular (i.e., invertible) n x n matrix E and $e \in \mathbb{R}^n$, the map $AT : \mathbb{R}^n \to \mathbb{R}^n$, $AT(x) = Ex + e$, is an affine transformation.

With this notation, the unit ball around the origin can be denoted by E(I, 0). By definition, ellipsoids are images of unit balls under affine transformations, and the transformation is bijective (Feige, 2004). For example, consider the affine transformation $x = D^{1/2} y + e$, whose inverse is $y = D^{-1/2}(x - e)$. Applying the transformation to the unit ball $E(I, 0) = \{y \in \mathbb{R}^n : y'y \le 1\}$, we see that its image is $\{x \in \mathbb{R}^n : (D^{-1/2}(x-e))'(D^{-1/2}(x-e)) \le 1\}$, which is exactly $E(D, e) = \{x \in \mathbb{R}^n : (x-e)'D^{-1}(x-e) \le 1\}$. Therefore E(D, e) = AT(E(I, 0)): the ellipsoid is the affine image of the unit ball.


Definition 4.8 (Norm): A function $N : \mathbb{R}^n \to \mathbb{R}$ is called a norm if the following conditions are satisfied:

- $N(x) \ge 0$ for all $x \in \mathbb{R}^n$, and $N(x) = 0$ if and only if $x = 0$;
- $N(cx) = |c| N(x)$ for all $x \in \mathbb{R}^n$ and $c \in \mathbb{R}$;
- $N(x + y) \le N(x) + N(y)$ for all $x, y \in \mathbb{R}^n$.

For every norm N, a distance measure between any $x, y \in \mathbb{R}^n$ is defined as $d_N(x, y) = N(x - y)$.

Definition 4.9 (Ellipsoidal Norm): For a positive definite matrix D, the function $\|x\|_D = \sqrt{x' D^{-1} x}$ is called the ellipsoidal norm.

The diameter of an ellipsoid is two times the square root of the largest eigenvalue of its characteristic matrix D; similarly, the smallest axis of the ellipsoid is two times the square root of the smallest eigenvalue of D. The volume of an ellipsoid is $Vol\,E(D, d) = Vol\,E(I, 0) \cdot \sqrt{\det D}$, where Vol E(I, 0) is the volume of the unit ball in the same dimension. Hence the volume depends only on the determinant of the characteristic matrix and the number of dimensions. The volume of the unit ball in $\mathbb{R}^n$ is given in the literature as

$Vol\,E(I, 0) = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)} \ \sim\ \frac{1}{\sqrt{n\pi}} \left( \frac{2\pi e}{n} \right)^{n/2}.$
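As a quick check of the volume formula above, the following sketch computes Vol E(D, d) from det(D). The diagonal example matrix is chosen so the result can be verified by hand: an ellipse with semi-axes 2 and 3 has area 6π.

```python
import math
import numpy as np

def unit_ball_volume(n):
    """Volume of the unit ball in R^n: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)

def ellipsoid_volume(D):
    """Vol E(D, d) = sqrt(det D) * Vol E(I, 0); the center d plays no role."""
    n = D.shape[0]
    return math.sqrt(np.linalg.det(D)) * unit_ball_volume(n)

D = np.diag([4.0, 9.0])        # semi-axes 2 and 3
print(ellipsoid_volume(D))     # about 18.85 = 6*pi
```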


From the above, for an ellipsoid E(D, d) and an affine transformation $AT(x) = Ex + e$, we can write $Vol\,AT(E(D, d)) = |\det E| \cdot \sqrt{\det D} \cdot Vol\,E(I, 0)$.

Optimizing Linear Functions over Ellipsoids

Covering the optimization of linear functions over ellipsoids here will prove useful later when discussing the ellipsoid method. Maximizing a linear function $c'x$ over the unit ball E(I, 0) is achieved by the vector $c/\|c\|$. For a unit ball E(I, d) that is not centered at the origin, the maximizer is $d + c/\|c\|$, and for a ball of radius k centered at d it is $d + k\,c/\|c\|$. Recall that for any ellipsoid E(D, d) we have $E(D, d) = AT(E(I, 0))$ with $AT(y) = D^{1/2} y + d$. Therefore the problem of optimizing a linear function over an ellipsoid, $\max\{c'x : x \in E(D, d)\}$, can be rewritten as

$\max\{c'x : x \in E(D, d)\} = \max\{c'(D^{1/2} y + d) : y \in E(I, 0)\} = c'd + \max\{(D^{1/2} c)' y : y \in E(I, 0)\}.$


The last maximum is attained at $y = D^{1/2}c / \|D^{1/2}c\|$ and equals $\|D^{1/2}c\| = \sqrt{c'Dc}$. Hence

$\max\{c'x : x \in E(D, d)\} = c'd + \sqrt{c'Dc}.$

Using the ellipsoidal norm $\|x\|_D = \sqrt{x'D^{-1}x}$, this can be written as $\max\{c'x : x \in E(D, d)\} = \|c\|_{D^{-1}} + c'd$. The maximizer is

$z_{\max} = \arg\max\{c'x : x \in E(D, d)\} = d + b, \quad \text{where } b = \frac{Dc}{\sqrt{c'Dc}}.$

Notice that if the objective is to minimize, then $\min\{c'x : x \in E(D, d)\} = c'd - \sqrt{c'Dc}$ and $z_{\min} = \arg\min\{c'x : x \in E(D, d)\} = d - b$.
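The closed-form maximizer and minimizer derived above translate directly into code. This is a minimal sketch with an arbitrary ellipsoid and objective vector, included only to make the formulas concrete.

```python
import numpy as np

def linear_opt_over_ellipsoid(c, D, d):
    """Maximize and minimize c'x over E(D, d):
    z_max = d + Dc/sqrt(c'Dc) with value c'd + sqrt(c'Dc),
    z_min = d - Dc/sqrt(c'Dc) with value c'd - sqrt(c'Dc)."""
    b = D @ c / np.sqrt(c @ D @ c)
    z_max, z_min = d + b, d - b
    return z_max, float(c @ z_max), z_min, float(c @ z_min)

c = np.array([1.0, 2.0])
D = np.array([[2.0, 0.0], [0.0, 1.0]])
d = np.array([3.0, -1.0])
print(linear_opt_over_ellipsoid(c, D, d))
```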


The geometric interpretation is shown in Figure 4-2. In the figure, $z_{\min}$ and $z_{\max}$ are the minimizer and maximizer of $c'x$ over the ellipsoid E(D, d). Note that the center of the ellipsoid lies between $z_{\min}$ and $z_{\max}$. Also observe that the shaded ellipsoid is the minimal ellipsoid containing one half of the ellipsoid, divided with respect to $c'x$ through the center. Since half of the ellipsoid is a convex body, say K, the shaded ellipsoid is the L-J ellipsoid of K.

Figure 4-2. When maximizing a linear function $c'x$ over an ellipsoid E(D, d), the center of the ellipsoid lies between $z_{\min}$ and $z_{\max}$.

So K contains the ellipsoid obtained from its L-J ellipsoid by shrinking it n times, where n is the dimension of K. Hence, if E(D, d) is the L-J ellipsoid in n dimensions, the ellipsoid $E(D/n^2, d)$, formed by shrinking the axes by a factor of n, is contained in K. The $1/n^2$ factor results from the fact that the eigenvalues become $n^2$ times smaller.

The ellipsoid method works as follows. We start with an ellipsoid (it could be a ball) that contains P. The method iteratively generates a sequence of ellipsoids $E_1, \ldots, E_k$ with centers $d_1, \ldots, d_k$. At each iteration, if the center of the current ellipsoid does not lie inside the polytope, then the center point violates one or more of the constraints defining the polytope. For example, at the ith iteration, let $E_i = E(D_i, d_i)$ be the ellipsoid centered at $d_i$.


If the polytope $P = \{x \in \mathbb{R}^n : Ax \le a\}$, where A can be written as $A = (A_1', \ldots, A_m')'$, is not contained in $E_i$, then $d_i$ violates at least one of the m constraints, say the kth, so $A_k d_i > a_k$ for some $1 \le k \le m$. Thus we know that P is contained in the halfspace $\{x : A_k x \le A_k d_i\}$. In the next iteration, a new smaller-volume ellipsoid is generated that contains the intersection of this halfspace with the current ellipsoid.

At this point we should point out that there is a strong relationship between the ellipsoid method and the L-J ellipsoid. In each iteration the next ellipsoid is generated according to a hyperplane passing through the center of the current ellipsoid. Suppose that, at some iteration i, the violating constraint is $A_k x \le a_k$, written as $c'x \le a_k$ with $c'd_i > a_k$. Then the center of the current ellipsoid does not lie inside the polytope (see Figure 4-3). Therefore, in the next iteration we consider only the half ellipsoid that contains the polytope, and we use the L-J ellipsoid of that half ellipsoid, which is a convex body. It also becomes clear why we covered maximizing linear functions over ellipsoids: the next ellipsoid generated passes between $z_{\min}$ and $z_{\max}$, depending on the location of the polytope.

Theorem 4.10: Let E = E(D, d) be an ellipsoid in $\mathbb{R}^n$ and let $c = A_k'$ be a non-zero vector in $\mathbb{R}^n$. Consider the halfspace $H_i = \{x \in \mathbb{R}^n : c'x \le c'd\}$ and define $b = Dc / \sqrt{c'Dc}$.


Let

$d_{i+1} = d - \frac{1}{n+1} b, \qquad D_{i+1} = \frac{n^2}{n^2 - 1} \left( D - \frac{2}{n+1} b b' \right).$

The matrix $D_{i+1}$ is symmetric and positive definite, and thus $E_{i+1} = E(D_{i+1}, d_{i+1})$ is an ellipsoid. Moreover, $E \cap H_i \subseteq E_{i+1}$ and $Vol(E_{i+1}) < e^{-1/(2(n+1))}\,Vol(E)$ (Bertsimas and Tsitsiklis, 1997).

Proof: Refer to Bertsimas and Tsitsiklis (1997), p. 367.

Note that the new center $d_{i+1} = d - \frac{1}{n+1} b$ lies on the line through $z_{\max}$ and $z_{\min}$. In fact, $z_{\min} = d - b$, and therefore $d_{i+1}$ is simply $d + \frac{z_{\min} - d}{n+1}$. Furthermore, the new ellipsoid is the L-J ellipsoid of the half ellipsoid retained from the previous iteration.
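A minimal sketch of one central-cut step as given by Theorem 4.10. It assumes that the violated constraint (with row vector c) has already been identified and that n is at least 2.

```python
import numpy as np

def central_cut(D, d, c):
    """One central-cut update: given E(D, d) and a constraint c'x <= a that is
    violated at the center (c'd > a), return the smaller ellipsoid containing
    E(D, d) intersected with the halfspace {x : c'x <= c'd}."""
    n = len(d)
    b = D @ c / np.sqrt(c @ D @ c)
    d_new = d - b / (n + 1.0)
    D_new = (n ** 2 / (n ** 2 - 1.0)) * (D - (2.0 / (n + 1.0)) * np.outer(b, b))
    return D_new, d_new
```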


Different Ellipsoid Methods

In the literature, the general methodology covered above is referred to as the "central cut" ellipsoid method, because the cut goes right through the center of the ellipsoid and exactly one half of the ellipsoid is discarded at each iteration. Depending on how large a portion of the ellipsoid is eliminated at each iteration, different cut mechanisms have been developed. The central cut is not the only way of discarding a portion of the ellipsoid: as long as the cut reduces the volume and the next ellipsoid contains the polytope, the cut is permissible. If the generated ellipsoid corresponds to a bigger, somewhat greedier cut, the methodology is referred to as a "deep cut" ellipsoid method; if the cut shrinks the ellipsoid less than the central cut does, it is referred to as a "shallow cut" ellipsoid method. The choice among these, as we discuss later, is controlled by a parameter.

As mentioned, the ellipsoid method runs in polynomial time, and it is both intuitive and correct that deep-cut versions also run in polynomial time, since at each cut the volume of the generated ellipsoid decreases. The proof of the complexity result is rather involved and we do not include it here; Bland et al. (1981) and Grötschel et al. (1988) give detailed complexity analyses of the method. The algorithm is illustrated in Figure 4-3: starting with a large ball containing the polytope, each generated ellipsoid fits the convex body more tightly than the previous one.

Figure 4-3. In every iteration the polytope is contained in a smaller ellipsoid.

In the literature there are other versions of the ellipsoid method, such as the "parallel cut" method (Bland et al., 1981): when there is a pair of parallel constraints, two simultaneous cuts can be made. The cuts are centrally symmetric and parallel, discarding areas on both sides of the ellipsoid to generate a new and narrower one.


Different Ellipsoid Methods' Formulation

Note that in the ellipsoid method the hyperplane along which we cut the ellipsoid must pass through the ellipsoid. That is, the hyperplane $H = \{x \in \mathbb{R}^n : c'x = \beta\}$ must satisfy $c'z_{\min} \le \beta \le c'z_{\max}$, where $z_{\min} = d - b$ and $z_{\max} = d + b$. After some algebra we find that $|c'd - \beta| \le \sqrt{c'Dc}$ is necessary for a cut. If

$\alpha = \frac{c'd - \beta}{\sqrt{c'Dc}}$

is a parameter quantifying the location of the cut, it is clear that $-1 \le \alpha \le 1$. We can think of $\alpha$ as a bias factor: for $\alpha = 0$ we make the cut right through the middle, and the newly formed ellipsoid contains the half ellipsoid $E \cap \{x \in \mathbb{R}^n : c'x \le c'd\}$. A more comprehensive version of Theorem 4.10 that takes the cut location into account is given below.

Theorem 4.11: Let E = E(D, d) be an ellipsoid in $\mathbb{R}^n$, and let $c = A_k'$ be a non-zero n-vector. Consider the halfspace $H = \{x \in \mathbb{R}^n : c'x \le \beta\}$, and let $E_{i+1}$ denote the minimum-volume ellipsoid containing $E(D, d) \cap H$ to be used in the next iteration. Then $E_{i+1} = E(D, d)$ if $-1 \le \alpha < -1/n$, and $E_{i+1} = E(D_{i+1}, d_{i+1})$ if $-1/n \le \alpha < 1$, where

$d_{i+1} = d - \frac{1 + n\alpha}{n+1} b, \qquad D_{i+1} = \frac{n^2 (1 - \alpha^2)}{n^2 - 1} \left( D - \frac{2(1 + n\alpha)}{(n+1)(1+\alpha)} b b' \right),$


where $b = Dc/\sqrt{c'Dc}$ and $\alpha = (c'd - \beta)/\sqrt{c'Dc}$. The matrix $D_{i+1}$ is symmetric and positive definite, and thus $E_{i+1} = E(D_{i+1}, d_{i+1})$ is an ellipsoid. Moreover, $E \cap H \subseteq E_{i+1}$, and for $\beta \le c'd$ (that is, $\alpha \ge 0$), $Vol(E_{i+1}) < e^{-1/(2(n+1))}\,Vol(E)$.

Proof: Refer to Grötschel et al. (1988, p. 71) and Bland et al. (1981, p. 1053).

Notice that Theorem 4.10 corresponds to the case $\beta = c'd$, where the cut goes right through the center of the ellipsoid. In the ellipsoid method literature, the location of the cut determines the type of the cut as well as the corresponding modification of the ellipsoid method. The type of the cut is determined as follows (a sketch of the update appears after this list):

- For $\beta = c'd$ the cut is through the center and is called a "central cut" ($\alpha = 0$).
- For $0 < \alpha < 1$ we make a deeper cut, called a "deep cut".
- For $-1/n < \alpha < 0$ we retain more than half of the ellipsoid, and the cut is called a "shallow cut".
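All three cut types are instances of the update in Theorem 4.11. The following sketch implements the α-parameterized update, following the standard deep/shallow-cut formulas as in Bland et al. (1981); it is a reconstruction for illustration, not the code used in this study.

```python
import numpy as np

def alpha_cut(D, d, c, beta):
    """Cut E(D, d) with the halfspace {x : c'x <= beta} (Theorem 4.11).
    alpha = (c'd - beta)/sqrt(c'Dc): 0 central, (0,1) deep, (-1/n,0) shallow.
    For alpha < -1/n the current ellipsoid is already minimal and is kept."""
    n = len(d)
    s = np.sqrt(c @ D @ c)
    alpha = (float(c @ d) - beta) / s
    if alpha >= 1.0:
        raise ValueError("halfspace does not intersect the ellipsoid")
    if alpha < -1.0 / n:
        return D, d
    b = D @ c / s
    tau = (1.0 + n * alpha) / (n + 1.0)
    d_new = d - tau * b
    sigma = 2.0 * tau / (1.0 + alpha)
    delta = (n ** 2 / (n ** 2 - 1.0)) * (1.0 - alpha ** 2)
    D_new = delta * (D - sigma * np.outer(b, b))
    return D_new, d_new
```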


Note that the ellipsoid method assumes that all the coefficients in the system of inequalities are integer values. This matters only from a theoretical point of view, as it helps to prove that the algorithm runs in polynomial time. The method is used to solve linear optimization problems by solving the primal and the dual together: since a feasible point of the combined system is optimal, finding one automatically solves the problem.

Shallow Cut Ellipsoid Method

Although shallow cuts do not reduce the volume as fast as central or deep cuts, they still yield polynomial running times. The shallow-cut method is due to Yudin and Nemirovskii (1976) and is a more sophisticated version of the central cut method. The central cut method terminates once the center of the generated ellipsoid lies inside the convex body. The idea behind shallow cuts is that the method can proceed even after such a point is found: it does not necessarily stop when the center of the ellipsoid lies in the polytope. Instead, it stops when the ellipsoid $E_M = E\!\left(\frac{D}{(n+1)^2}, d\right)$ lies completely inside the polytope (Grötschel et al., 1988). Because it can proceed further, the shallow-cut method can be used to approximate the L-J ellipsoid.

For a given polytope P, the stopping criterion is therefore membership of the ellipsoid $E_M = E\!\left(\frac{D}{(n+1)^2}, d\right)$ in P. If $E_M$ lies completely inside P, the method terminates and the center is declared tough (Grötschel et al., 1988). To determine this, for every constraint i we check whether

$A_i d + \frac{\sqrt{A_i D A_i'}}{n+1} \le a_i$

holds; this is exactly the condition that $E_M$ lies in the halfspace $A_i x \le a_i$. If the condition is violated for some i, the method continues using constraint i; otherwise it declares $E_M$ tough and terminates.
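To close the chapter with a concrete form of this termination test, the sketch below checks whether E(D/(n+1)^2, d) lies inside P = {x : Ax <= a}. The per-constraint inequality is the criterion as reconstructed above (the exact expression in the source is garbled), so treat it as an assumption rather than a verbatim transcription.

```python
import numpy as np

def is_tough(D, d, A, a):
    """Shallow-cut stopping test: the center d is 'tough' when the shrunken
    ellipsoid E(D/(n+1)^2, d) lies entirely inside {x : Ax <= a}, i.e. when
    A_i d + sqrt(A_i D A_i')/(n+1) <= a_i for every constraint i.
    Returns (True, None) if tough, else (False, index of a violating row)."""
    n = len(d)
    for i in range(A.shape[0]):
        if A[i] @ d + np.sqrt(A[i] @ D @ A[i]) / (n + 1.0) > a[i]:
            return False, i
    return True, None
```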


In the next chapter we discuss the computational efficiency of approximating the L-J ellipsoid with these ellipsoid methods, as well as the implications for the fat-shattering dimension and the generalization error bound.

CHAPTER 5
COMPUTATIONAL ANALYSIS FOR DOMAIN-SPECIFIC KNOWLEDGE WITH SUPPORT VECTOR MACHINES

In this chapter we compute the fat-shattering dimensions of different polytopes using various methods. In the first section we give an overview of our analysis, how we generated sample problems, and our testing methodology. In the second section we perform numerical analysis for box-constraint and polytope bounding objects. For box constraints we compute the space diagonal and upper bounds on gamma. For polytopes we perform a series of analyses with various ellipsoid methods and combinations, and compute a bound on the metric diameter of the polytope. We compare our results to an approximate L-J ellipsoid. Finally, we make a brief comparison between the luckiness framework and our methodology.

Overview

All of our computational analyses were performed using Matlab 6.5 and run on a Pentium III 900 MHz PC under Windows XP. We randomly generated polytopes of different dimensions. Table 5-1 shows that computing all the vertices of a polytope is time consuming, as the number of vertices grows very rapidly with the number of dimensions and constraints. Table 5-1 reports the number of vertices as well as the upper bound on the number of vertices given by McMullen's (1970) formula as the number of constraints and/or dimensions increases. Note that the minimum number of vertices in n dimensions is n+1, and that the size of the vertex set depends on the number of constraints and on degeneracy (for example, a vertex lying on more than n hyperplanes yields fewer vertices than the non-degenerate case in which every vertex lies on exactly n hyperplanes).


Table 5-1. CPU time and number of vertices for polytopes

Number of Constraints | Dimensions | CPU Time | Vertices | Upper Bound on Number of Vertices
 5 | 2 | <1    |    5 |    5
 5 | 3 | <1    |    6 |    6
10 | 2 | <1    |   10 |   10
10 | 3 | <1    |   16 |   16
20 | 3 | 1.76  |   36 |   36
40 | 3 | 5.528 |   76 |   76
50 | 3 | 9.11  |   96 |   96
10 | 4 | 1.832 |   27 |   35
20 | 4 | 4.99  |   82 |  170
40 | 4 | 13.64 |  196 |  740
20 | 5 | 28.46 |  170 |  272
40 | 5 | 100   |  572 | 1332
80 | 5 | 355   | 1456 | 5852
20 | 6 | 344   |  376 |  800

Our random polytope generation process can be summarized as follows. First, using m random weight vectors $A_1, \ldots, A_m$, we generate a set of m hyperplanes $h_i : A_i x = a_i$. The bias factor $a_i$ can be thought of as a scale and is therefore fixed at 1 in our analyses for convenience. These hyperplanes are then used to generate the inequalities $h_i : A_i x \le a_i$ defining halfspaces. Such a generation results in polytopes that always contain the origin, since x = 0 is a trivial solution to the set of inequalities. To randomize the process further, we moved the polytopes away from the origin by a random translation vector that depends on the diameter of the polytope: the amount of shift is determined by multiplying a random number drawn from the interval [0, 5] by the diameter of the polytope.
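The generation procedure just described can be sketched as follows. This is an illustrative reconstruction in Python, not the Matlab code used in the study; it omits the diameter-based scaling of the shift (which requires vertex enumeration) and simply uses a random translation, and boundedness of the generated polytope is not guaranteed for every draw and should be verified.

```python
import numpy as np

def random_polytope(m, n, rng=np.random.default_rng(0)):
    """Draw m random unit normals A_i and set a_i = 1 so the origin is
    feasible, then translate the polytope P = {x : Ax <= a} by a random
    vector t, which turns the right-hand side into a + A t."""
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    a = np.ones(m)
    t = rng.uniform(0.0, 5.0) * rng.standard_normal(n)   # random shift
    return A, a + A @ t

A, a = random_polytope(10, 3)
```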


Comparative Numerical Analysis for Box Constraints and Polytopes

Box constraints can be established if there exist an upper and a lower bound for every attribute. In Proposition 3.7 we provided a simple formula for the space diagonal (SD): $SD^2 = \sum_{i=1}^{n} (ub_i - lb_i)^2$, where $lb_i$ and $ub_i$ are the lower and upper bounds corresponding to the ith attribute.

Polytopes versus Hyper-rectangles

For some attributes, upper and lower bounds may not naturally exist to form a hyperrectangle, but such attributes may still be bounded via constraints involving combinations of other attributes. For example, in an intrusion detection problem there may be no natural upper bound on the connection duration attribute, yet for some protocols a relationship exists that enables one to assert such a bound. For instance, if the protocol type is ICMP (Internet Control Message Protocol), the connection is known to be momentary; for the two attributes duration and ICMP (binary), one could then assert a constraint of the form $duration \le M (1 - ICMP)$. If one can formulate all such relationships to obtain a bounded polyhedron, then one can compute the diameter of the polytope instead of the SD of the hyperrectangle. Furthermore, even if all we know is that the attributes are bounded above and below, allowing us to form a bounding hyperrectangle, any additional knowledge in the form of linear combinations of attributes may help refine the size of the input space and possibly reduce the diameter of the polytope.

Generating Polytopes

If a polytope is given in the form of a set of vertices, there are known polynomial-time algorithms to approximate the L-J ellipsoid containing the polytope. In Chapter 4 we mentioned two such studies (Rimon and Boyd, 1992, 1997; Kumar and Yildirim, 2004).


For example, the complexity of the ε-approximation algorithm in Kumar and Yildirim (2004) is $O(nd^3/\epsilon)$, where n is the number of vertices and d is the dimension. We use the ε-approximate L-J ellipsoid as a benchmark for our results. Note that as the number of vertices grows, such algorithms become impractical, or their application must be limited to a very large ε for time considerations. For this reason we computed ε-approximate L-J ellipsoids only for a limited number of cases. These ellipsoids are computed using space dilation. Table 5-1 summarizes the generated polytopes as well as their computed L-J ellipsoids. A total of 140 random polytopes were generated. We used GBT 7.0, which implements a method to compute the L-J ellipsoid with successive space dilation controlled by two parameters: the number of iterations (set to 50,000) and the accuracy (set to $10^{-8}$). Whenever one of the two limits is reached, the ε-approximate L-J ellipsoid is formed and the algorithm stops. Figure 5-1 shows the L-J approximation and the tightness of the ellipsoid's fit for four different parameter settings.


Figure 5-1. The L-J approximation for A) accuracy $10^{-4}$ and 500 iterations, B) accuracy $10^{-4}$ and 1,000 iterations, C) accuracy $10^{-4}$ and 5,000 iterations, D) accuracy $10^{-4}$ and 50,000 iterations.

In Table 5-1, the polytope diameter is computed by finding the maximum distance over all pairs of vertices. The column "L-J Diam" is the diameter of the ε-approximate L-J ellipsoid. Note that the diameter of the ellipsoid is not necessarily the same as the diameter of the polytope: although one could construct such cases, none of the random polytopes in our analysis had the same diameter as its L-J ellipsoid. Finally, the last column, "% Diff", indicates the percentage difference between the polytope's and the L-J ellipsoid's diameters. We observed that, for the same dimension, the difference decreases as the number of constraints increases, with the exception of the "5 constraints in 2 dimensions" set, which was greatly influenced by one outlier case.


Table 5-1. Summary of random polytopes generated in 2, 3 and 5 dimensions

Number of Constraints | Dims | Number of Datasets | Iters | Accuracy | Polytope Diameter | L-J Diam | % Diff
 5 | 2 | 20 | 50000 | 1e-8 | 12.48 | 15.61 | 25.08%
10 | 2 | 20 | 50000 | 1e-8 |  3.10 |  3.47 | 12.27%
 5 | 3 | 20 | 50000 | 1e-8 | 24.25 | 34.25 | 41.24%
10 | 3 | 20 | 50000 | 1e-8 |  5.07 |  6.57 | 29.44%
20 | 3 | 20 | 50000 | 1e-8 |  3.15 |  3.53 | 11.93%
50 | 3 | 20 | 50000 | 1e-8 |  2.47 |  2.69 |  8.57%
20 | 5 | 20 | 50000 | 1e-8 |  9.07 | 12.10 | 33.48%

In the next section we use the ellipsoid method to approximate the polytope diameter and the L-J ellipsoid.

Using the Ellipsoid Method to Upper Bound the Polytope Diameter

In Chapter 4 we discussed the theory of the ellipsoid method; in this section we apply it to approximate the L-J ellipsoid. The method starts with a ball, centered at the origin, that is large enough to contain the polytope, and then finds a feasible point through successive volume-reducing iterations. Finding such a ball, or guaranteeing that a given ball contains the polytope, is not entirely straightforward. The enclosing ball is found by using the theory of the encoding length of integer numbers and of matrices with integer data; the details can be found in Grötschel et al. (1988). We previously noted that all the coefficients of the polytope data matrix are integers (or can be assumed to be, since computers handle rational numbers and rationals can be converted to integers by clearing denominators). In short, the volume of every full-dimensional such polytope can be bounded by a term that involves the description length of its coefficients.


With the help of this theory it can be shown that if the data of the polytope $Ax \le a$ are all integers, then the ball $E\!\left(n\,2^{2(\langle A, a\rangle - n^2)} I,\, 0\right)$, centered at the origin with radius $\sqrt{n}\,2^{\langle A, a\rangle - n^2}$, contains the polytope, where $\langle A, a\rangle$ denotes the encoding length of the data.

In the following we start with the original central-cut ellipsoid method, which terminates once the feasible region is reached. In order to obtain a tighter ellipsoid by the time the feasible region is hit, we propose some modifications to this method, such as cutting on the maximally violated constraint, or preferring cuts more orthogonal to the eigenvector corresponding to the largest eigenvalue of the ellipsoid's characteristic matrix. We then illustrate how the shallow-cut ellipsoid method can be used to approximate the L-J ellipsoid.

Central Cut Ellipsoid Method

With central cuts, we make each cut through the center of the ellipsoid. Figure 5-2 illustrates the methodology on a randomly generated 3-dimensional polytope with 10 constraints. The method starts with a ball that contains the polytope, and the volume decreases iteratively. Note that the diameter does not necessarily decrease at every iteration even though the volume does; note also that the improvement in terms of bounding the diameter is not very significant.
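A naive version of the central-cut run just described might look as follows. The starting radius R0 is assumed to be supplied (for instance, from the encoding-length bound above), the cut is made on the first violated constraint, and the loop stops as soon as the center becomes feasible; this is a sketch for illustration, not the Matlab implementation used in the study.

```python
import numpy as np

def central_cut_method(A, a, R0, max_iter=1000):
    """Central-cut ellipsoid method for P = {x : Ax <= a}, starting from the
    ball of radius R0 centered at the origin (assumed to contain P).
    Returns the final (D, d); 2*sqrt(lambda_max(D)) bounds the diameter."""
    n = A.shape[1]
    D = (R0 ** 2) * np.eye(n)
    d = np.zeros(n)
    for _ in range(max_iter):
        violated = np.flatnonzero(A @ d > a)
        if violated.size == 0:
            break                       # center is feasible, stop
        c = A[violated[0]]              # first violated constraint
        b = D @ c / np.sqrt(c @ D @ c)
        d = d - b / (n + 1.0)
        D = (n ** 2 / (n ** 2 - 1.0)) * (D - (2.0 / (n + 1.0)) * np.outer(b, b))
    return D, d
```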


The overall performance of the central-cut ellipsoid method is given in Table 5-2. Note that a large number of iterations (column "Iters") is not necessarily an indicator of a good approximation, as the diameter can grow. The best performance for the central cut was observed with the 5x2 polytopes (5-constraint polytopes in 2 dimensions). We also note that for some problems the smaller-volume ellipsoids obtained by the method actually had larger diameters than the starting ellipsoids; for one problem in the dataset the ending diameter was significantly larger, which caused the average diameter reduction to be negative for that set (10x3). Perhaps the most important observation is that neither a large reduction in volume (column "Volume Reduct") nor a large number of iterations necessarily indicates a substantial diameter reduction (column "Diam Reduct"). Overall, the central-cut method did not perform very well.

Figure 5-2. The central-cut ellipsoid method illustrated on a 3-dimensional polytope with a diameter of 3.77. A) First iteration: the polytope is contained in a large ball centered at the origin with a diameter of 37.7. B) Second iteration: the new ellipsoid has a diameter of 39.98 and its volume is 84% of the initial ball's. C) Third iteration: the diameter is 42.41 and the volume is 71.1% of the initial ball's. D) Eighth and final iteration: the diameter is 34.67 and the volume is 30.4% of the initial ball's.


Table 5-2. The central cut ellipsoid method applied to the 2- and 3-dimensional datasets

Number of Constraints | Dims | Iters | Polytope Diam | LJ Diameter | Ending Diam | Diam Reduct | Volume Reduct
 5 | 2 |  9.20 |  5.69 |  6.47 |  29.30 | 48.49% | 77.88%
10 | 2 |  5.20 |  3.07 |  3.76 |  25.29 | 17.56% | 52.99%
 5 | 3 | 31.00 | 24.63 | 34.71 | 177.89 | 27.77% | 99.25%
10 | 3 | 16.40 |  6.06 |  7.33 |  60.77 | -0.26% | 87.13%
20 | 3 | 14.60 |  3.18 |  3.46 |  19.77 | 37.92% | 87.20%
50 | 3 | 16.50 |  2.40 |  2.61 |  35.56 | 10.98% | 89.87%
Average | | | | | | 20.8% | 82.39%

Figure 5-3. Volume reduction does not necessarily reduce the diameter.

The reason behind the poor performance of the central cut is rather intuitive. The naive implementation searches for violated constraints, picks the first violated one, and makes the cut according to that constraint; neither a selection criterion nor a cut pattern is utilized. Figure 5-3 shows a sequence of ellipsoids for a problem of size 50x3. In the figure, one can observe that most of the cuts in the sequence are made with respect to the same constraint, until that constraint is no longer violated.


Even though such consecutive cuts result in volume reductions, they may also cause the diameter to increase. In the remainder of this section we introduce a couple of different approaches that improve these results.

Maximum Violated Constraint

Instead of picking an arbitrary violated constraint, we pick the constraint that is violated the most. The amount of violation can be found by computing the distance between the center of the ellipsoid and the violated constraint,

$\frac{c'd - a_i}{\sqrt{c'c}},$

where $c'x \le a_i$ is the violated constraint and d is the center of the ellipsoid.

Table 5-3. The central cut ellipsoid method applied to the 2- and 3-dimensional datasets with maximum-violated-constraint selection

Number of Constraints | Dims | Iters | Polytope Diam | LJ Diameter | Ending Diam | Diam Reduct | Volume Reduct
 5 | 2 |  8.00 |  5.69 |  6.47 |  32.60 | 42.69% | 71.44%
10 | 2 |  3.40 |  3.07 |  3.76 |  30.46 |  0.70% | 41.38%
 5 | 3 | 25.75 | 24.63 | 34.71 | 189.83 | 22.93% | 96.96%
10 | 3 | 17.38 |  6.06 |  7.33 |  41.21 | 32.00% | 87.38%
20 | 3 | 11.20 |  3.18 |  3.46 |  27.77 | 10.97% | 79.58%
50 | 3 | 16.00 |  2.41 |  2.59 |  22.36 | 34.28% | 94.03%
Average | | | | | | 23.93% | 78.46%

When the maximum violated constraint is taken into account, the diameter reductions for some of the problems improved. Although the improvement was not consistent across all of the problems, Table 5-3 suggests that, despite smaller volume reductions, the maximum-violated-constraint approach achieved better diameter reductions than the plain central cut.
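The selection rule above amounts to one line of linear algebra; a minimal sketch:

```python
import numpy as np

def most_violated_constraint(A, a, d):
    """Index of the constraint A_i x <= a_i violated the most by the center d,
    measured by the Euclidean distance (A_i d - a_i)/||A_i||.
    Returns None when d is feasible."""
    slack = (A @ d - a) / np.linalg.norm(A, axis=1)
    i = int(np.argmax(slack))
    return i if slack[i] > 0 else None
```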


Shallow/Deep Cuts

During the ellipsoid method, for every violated constraint we calculate the amount of the violation. At each iteration we form another ellipsoid that contains the feasible area, the polytope. With the deepest cut we find the L-J ellipsoid of the region bounded by the violated constraint and the current ellipsoid. The computational results for deep cuts are given in Table 5-4. Using deep cuts, smaller-volume ellipsoids are reached within the same number of iterations or fewer; therefore the computational effort, as well as the number of iterations required, is no greater than for shallower cuts.

Table 5-4. The ellipsoid method with deep cuts for the 2- and 3-dimensional datasets

Number of Constraints | Dims | Iters | Polytope Diam | LJ Diameter | Ending Diam | Diam Reduct | Volume Reduct
 5 | 2 |  7.20 |  5.69 |  6.47 | 17.79 | 68.72% | 95.21%
10 | 2 |  5.80 |  3.07 |  3.76 | 16.93 | 44.80% | 88.04%
 5 | 3 | 18.00 | 24.63 | 34.71 | 81.01 | 67.11% | 99.88%
10 | 3 | 12.00 |  6.06 |  7.33 | 22.19 | 63.39% | 97.43%
20 | 3 |  4.40 |  3.18 |  3.46 | 17.28 | 45.72% | 95.92%
50 | 3 |  7.00 |  2.41 |  2.59 | 14.73 | 65.29% | 98.61%
Average | | | | | | 59.17% | 95.85%

In this document we refer to the deepest cut as a deep cut and to everything else as shallow. In order to determine a good value for shallowness numerically, we assign to every cut a parameter that controls its shallowness; in Chapter 4 we defined α to be such a parameter. However, a single α value that works well for all datasets cannot be fixed in advance, because the value of α depends on the amount of violation. As the maximum value $\alpha_{\max}$ that could be assigned (with $-1/n \le \alpha < 1$) depends on the amount of violation, we cannot test universally for the best α value. Instead, at each iteration we form a percentage grid for α between $-1/n$ and $\alpha_{\max}$ and observe the performance at the various levels. The results are shown in Table 5-5. We set the grid at 0.05 intervals and tested our dataset at each value.


In the table, since 100% corresponds to a deep cut, i.e., maximum greediness, the average of the best-performing percentages is reported as "average greediness" (column "Avg Greed"). Except for a few cases, we found that being greedy is the best approach computationally. This also makes sense in that, with deep cuts, the same constraint is not violated twice in a row, as the ellipsoid formed is the L-J ellipsoid of the portion of the previous ellipsoid, cut off by the constraint, that contains the polytope.

At the beginning of this section we indicated that consecutive cuts result in volume reductions but may also cause the diameter to increase. We believe the main reason is that the amount of penetration is larger with deep cuts: when shallower cuts are made, a feasible point may be found and the method terminated before that magnitude of penetration is achieved. An illustration is given in Figure 5-4.

Figure 5-4. A) A series of shallow cuts with respect to one constraint until it is no longer violated; the final ellipsoid generated in the figure has a diameter of 384. B) A deep cut with respect to the same constraint; the ellipsoid generated by this cut has a diameter of 330 and its volume is about 56.9% of the final ellipsoid in A.


Table 5-5. With deep cuts the method converges faster and the generated ellipsoids are smaller

Number of Constrs | Dims | Iters | Polytope | LJ Diam | Ending Diam | Diam Reduct | Volume Reduct | Avg Greed
 5 | 2 |  7.40 |  5.69 |  6.47 | 17.72 | 68.85% | 95.05% | 0.99
10 | 2 |  6.75 |  3.07 |  3.76 | 11.89 | 61.23% | 87.46% | 0.93
 5 | 3 | 16.25 | 24.63 | 34.71 | 94.48 | 61.64% | 97.26% | 1.00
10 | 3 | 12.50 |  6.06 |  7.33 | 21.74 | 64.13% | 97.39% | 0.99
20 | 3 |  5.40 |  3.18 |  3.46 | 15.39 | 51.65% | 95.47% | 0.97
50 | 3 |  7.00 |  2.41 |  2.59 | 14.73 | 65.29% | 98.61% | 1.00
Average | | | | | | 62.13% | 95.20% | 0.98

Proceeding After a Feasible Point is Found

Numerically, the results suggest that deep cuts are able to reduce the diameter more than shallow ones; this is also illustrated in Figure 5-4. Even in terms of serving the original purpose of the ellipsoid method, namely determining feasibility, taking shallow cuts does not make much sense, as one can reduce the computational effort by simply eliminating larger pieces of the ellipsoid. However, shallow cuts prove useful when a feasible point has been reached and we would like to continue to shrink the ellipsoid further. In other words, we can use shallow cuts if we would like to find a point that is not only feasible but also lies deep in the polytope: even after hitting the feasible region, the method proceeds to find another L-J ellipsoid containing the feasible region. Grötschel et al. (1988) define this framework as finding the minimum-volume ellipsoid whose center lies in the polytope and is deep enough that every constraint $c'x \le a$ of the polytope satisfies $c'd + \frac{1}{n+1}\sqrt{c'Dc} \le a$. A point that satisfies this condition is called a tough point; otherwise it is called a weak point.


Figure 5-5. The shallow-cut method can continue even after the feasible region is found. The largest ball is the starting ball centered at the origin; the next ball is the one whose center is feasible; the smallest ball containing the polytope is the one obtained by the shallow-cut method.

Since we found that deep cuts help to penetrate further, we used deep cuts to enter the feasible region and then proceeded with shallow cuts to find a "tough" point in the polytope (Figure 5-5). Basically, for every constraint we determine whether the center is tough with respect to it, and if not we form a new ellipsoid according to that constraint. When the new ellipsoid is formed, we set α to its maximum possible value. Table 5-6 gives the results of this approach. Especially in lower dimensions, the diameters we found (column "Ending Diam") are good approximations of the L-J diameters.


Table 5-6. Proceeding after a feasible point is found, choosing a violated constraint at random and assigning the maximum α value that can be assigned

Number of Constraints | Dims | Iters | Polytope | LJ Diameter | Ending Diam | Diam Reduct | Volume Reduct
 5 | 2 |  214.40 |  5.69 |  6.47 |  7.62 | 86.60% | 98.84%
10 | 2 |  270.00 |  3.07 |  3.76 |  4.43 | 85.56% | 98.05%
 5 | 3 |  366.00 | 24.63 | 34.71 | 38.77 | 84.26% | 99.98%
10 | 3 |  549.00 |  6.06 |  7.33 |  9.60 | 84.16% | 99.74%
20 | 3 |  674.20 |  3.18 |  3.46 |  6.40 | 79.90% | 99.30%
50 | 3 |  694.20 |  2.41 |  2.59 |  6.14 | 86.46% | 99.80%
20 | 5 | 1043.00 |  8.49 | 11.45 | 10.59 | 91.18% | 99.99%
Average | | | | | | 85.45% | 99.39%

We then modified the method of selecting the violated constraint by adopting one of the approaches tried above: among the violated constraints, we picked the one with the largest possible α value, in hopes of deeper penetration. As Table 5-7 shows, by cutting according to the constraint with the largest α value we are able to improve the results slightly.

Table 5-7. Proceeding after a feasible point is found, choosing the most violated constraint and assigning the maximum α value that can be assigned

Number of Constraints | Dims | Iters | Polytope | LJ Diameter | Ending Diam | Diam Reduct | Volume Reduct
 5 | 2 |  214.40 |  5.69 |  6.47 |  7.62 | 86.60% | 98.84%
10 | 2 |  270.00 |  3.07 |  3.76 |  4.43 | 85.56% | 98.05%
 5 | 3 |  366.00 | 24.63 | 34.71 | 38.77 | 84.26% | 99.98%
10 | 3 |  549.00 |  6.06 |  7.33 |  9.60 | 84.16% | 99.74%
20 | 3 |  674.20 |  3.18 |  3.46 |  6.40 | 79.90% | 99.30%
50 | 3 |  694.20 |  2.41 |  2.59 |  6.14 | 86.46% | 99.80%
20 | 5 | 1043.00 |  8.49 | 11.45 | 19.10 | 82.55% | 100.00%
Average | | | | | | 85.45% | 99.39%

Fat Shattering Dimension and Luckiness Framework

In this section we compare our approach with other methodologies for finding generalization error bounds for SVM learning. So far we have discussed the VC-dimension-based generalization bounds due to Vapnik and Chervonenkis (1971).


SVM learning proposes using a maximum-margin hyperplane to separate the classes from each other. In the generalization error bound performance table, Table 3-1, the VC-dimension-based error bounds are tighter than the other bounds; however, they do not support the idea behind SVM, as they require only separability of the classes. In the other two bounds, the distance of the furthest point from the origin is one of the determinants of the fat-shattering dimension as well as of the generalization bound. The first of these bounds assumes that the entire input space X is contained in a ball of radius R. For such an assumption to hold, the input space must be bounded. In that case our approach can be utilized to center the input space at the origin so that its diameter is minimized and the radius is bounded from above; this reduces the fat-shattering bound $fat_F(\gamma)$ as well as the generalization error bound.

Let us illustrate our results on one of our randomly generated 3-dimensional polytopes, formed by intersecting 5 halfspaces (Figure 5-6). Assume that our input space is contained in such a polytope. The diameter of the polytope is 39.67, and the diameter of the minimum ellipsoid that we found is 54.57, which is 3.7% larger than the ε-approximate L-J ellipsoid diameter of 50.59. The polytope has 6 vertices. The distances from the origin to the furthest and to the closest vertex are 63.07 and 58.69, respectively.


Figure 5-6. The ε-approximate L-J ellipsoid for the polytope.

In this scenario, the smallest origin-centered ball that contains a training set has a radius of at least 58.69 and at most 63.07, and the largest achievable margin is therefore 39.67 / 2 = 19.84. The VC-dimension is 4, δ is set to 0.10, and the error bound based on the VC-dimension is 0.012 for a sample size of 10,000. Table 5-8 illustrates how incorporating domain knowledge may help in bounding the errors; the precision of the table is sufficient only for comparing the effect of incorporating domain knowledge. We know that the smaller the fat-shattering dimension $fat_F(\gamma)$, the better the chance of achieving a tighter error bound.

In Table 5-8, part A illustrates the case where domain knowledge is not taken into account. In this case, the radius of the ball containing all the sample points is at least 58.69 (rows 1 through 5), and the margin is at most 19.84.


The radius of the ball containing all the points is 63.07 only if the maximum possible margin is achieved (rows 6 through 9). For a sample size of 10,000, the table suggests that the luckiness framework bounds are much looser. Part B contains the results with domain-specific knowledge taken into account; here R is 27.29 (half of 54.57, the diameter of the minimum ellipsoid found). In part B, the bounds based on the fat-shattering dimension with X contained in a ball of radius R are tighter than those in part A, due to the shift to the origin. The results for the luckiness framework in part B are also tighter than those in part A. However, if the input space is already included in a polytope, one does not need the luckiness framework at all. Rows 7 through 9 and 16 through 18 are included to show numerically what sample sizes are needed before the luckiness-based error bounds become informative.

Table 5-8. Performance comparison for a 3-dimensional input space with 5 constraints. Part A illustrates the case with domain knowledge not taken into account, and part B the case with domain-specific knowledge taken into account.

Part | R     | γ     | l        | Bound based on fat_F(γ), X in ball of radius R | Luckiness framework | Row
A    | 58.69 |  3    | 10000    | N/A  | N/A  |  1
A    | 58.69 |  6    | 10000    | N/A  | N/A  |  2
A    | 58.69 |  9    | 10000    | 0.95 | N/A  |  3
A    | 58.69 | 12    | 10000    | 0.53 | N/A  |  4
A    | 58.69 | 15    | 10000    | 0.34 | N/A  |  5
A    | 63.07 | 19.84 | 10000    | 0.22 | N/A  |  6
A    | 63.07 | 19.84 | 7838159  | 0.00 | N/A  |  7
A    | 63.07 | 19.84 | 46236744 | 0.00 | 0.99 |  8
A    | 63.07 | 19.84 | 10000000 | 0.00 | 3.27 |  9
B    | 27    |  3    | 10000    | N/A  | N/A  | 10
B    | 27    |  6    | 10000    | 0.61 | N/A  | 11
B    | 27    |  9    | 10000    | 0.28 | N/A  | 12
B    | 27    | 12    | 10000    | 0.16 | N/A  | 13
B    | 27    | 15    | 10000    | 0.11 | N/A  | 14
B    | 27    | 19.84 | 10000    | 0.07 | N/A  | 15
B    | 27    | 19.84 | 7838159  | 0.00 | 0.99 | 16


Table 5-8. Continued

Part | R  | γ     | l        | Bound based on fat_F(γ), X in ball of radius R | Luckiness framework | Row
B    | 27 | 19.84 | 46236744 | 0.00 | 0.25 | 17
B    | 27 | 19.84 | 50000000 | 0.00 | 0.23 | 18

Our approach has two main potential benefits compared to the luckiness framework. The first is indirect. When one or more of the attributes are not bounded, the input space cannot be contained in any ball of finite radius; the radius of the training set, however, is finite. A possible approach is to find the minimum enclosing ball (or ellipsoid) of the training points and linearly transform the input space so that the training set is centered at the origin. The luckiness framework due to Shawe-Taylor et al. (1998) does not suggest any linear transformation of the input space; this approach does not alter the framework, but the generalization bound becomes tighter due to the smaller radius.

The second benefit emerges when the input space is contained in a polytope. In this case the luckiness framework is not needed at all. Assume that the radius of the n-dimensional polytope P is R and that the upper bound we obtained by approximating the L-J ellipsoid is R'. We know that $\frac{R'}{n+1} \le R$, since the concentric ellipsoid formed by shrinking our L-J approximation lies completely inside P. Table 3-1 indicates that a bound depending on R' is numerically tighter than a bound obtained through the luckiness framework.


CHAPTER 6
SUMMARY AND CONCLUSIONS

In this research, after reviewing the literature on generalization error bounds and support vector machines, we proposed a novel method to incorporate domain-specific knowledge in support vector machines.

Generalization error bounds for SVM are in the form of probably approximately correct (pac) learning bounds. In Chapter 2 we started with the initial pac-learning bound due to Valiant (1984) and continued with the further development of error bounds since then. We summarized these bounds using our notation and then developed new generalization error bounds for SVM learning. Toward the end of Chapter 2 we related the fat-shattering dimension and error bounds to SVM, and we briefly reviewed the SVM learning technique.

In Chapter 3 we noted that, in order to enhance learning, domain knowledge may be taken into account, a point also motivated by the NFL theorems. We bounded the input space first with box constraints formed by upper and lower bounds on the attributes, and then considered a more general case in which the input space is bounded by a polytope. In each case we showed that the error bounds can be improved simply by finding the metric diameter of the convex body containing the input space. Although finding the space diagonal of a hyperrectangle is simple, we observed that finding the metric diameter of a polytope is a rather difficult (convex maximization) problem. We proposed using the diameter of minimal containing ellipsoids (L-J ellipsoids) of polytopes as an upper bound on the metric diameter.


In Chapter 4 we reviewed the theory behind the ellipsoid method and its variants; the ellipsoid method can be modified to approximate L-J ellipsoids. In Chapter 5 we tested our approach on a variety of randomly generated problem sets. First we computed approximate L-J ellipsoids for each of the randomly generated polytopes by using several alternative approaches. Then we computed hypothetical error bounds for those problem sets with and without our methodology. Using our methodology, we were able to improve the error bounds significantly.
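One well-known way to compute such an approximation, when the polytope's vertices are available, is Khachiyan's first-order algorithm for the minimum-volume enclosing ellipsoid. The sketch below (hypothetical code, offered only as an example of this family of methods and not necessarily the variant used in this study) returns an approximate L-J ellipsoid of a vertex set together with the length of its longest axis, which serves as the upper bound R̄ on the metric diameter.

    import numpy as np

    def approx_lj_ellipsoid(points, tol=1e-4):
        """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid
        {x : (x - c)^T A (x - c) <= 1} of a finite point set."""
        m, n = points.shape                      # m points in R^n
        Q = np.vstack([points.T, np.ones(m)])    # lift the points to R^(n+1)
        u = np.full(m, 1.0 / m)                  # initial barycentric weights
        err = tol + 1.0
        while err > tol:
            X = Q @ np.diag(u) @ Q.T
            M = np.einsum("ij,ji->i", Q.T @ np.linalg.inv(X), Q)
            j = int(np.argmax(M))                # most "outlying" point
            step = (M[j] - n - 1.0) / ((n + 1.0) * (M[j] - 1.0))
            u_new = (1.0 - step) * u
            u_new[j] += step
            err = np.linalg.norm(u_new - u)
            u = u_new
        c = points.T @ u                         # ellipsoid center
        A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / n
        return A, c

    # Hypothetical polytope given by an enumerated vertex set in R^3.
    rng = np.random.default_rng(1)
    vertices = rng.uniform(-10.0, 10.0, size=(20, 3))

    A, c = approx_lj_ellipsoid(vertices)
    axis_lengths = 2.0 / np.sqrt(np.linalg.eigvalsh(A))
    print(f"upper bound R-bar on the metric diameter: {axis_lengths.max():.2f}")

For polytopes given by inequality constraints, where enumerating vertices is expensive, shallow-cut variants of the ellipsoid method offer an alternative route to an approximate L-J ellipsoid.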


Briefly, we have laid out a framework to incorporate domain-specific knowledge in SVM. To the best of our knowledge, this is the first study that incorporates domain knowledge directly from the entire set of attributes, regardless of the labeling. The domain knowledge tightens the error bound for learning. A tighter bound can be interpreted as increased confidence and/or as sufficiency of a smaller sample for the learning task. We observed that the error bounds under the luckiness framework for SVM are often too loose to provide any useful insight about the sample and the learning task. When domain knowledge is utilized, not only does the luckiness framework become unnecessary, but the error bounds depending on the fat-shattering dimension also become tighter.

Our study also has several limitations and, therefore, several research opportunities. Perhaps the most important one is that if kernels are used, the attributes in the feature space change, and the bounding polytope may not remain a polytope after the mapping, negating the use of our methodology. This limitation can be addressed by considering the properties of the feature space in an attribute-independent manner. For example, since the polytope constraints may not exist in the feature space, the polytope cannot be formed there; instead, an analysis with the luckiness framework on the convex hull of the sample set in the feature space can be carried out.

Another limitation is that the interrelationships among the attributes may not necessarily be linear, and constructing a bounding polytope may not be trivial for some domains. Other ways to capture domain knowledge must be investigated. More importantly, our study addresses domain knowledge only in the form of boundary conditions; there is doubtless a need for studies that investigate other ways of incorporating domain knowledge in SVM learning.

Another limitation is that sometimes not all of the attributes can be bounded, and the input space can only be included in an unbounded polyhedron. In that case one or more of the attributes remains unbounded, our methodology does not apply directly, and the need for the luckiness framework, despite its limited value, remains. Intuitively, if some of the attributes can be bounded, then error bounds tighter than those of the luckiness framework may still be derived. An immediate extension of this work is therefore to handle input spaces that are unbounded in some variables by modifying the luckiness framework to tighten the error bounds using the constraints that do exist.

Perhaps the most striking but non-trivial extension of this work would be to integrate our methodology directly into SVM learning. During our studies we discovered several potential application domains for which our methodology could prove useful. As a future research project we would like to apply our methodology to such a domain by tailoring SVM learning accordingly. The challenge is that the domain knowledge we consider is not as strong and demanding as the labeled knowledge sets used in the study of Fung et al. (2001).


In conclusion, we believe more research should focus on incorporating domain knowledge in learning. Encoding prior knowledge about a domain into a learning problem will increase the confidence of the learner and, perhaps more importantly, the resulting accuracy of the induced concepts.


LIST OF REFERENCES

Alon, N., Ben-David, S., Cesa-Bianchi, N., Haussler, D., "Scale-sensitive Dimensions, Uniform Convergence, and Learnability", Journal of the ACM, Vol. 44, No. 4 (1997), p.615-631.

Anthony, M., Bartlett, P., "Function Learning from Interpolation", NeuroCOLT Technical Report Series, NC-TR-94-013, (1994). Retrieved June 20, 2005 from ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports/1994/nc-tr-94-013.ps.Z

Augier, M. E., Vendele, M. T., "An Interview with Edward A. Feigenbaum", Department of Organization and Industrial Sociology, Working Paper nr2002-16, (2002).

Avis, D., Devroye, L., "Estimating the Number of Vertices of a Polyhedron", Information Processing Letters, Vol. 73 (2001), p.137-143.

Aytug, H., He, L., Koehler, G. J., "Risk Minimization and Minimum Description for Linear Discriminant Functions", Working Paper, University of Florida, Gainesville, August 5, 2003. Retrieved June 20, 2005 from http://www.cba.ufl.edu/dis/research/03list.asp

Bartlett, P., Kulkarni, S. R., Posner, S. E., "Covering Numbers for Classes of Real-Valued Function", IEEE Transactions on Information Theory, Vol. 43, No. 5 (1997), p.1721-1724.

Bartlett, P., Shawe-Taylor, J., "Generalization Performance of Support Vector Machines and Other Pattern Classifiers", in Schölkopf, B., Burges, J. C., Smola, A. J., editors, Advances in Kernel Methods - Support Vector Learning, MIT Press, (1999), p.43-54.

Bertsimas, D., Tsitsiklis, J. N., "Introduction to Linear Optimization", Athena Scientific, Massachusetts, (1997).

Bland, R. G., Goldfarb, D., Todd, M. J., "The Ellipsoid Method: A Survey", Operations Research, Vol. 29, No. 6, (1981), p.1039-1091.

Blekherman, G., "Convexity Properties of the Cone of Nonnegative Polynomials", to appear in Discrete and Computational Geometry, arXiv preprint math.CO/0211176, (2002).

Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M., "Learnability and the Vapnik-Chervonenkis Dimension", Journal of the ACM, Vol. 36, No. 4, (1989), p.929-965.


Cristianini, N., Shawe-Taylor, J., "An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods", Cambridge University Press, Cambridge, UK, (2000).

Cristianini, N., Shawe-Taylor, J., Lodhi, H., "Latent Semantic Kernels", Journal of Intelligent Information Systems, Vol. 18, Issue 2/3 (2002), p.127-152.

Devroye, L., Györfi, L., Lugosi, G., "A Probabilistic Theory of Pattern Recognition", Springer-Verlag, NY, (1996).

Ehrenfeucht, A., Haussler, D., Kearns, M., Valiant, L. G., "A General Lower Bound on the Number of Examples Needed for Learning", Information and Computation, Vol. 82, No. 3, (1989), p.247-261.

Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., "The KDD Process for Extracting Useful Knowledge from Volumes of Data", Communications of the ACM, Vol. 39, No. 11 (1996), p.27-34.

Freddoso, A. J., Lecture Notes on Introduction to Philosophy. Retrieved June 20, 2005 from http://www.nd.edu/~afreddos/courses/intro/hume.htm

Fung, G., Mangasarian, O., Shavlik, J., "Knowledge-based Nonlinear Kernel Classifiers", Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, Technical Report 03-02, (2003). Retrieved June 20, 2005 from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/03-02.pdf

Fung, G., Mangasarian, O., Shavlik, J., "Knowledge-based Support Vector Machine Classifiers", Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, Technical Report 01-09, (2001). Retrieved June 20, 2005 from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-09.pdf

Grötschel, M., Lovász, L., Schrijver, A., "Geometric Algorithms and Combinatorial Optimization", Algorithms and Combinatorics: Study and Research Texts, Vol. 2, Springer-Verlag, Berlin, (1988).

Grünwald, P. G., "A Tutorial Introduction to Minimum Description Length: Theory and Applications", to appear in "Advances in Minimum Description Length: Theory and Applications" (edited by P. Grünwald, I. J. Myung, M. Pitt), MIT Press (forthcoming, Autumn 2004), (2004). Retrieved June 20, 2005 from http://homepages.cwi.nl/~pdg/

Gurvits, L., "A Note on a Scale-Sensitive Dimension of Linear Bounded Functionals in Banach Spaces", Theoretical Computer Science, 261 (2001), p.81-90.

Hristev, R. M., "The ANN Book", (1998). Retrieved June 20, 2005 from ftp://math.chtf.stuba.sk/pub/vlado/NN_books_texts/Hritsev_The_ANN_Book.pdf


Hush, D., Scovel, C., "Fat-Shattering of Affine Functions", Los Alamos National Laboratory, Technical Report LA-UR-03-0937, (2003).

Iudin, D. B., Nemirovskii, A. S., "Informational complexity and efficient methods for solving complex extremal problems", translated in Matekon: Translations of Russian and East European Mathematical Economics, Vol. 13 (1976), p.25-45.

Joachims, T., "Text Categorization with Support Vector Machines: Learning with Many Relevant Features", Proceedings of the European Conference on Machine Learning (ECML), Springer, (1998).

Joachims, T., Cristianini, N., Shawe-Taylor, J., "Composite Kernels for Hypertext Categorisation", Proceedings of the International Conference on Machine Learning (ICML), (2001).

John, F., "Extreme Problems with Inequalities as Subsidiary Conditions", Studies and Essays Presented to R. Courant on his 60th Birthday, Wiley Interscience, New York, (1948).

Khachiyan, L. G., "A Polynomial Algorithm for Linear Programming", Soviet Mathematics Doklady, 20, (1979), p.191-194.

Kohavi, R., Provost, F., "Glossary of Terms", Machine Learning, Vol. 30, Issue 2/3 (1998), p.271-274.

Kolmogorov, A., Tikhomirov, V., "ε-Entropy and ε-Capacity of Sets in Function Spaces", Translations of the American Mathematical Society, Vol. 17 (1961), p.277-364.

Kumar, P., Yıldırım, A. E., "Minimum Volume Enclosing Ellipsoids and Core Sets", to appear in Journal of Optimization Theory and Applications, (2004).

Lavallee, I., Duong, C. P., "A Parallel Probabilistic Algorithm to Find 'All' Vertices of a Polytope", Institut National de Recherche en Informatique et en Automatique, Research Report No. 1813, (1992).

Levin, A. Y., "On an Algorithm for the Minimization of Convex Functions", Soviet Mathematics Doklady, 6, (1965), p.286-290.

Liao, A., Todd, M. J., "Solving LP Problems Via Weighted Centers", SIAM Journal on Optimization, Vol. 6, No. 4, (1996), p.933-960.

Linial, N., Mansour, Y., Rivest, R. L., "Results on Learnability and the Vapnik-Chervonenkis Dimension", Information and Computation, Vol. 90 (1991), p.33-49.

Madden, M. G., Ryder, A. G., "Machine Learning Methods for Quantitative Analysis of Raman Spectroscopy Data", Proceedings of SPIE, the International Society for Optical Engineering, Vol. 4876, (2002), p.1130-1139.


Mangasarian, O., "Knowledge Based Linear Programming", Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, Technical Report 03-04, (2003). Retrieved June 20, 2005 from ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/03-04.pdf

McMullen, P., "The Maximum Number of Faces of a Convex Polytope", Mathematika, XVII, (1970), p.179-184.

Mitchell, T., "Machine Learning", McGraw Hill International, NY, (1997).

Nesterov, J. E., Nemirovskii, A., "Interior Point Polynomial Algorithms in Convex Programming", SIAM, Philadelphia, (1994).

Niyogi, P., Poggio, T., Girosi, F., "Incorporating Prior Information in Machine Learning by Creating Virtual Examples", IEEE Proceedings on Intelligent Signal Processing, Vol. 86 (1998), p.2196-2209.

Rimon, E., Boyd, S. P., "Efficient Distance Computation Using the Best Ellipsoid Fit", Technical Report, Stanford University, February 10, (1992).

Rimon, E., Boyd, S. P., "Obstacle Collision Detection Using Best Ellipsoid Fit", Journal of Intelligent and Robotic Systems, Vol. 18, (1997), p.105-126.

Rissanen, J., "Modeling by Shortest Data Description", Automatica, 14 (1978), p.465-471.

Sauer, N., "On the density of families of sets", Journal of Combinatorial Theory, 13 (1972), p.145-147.

Schapire, R. E., "The Strength of Weak Learnability", Machine Learning, Vol. 5 (1990), p.197-227.

Schölkopf, B., Burges, C., Vapnik, V., "Incorporating Invariances in Support Vector Learning Machines", in C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen, and B. Sendhoff, editors, Artificial Neural Networks (ICANN'96), Springer Lecture Notes in Computer Science, Vol. 1112, Berlin, (1996), p.47-52.

Schölkopf, B., Simard, P., Smola, A., Vapnik, V., "Prior Knowledge in Support Vector Kernels", in Jordan, M., Kearns, M., Solla, S., editors, Advances in Neural Information Processing Systems 10, Cambridge, MA, (1998), p.640-646.

Shawe-Taylor, J., Bartlett, P. L., Williamson, R. C., Anthony, M., "Structural Risk Minimization Over Data-Dependent Hierarchies", IEEE Transactions on Information Theory, Vol. 44 (1998), p.1926-1940.

Shor, N. Z., "Convergence rate of the gradient descent method with dilatation of the space", translated in Cybernetics, Vol. 6, No. 2 (1970), p.102-108.


The American Heritage Dictionary of the English Language, Fourth Edition, Houghton Mifflin Company, Boston, (2000).

Valiant, L. G., "A Theory of the Learnable", Communications of the ACM, Vol. 27 (1984), p.1134-1142.

Vapnik, N. V., Chervonenkis, A. Y., "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities", Theory of Probability and Its Applications, Vol. 16, No. 2, (1971), p.264-280.

Vapnik, N. V., Estimation of Dependences Based on Empirical Data, Springer-Verlag, NY, (1982).

Vapnik, N. V., Chervonenkis, A. Y., "The necessary and sufficient conditions for consistency in the empirical risk minimization method", Pattern Recognition and Image Analysis, Vol. 1, No. 3, (1991), p.283-305.

Vapnik, N. V., The Nature of Statistical Learning Theory, Springer-Verlag, NY, (1995).

Vapnik, N. V., Statistical Learning Theory, John Wiley & Sons, Toronto, CA, (1998).

Wagacha, P. W., "Induction of Decision Trees", (2003). Retrieved June 20, 2005 from http://www.uonbi.ac.ke/acad_depts/ics/course_material/machine_learning/decisionTrees.pdf

Wikipedia, (2005). Retrieved June 20, 2005 from http://en.wikipedia.org/wiki/Baconian_method#Baconian_Method

Whitley, D., Watson, J., "A 'No Free Lunch' Tutorial", Department of Computer Science, Colorado State University, Fort Collins, Colorado, (2004).

Wolpert, D. H., "The Supervised Learning No-Free-Lunch Theorems", NASA Ames Research Center, MS 269-1, (2001).

Wolpert, D. H., "The Existence of a Priori Distinctions Between Learning Algorithms", Neural Computation, Vol. 8, Issue 7 (1996a), p.1391-1420.

Wolpert, D. H., "The Lack of a Priori Distinctions Between Learning Algorithms", Neural Computation, Vol. 8, Issue 7 (1996b), p.1341-1390.

Wolpert, D. H., Macready, W. G., "No Free Lunch Theorems for Search", Santa Fe Institute, Technical Report SFI-TR-95-02-010, (1995).

WordiQ Encyclopedia, (2005). Retrieved June 27, 2005 from http://www.wordiq.com/definition/Learning_theory


Yudin, D. B., Nemirovskii, A. S., "Informational Complexity and Effective Methods for Convex Extremal Problems", Matekon: Translations of Russian and East European Math. Economics, Vol. 13, (1976), p.24-25.

Zhou, Y., Suri, S., "Algorithms for Minimum Volume Enclosing Simplex in R³", Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms, (2000), p.500-509.


BIOGRAPHICAL SKETCH

Enes Eryarsoy received the BS degree in Industrial Engineering from Istanbul Technical University in Turkey in the Spring of 2001. Since the Fall of 2001, he has been a graduate assistant in the Department of Decision and Information Sciences at the University of Florida. His academic interests include data mining, machine learning, and IS economics.