Accountability and teacher attitudes

University of Florida Institutional Repository
[DAITSS ingest report (UTF-8 XML, schema daitssReport.xsd): package UFE0014404_00001, IEID E20101118_AAAADU, ingested 2010-11-19T01:18:11Z under account UF, project UFDC. The report enumerates every archived file for "miller_k" — JP2 page images, JPEG derivatives and thumbnails, TIFF masters, OCR text (.txt), word-coordinate (.pro) files, miller_k.pdf, and the METS record UFE0014404_00001.xml — each with its byte size, depositor origin, preservation level (BIT/FULL), and MD5 and SHA-1 message digests. The per-file checksum listing is omitted here.]
22454 F20101118_AACJAK miller_k_Page_19.QC.jpg
4a5ee1b64c4c1476efa6f78f26c90bdc
a9631de975a62c4b182aa5a27187dc98ddc36c5a
45064 F20101118_AACITN miller_k_Page_23.pro
316ef0c790dc8f18f1d026e83ccfc962
12a3484f14c88745ad19cdea092b1e25b9d80c2a
65206 F20101118_AACIOQ miller_k_Page_50.jpg
d770f55e7a67369fb6e4109b061ee235
a07bb9906a0d7481e429730b32b603b9b78c6cb5
45715 F20101118_AACIYL miller_k_Page_43.pro
42245282bfce2b545bcc688481ddf875
4f3c04b8bd008d4345484fc66aa495af3a17d82e
6309 F20101118_AACJAL miller_k_Page_20thm.jpg
76b9abf08b457616cabb5feb1f76ba64
bc0c368316fcb92d68477e87340bcae0de0c95ef
3205 F20101118_AACITO miller_k_Page_05.txt
d8b3cdc012bb7e182c5c1afdf23651e3
7290b184b531b479f0bf6a33e1f1645e335c4500
84914 F20101118_AACIOR miller_k_Page_33.jp2
8b835a1a950f1feb21a1f919c08bbf46
fc7ccacaebd129f6f34f076ca324930e5c73b01b
46004 F20101118_AACIYM miller_k_Page_44.pro
17815d682b0332ad64da4946a1fff129
3bf15ef05ec89ef503daeeabd4908fc5a264c9bf
5052 F20101118_AACJAM miller_k_Page_23thm.jpg
0c7766b0f865178a2d9de6b64abbe878
83aa86e95eb55c601726415eeec443ef8a6b589f
29419 F20101118_AACITP miller_k_Page_08.pro
fb0df64f53b9354b30b8e3c55ae579a3
dac80794cd9f4eefb7d01b83a06d22b7f104a043
F20101118_AACIOS miller_k_Page_28.tif
5358f398f88e875f460d116a1911d994
4d7e4deda69c47d6db17fb0c2bd6d77960f343fc
37202 F20101118_AACIYN miller_k_Page_45.pro
7e0f22dbe5dc4dcf6013a2fbc7d2592d
ad60d9092b568d2b3bf779808eb94e6d714940a3
6220 F20101118_AACJAN miller_k_Page_25thm.jpg
2692c563c0a8b5cff4cef3d961279c20
fb6b9901c0738876e25c689b2335d8e9f9d2fbf0
658681 F20101118_AACITQ miller_k_Page_07.jp2
a09bdc9cd6ebaec1dfff3dca4898efaa
ede7b81b478d2875ddcf48c3acdf9334958e48d9
55336 F20101118_AACIYO miller_k_Page_47.pro
4f3fa5459f5e8e48a193521915535614
9e7c6cdf6761e7a06759659c54982d096aed3215
23607 F20101118_AACJAO miller_k_Page_26.QC.jpg
377f7afe96be13656e4af2e3857f9cbf
1f6051fbcb97203f45aabcebe2910edfeee72cf2
46160 F20101118_AACITR miller_k_Page_36.pro
7b3af4ea288fe26fb177dbcb648b7fca
2e6ef4681e476def555d093f08e27a1593ca3d39
7733 F20101118_AACIOT miller_k_Page_01.QC.jpg
340112bbb679069e14e212a4e4f084d2
96609f66478d47aa53352e336eb727b96ab4ca29
52932 F20101118_AACIYP miller_k_Page_49.pro
1ca52d37917419b8b452bbe8526cbe90
0d1e8c2c25541bb787e3149b12323d2eca74c062
6593 F20101118_AACJAP miller_k_Page_27thm.jpg
443e83b4161649e9d344840259c8f0e7
937e1144a3aaa943030dd2d0294d2caeec935bbc
23908 F20101118_AACITS miller_k_Page_21.QC.jpg
a875606f9fd6f2109267bb27cd2d0d07
dc02ea3fe22b8a3dd7ed6803a5a85ccb93b16878
1304 F20101118_AACIOU miller_k_Page_03thm.jpg
7b7c5b69895923cd66798cbeeda53732
5ecc9e620fd35145b6310d5e15111de12e4ee7de
21419 F20101118_AACIYQ miller_k_Page_51.pro
0ef5441563121ea25af9c0c144fcaea5
c3dd17f02d6e891aded36fcbabd334106d38ff4f
21784 F20101118_AACJAQ miller_k_Page_28.QC.jpg
25f2b638b1b5d347e5e80a3d18649d08
49a2b11b85319a840eaf0f507a111a0f6fdb650f
49096 F20101118_AACITT miller_k_Page_19.pro
0c7859a1075d2f2ea22030b990cc4aeb
6b6a72259db4c0648b0e3a9fc6b4763ba6ef7d86
1814 F20101118_AACIOV miller_k_Page_43.txt
953fe588d3871f80dac81d9a787eb941
1ef82434881edc05f42da8299983f55b81ede331
487 F20101118_AACIYR miller_k_Page_01.txt
0b7592b4181d6e651578bd41d40918c9
5093ab1c2992327ca4805cf7c37e53539ad679b0
5177 F20101118_AACJAR miller_k_Page_30thm.jpg
05f42ff5ace0dc52d8334ce93d4ef583
e0aec7d22c8e9e44b20616975a799ffdee1b4c5a
35649 F20101118_AACITU miller_k_Page_34.pro
4825da95672e8a239b9034f990be829d
80fce68b7d17053c30117b06d9001b8783b53a59
F20101118_AACIOW miller_k_Page_49.tif
0b9821fb27d513b741ef577ff0d24751
0a0370c0509db62b6a8c182ab0544b15f4b055e9
1872 F20101118_AACIYS miller_k_Page_06.txt
f8d46ba160aa279a6139bc3170a37260
ee600ee06c01ce2321f85d2ed3bbae7142914721
5411 F20101118_AACJAS miller_k_Page_31thm.jpg
7b6be14d6f0167f58078cd4656c8f868
d8e1b11466fbb986587e46d52456a9ee4e66ef73
70271 F20101118_AACITV miller_k_Page_38.jpg
31b73ce1bfecd3d000517908970db204
08a8564408e2fa917eed30838059ad7dda245264
F20101118_AACIOX miller_k_Page_12.tif
3bd3e4fedaf3a35cbe8326e1a094ad26
abec6c0a75a177225de34a2ac2d6adf85487a807
912 F20101118_AACIYT miller_k_Page_07.txt
1755bfc3856961965c3a38cafc92b227
f5eafa6a2297f074551fed2b8d66c72d40fa92ab
5422 F20101118_AACJAT miller_k_Page_33thm.jpg
ec5322a643d648e067f00b7d28dadcbd
ebded55d28526005d414bbc9e7ed834011b46668
127 F20101118_AACITW miller_k_Page_02.txt
198cb06ba0952e5aa2c2c16bfa7eab3e
59fcc0b2b5a169a73dfd1cc407bd56284bd24e02
F20101118_AACIOY miller_k_Page_06.tif
f5b7ab3569d2a8f78c975a736e84ca78
b161bac7c400633b532828210f918da9dbd41913
2044 F20101118_AACIYU miller_k_Page_12.txt
4a17b9c4747fe6448314dd66634b28d8
c40f073d595f7dc57b9fedbc6f08767df50bff03
22491 F20101118_AACJAU miller_k_Page_36.QC.jpg
09676b61ed39bda0b3ae73e2c9a05f4c
4e841b387a03452e8efe95ee0d67627404c1594b
46019 F20101118_AACITX miller_k_Page_11.pro
29e26e07456e5238b8f6ecdcb49851e2
58afc772b2a4280a8c822e5ab1e7c3b91cb7218d
21718 F20101118_AACIOZ miller_k_Page_10.QC.jpg
0686c3ed8425ec15b633af69af221dd9
40bf3a8956073f9321a7f9d31b3bd7dd8e994ebc
F20101118_AACIYV miller_k_Page_13.txt
32f74a98d78008854e0e36ef92457121
c7ea95f6883cedfd7a90c63349efbb0ef200912b
24436 F20101118_AACJAV miller_k_Page_37.QC.jpg
3004c9eb41de486db910e48a0dbcb9a2
d2a9e440f53a5e179fa0c2f3b2dbd6b84240b0e9
F20101118_AACIYW miller_k_Page_14.txt
2916ccdbcf141848938653b27829a7c0
3a838eab75a69f6457ac36148316251d4d333637
5335 F20101118_AACJAW miller_k_Page_39.QC.jpg
3667aca15b921ebefe4d0f996a1c6c82
ea60256bdee8b5023d3e06be9e6d2167305e6e32
F20101118_AACIRA miller_k_Page_44.tif
3257f186f9e62ab5c96dc734ef91f649
e3686bb1d5a63af660c7217577eaba4526cae5e6
6513 F20101118_AACITY miller_k_Page_26thm.jpg
93c029248f4d10122df4bbfa428f67b1
950f87027a71c055cf72563c8565ed3587e22203
1765 F20101118_AACIYX miller_k_Page_16.txt
7f64c0f48c7b66339df5dca742326858
546d5db76cf46844a67273c2d7f289903369e097
5775 F20101118_AACJAX miller_k_Page_40thm.jpg
636c3e53bfa54c0deb3bd41c31e20c04
a4ed925d58e8daedfececa0189bd2c1fcc059802
99285 F20101118_AACIRB miller_k_Page_36.jp2
1a691b92af51a1d6e04854241e113245
3997cf55c7752dbd5c50ef6ec1442db021d0ef94
23518 F20101118_AACITZ miller_k_Page_27.QC.jpg
c1592464371b59dd472a5fdf8cc59a09
68791e1d915b716c709ed33a701f3265e78b7b6a
23917 F20101118_AACJAY miller_k_Page_41.QC.jpg
44cc5a2f703fe94cd97366365384e9f2
543f2fe4de76b2a3be35e7f8e58c0f1c248c472c
18233 F20101118_AACIRC miller_k_Page_33.QC.jpg
5628fda6ed458c0c618a9c7c3e22b284
f4f4884c74bad6d3a0ada0fb39f50c8d52274f42
2310 F20101118_AACIYY miller_k_Page_17.txt
8438c084c9fc10cb88fe93f883329463
17631a87f7d4a58d11febf86a728b6cc6a06da6a
24772 F20101118_AACJAZ miller_k_Page_42.QC.jpg
c2fa2ce6457ade1ded84b0dd61d7066d
d32011a7dffba7892e24f852f9c4576b297d3956
57518 F20101118_AACIRD miller_k_Page_33.jpg
1e42e3393eb7cb5e2205462c11bf70c6
e40bebbb3ac61553fcb8f9330800ad19937109d4
52788 F20101118_AACIWA miller_k_Page_30.jpg
f4a11714d88144f53dded7f47f72afc2
ae6111386494ca00f87490cc4f08feddfcf42cad
2005 F20101118_AACIYZ miller_k_Page_18.txt
25532dbaebcea28324d785bca82f884e
bdfcee0e62fc1461b6f28c12061bab39eda5e1d1
15465 F20101118_AACIRE miller_k_Page_39.jpg
2e3ec62b24de470304e4f79f11c196b1
3fadaf3d25a70e1bf909de3894d128e941567fbf
49693 F20101118_AACIWB miller_k_Page_32.jpg
f6fe6e0276b29757c88fbc0d840e1576
10952909c5eea3993163c15488f42364053ee343
23156 F20101118_AACIRF miller_k_Page_38.QC.jpg
1771a427c24e3d4357b3e00105c00fbf
d6dd961f81d6b1a6c3d2c078842e491d0a12a3eb
59921 F20101118_AACIWC miller_k_Page_40.jpg
88fb5f775de8cada9fdb2d8c2390e298
549ec615cb964c0387d86d74eb231eb1ee2d17e5
49839 F20101118_AACIRG miller_k_Page_08.jpg
a2be558e66bb11b850c177ffa58646a3
7ce330e65129a134ff728403adccb2cacd7354ef
73744 F20101118_AACIWD miller_k_Page_41.jpg
4201115c9d999e3eba1c9e39abcd2e42
3ea413945d47284ccb9e7112aa08446a77ac87bf
F20101118_AACIRH miller_k_Page_28.txt
73b097e8aa024d64a7ebac48f12055c0
5ecf5ed3925937985f894217741e5f1224f06aca
49589 F20101118_AACIWE miller_k_Page_46.jpg
12506fa3efb4c76e1a46df826ef609f0
cfeecd0af9803fe39e4f53e792d720ad7e0f9b73
1051978 F20101118_AACIRI miller_k_Page_05.jp2
0c5eae71258861a3bf154e2fe33d1cc8
7d17d750d25d61d27a91e3e588af035a3d714398
60315 F20101118_AACIWF miller_k_Page_47.jpg
97d84c2bbe1e852e2f92ec2ce5731cff
c342d14f43c60861dbdd6895eee91cdeb9f868f4
2225 F20101118_AACIRJ miller_k_Page_37.txt
25c812fdab0d8663702ba0077a1ba1cf
c55df6600ce0e4ea9349fb5f805e9ee6ad1bb991
86470 F20101118_AACIWG miller_k_Page_49.jpg
1daf39659dde2147e06305ef5631ae5a
b3b057daa0ae504152b49ae0cc1276de9ebe1fd8
120891 F20101118_AACIRK miller_k_Page_17.jp2
e4601575825f67186dd4385b2222846b
d6f82749dc2bc6dfe1accd6b44f66b60537d6bc2
70945 F20101118_AACIWH miller_k_Page_04.jp2
3585fa134e2213bd989dc79d7009ac14
52a0298cd1105fcda78267881724ef36b8deb9dc
68109 F20101118_AACIWI miller_k_Page_08.jp2
f90988a40ceab6a5e3d8990ee5d7bac4
f1f32a19a245d78e8a0ff3772b42de02ba11da39
64278 F20101118_AACIRL miller_k_Page_22.pro
7ea825b46134789a755ab71a92eeb0ff
288a25d60f6af3dcfc17451a120f0bdabb9b9033



PAGE 1

ACCOUNTABILITY AND TEACHER ATTITUDES: CONSEQUENTIAL VALIDITY EVIDENCE FOR FLORIDA’S EDUCATIONAL ACCOUNTABILITY SYSTEM

By

KATHRYN ELIZABETH MILLER

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS IN EDUCATION

UNIVERSITY OF FLORIDA

2006

PAGE 2

Copyright 2006 by Kathryn Elizabeth Miller

PAGE 3

To the LaFrance women

PAGE 4

ACKNOWLEDGMENTS

I would like to thank my committee members (Dr. M. David Miller and Dr. Anne Seraphine) for guiding me through each semester, fielding all my questions throughout my graduate career. I would also like to thank my fellow students (Jann Macinnes, Jenny Bergeron, and Janna Underhill) for all the study groups they pioneered. If not for their complete dedication to academics, my experience would have been greatly diminished. I would also like to thank Elaine Green and Linda Parsons for keeping me sane during my teaching assistantships. Many other people have offered support or motivation, or simply inspired me in innumerable ways. Special thanks go to Joshua Marland, Crystal Calkins, Janna Baumann, Samuel Hanna, Sally King, Mico Adorno, Laura McCoy, Amy Godfrey, Allison Knowlton, Andrew Brunelle, Philip Moring, and Anthony Herman. I would also like to thank my mother, Jacquie Hernandez, for her unconditional love and support. I thank my brother, Steven Miller, and my sister, Melissa Miller. They make life fantastic and challenge me at every turn. My aunt, Gisele Andrade, has always been a role model of mine. Her strength is the motivating force in my life.

PAGE 5

TABLE OF CONTENTS

page

ACKNOWLEDGMENTS .......... iv
LIST OF TABLES .......... vii
ABSTRACT .......... viii

CHAPTER

1 INTRODUCTION .......... 1
    National and State Accountability .......... 2
        No Child Left Behind Act of 2001 .......... 2
        Adequate Yearly Progress as Determined in Florida .......... 4
        Florida’s A+ Plan .......... 5
        Sunshine State Standards .......... 6
        Florida Comprehensive Assessment Test .......... 6
    Effects of Testing on Teachers .......... 7

2 LITERATURE REVIEW .......... 8
    Validity Argument .......... 8
    Validity Issues in High-Stakes Testing .......... 10
        Construct-Irrelevant Variance .......... 10
        Test Preparation .......... 12
        Sources of Unreliability for the No Child Left Behind Accountability Designs .......... 13
    Positive Consequences of High-Stakes Testing .......... 14

3 METHODS .......... 16
    Respondents .......... 16
    Materials .......... 17
    Procedure .......... 18
    Analysis Approach .......... 18

4 RESULTS .......... 22
    Research Question 1 .......... 27
    Research Question 2 .......... 28

PAGE 6

    Research Question 3 .......... 29
    Research Question 4 .......... 30

5 DISCUSSION .......... 32
    Discussion of Findings .......... 32
        Research Question 1 .......... 32
        Research Question 2 .......... 32
        Research Question 3 .......... 33
        Research Question 4 .......... 34
        Implications of the Descriptive Statistics .......... 34
    Limitations of this Analysis .......... 36
        Response Rate .......... 36
        Sampling Issues .......... 36
    Suggestions for Future Research .......... 36
    Closing Remarks .......... 37

ACCOUNTABILITY AND TEACHER ATTITUDES SURVEY INSTRUMENT .......... 38

LIST OF REFERENCES .......... 41

BIOGRAPHICAL SKETCH .......... 43

PAGE 7

LIST OF TABLES

Table .......... page

3-1 School Demographic Information .......... 17
3-2 Reliability Statistics for New Variables .......... 19
4-1 Descriptive Statistics for Survey Statements .......... 22
4-2 Percent Agreement Statistics for Survey Statements .......... 25
4-3 Descriptive Statistics for New Variables .......... 28
4-4 Descriptive Statistics for New Variables by GRADE .......... 29
4-5 Summary Statistics for Simple Regression .......... 30

PAGE 8

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Arts in Education

ACCOUNTABILITY AND TEACHER ATTITUDES: CONSEQUENTIAL VALIDITY EVIDENCE FOR FLORIDA’S EDUCATIONAL ACCOUNTABILITY SYSTEM

By

Kathryn Elizabeth Miller

May 2006

Chair: David Miller
Major Department: Educational Psychology

Consequences that arise from the Florida Comprehensive Assessment Test (FCAT), in accordance with the No Child Left Behind Act of 2001 (NCLB), impact the overall validity of the accountability design established by Florida. In developing a validity argument for uses and interpretations of the FCAT, 75 teachers employed by six different schools responded to a survey ascertaining their opinions on NCLB, Florida’s A+ Plan, the Sunshine State Standards, and the FCAT. Data were analyzed looking at the areas of accountability as separate entities and the attitudes toward each area. Data also were analyzed to examine differences across teachers, to uncover factors that may influence a teacher’s view of accountability.

PAGE 9

CHAPTER 1
INTRODUCTION

The No Child Left Behind Act of 2001 (NCLB) marked the beginning of a new era in education, where each state became responsible for creating a system of educational accountability. Accountability refers to the process of holding school districts, schools, teachers, and students responsible for learning. An accountability system is a structure for making decisions and applying consequences based on information collected from assessments. Decisions and consequences that result from accountability systems range from allocation of funds to third-grade promotion. The high-stakes nature of these accountability designs necessitates a thorough examination of their validity and of the assessments they encompass. Our purpose was to gather information that can be used toward an argument for validity.

Validity is an overall appraisal of the degree to which an assessment’s use and interpretation are adequate and appropriate (Messick, 1995). A validity argument is made by collecting empirical evidence and providing theoretical rationales for the uses and interpretations (Haertel, 1999).

We focused on providing evidence for a validity argument for Florida’s use of the Florida Comprehensive Assessment Test (FCAT). Developing a validity argument is multifaceted and includes examining consequences that arise from high-stakes testing. Our study mainly addressed the new accountability legislation and its consequences for teachers in the state of Florida.

PAGE 10

To better illustrate Florida’s accountability system, factors that contribute to it or are included in it are outlined in the next sections. NCLB is of interest because it is the new legislation that must be adhered to by each state. In fulfilling NCLB, Florida’s system includes measuring Adequate Yearly Progress (AYP), the A+ Plan, the Sunshine State Standards (SSS), and the FCAT.

National and State Accountability

No Child Left Behind Act of 2001

NCLB is at the center of educational accountability in every state. NCLB is the new version of the Elementary and Secondary Education Act written into law in 1965 and provides billions of dollars in federal funding for various educational programs (USDOE, 2006a). The purpose of NCLB is to ensure that every child in America is able to meet the high learning standards of the state the child resides in. The act, though intricate and complex, is founded on four basic principles: stronger accountability, increased flexibility and local control, more options for parents, and emphasis on proven teaching methods (USDOE, 2006b). NCLB aims at improving education all over the United States and raising the bar for what is deemed acceptable learning. Goals of NCLB are numerous, specific, and lofty. The goals most pertinent to our study (No Child Left Behind, 2002) are listed below.

All students will reach high standards, at a minimum attaining proficiency or better in reading and mathematics by 2013-2014.

By 2013-2014, all students will be proficient in reading by the end of the third grade.

All limited English proficiency (LEP) students will become proficient in English.

All students will be taught by highly qualified teachers.

PAGE 11

All students will be educated in learning environments that are safe, drug-free, and conducive to learning.

All students will graduate from high school.

NCLB requires that each state develop its own accountability system that is valid, reliable, and meets all requirements outlined in the act. The degree to which each system is valid and reliable is individually established by each state. The 2005-2006 school year marked the deadline for testing all students in grades 3-8 in mathematics and reading annually. Science must be included in the testing regime by the 2007-2008 school year, at least once during elementary, middle, and high school. All assessments must be aligned with the content standards established by the state. All students must be proficient by 2013-2014. Each state determines its own guidelines for proficient status (Lane, 2004). The 2002-2003 school year marked the deadline for each state to furnish annual report cards of their progress. The report cards include information on student achievement by district and subgroup. Minority students, students with disabilities, LEP students, and children from low-income families are all included in the annual report cards (No Child Left Behind, 2002).

Florida’s accountability system, in fulfilling NCLB, includes AYP, Florida’s A+ Plan (school grades), individual student progress toward (or consistent proficient levels of) mastery on the FCAT, and a return on investment. Return on investment is a measure that relates dollars spent to student achievement (FDOE, 2005b). These elements are designed to provide a cohesive and extensive representation of a school’s performance and are made available to parents, educators, and members of the community.

PAGE 12

Adequate Yearly Progress as Determined in Florida

“Adequate Yearly Progress measures the progress of all public schools, and school districts toward enabling all students to meet the state’s academic achievement standards” (FDOE, 2005b, p. 1). AYP targets the performance of every subgroup and aims to ensure that in 1 year’s time, students are learning 1 year’s worth of knowledge as delineated in the content standards. Subgroups are created on the basis of race or ethnicity, socioeconomic status (SES), disability, and English proficiency. States are required to define AYP for the state, school districts, and schools in a way that will facilitate all students meeting the state’s achievement standards by 2014 (FDOE, 2005b).

Florida uses the FCAT to ascertain each student’s level of proficiency, a necessity for making AYP. There are five possible achievement levels one can attain from the single FCAT score. The levels range from 1 to 5. Level 1 is below basic, Level 2 is basic, Levels 3 and 4 are proficient, and Level 5 is advanced. All students scoring a 3 or above are considered proficient for classification purposes. Florida also has a separate assessment for students with disabilities who would not be able to earn a standard diploma. The Florida Alternate Assessment Report (FAAR) also uses a 5-point scale to determine a student’s proficiency level on the SSS. The FAAR scale is as follows: Levels 0 or 1 are below basic, Level 2 is basic, Level 3 is proficient, and Level 4 is advanced (FDOE, 2005b).

For a school in Florida to make AYP, 95% of all students and all identified subgroups must partake in the FCAT or alternative assessment (when applicable). A subgroup must include at least 30 students to be included in the AYP calculations. All goals must be met by 2014, and a blueprint for the progression toward that goal must also be agreed on and met annually (FDOE, 2003). For example, if Florida declares that 68%

PAGE 13

of students will be proficient in mathematics by 2007, then that goal must be realized to make AYP. Additionally, there must be a 1% increase in the percentage of students proficient in writing. If the annual objectives for reading or mathematics are not met by subgroups in a school or district, AYP can still be met if the percentage of nonproficient students decreased by 10% from the previous year. It is not possible to make AYP if (under Florida’s A+ Plan) a school receives a D or F (FDOE, 2003).

Florida’s A+ Plan

Florida’s A+ Plan is a grading system for schools: A is the highest grade a school can receive and F is the lowest (the A+ Plan uses a traditional grading scale of A, B, C, D, and F). To make AYP, a school must first receive a grade of C or higher (FDOE, 2005d).

A = 410 points or more, meets AYP for the bottom 25% in reading, gains for the bottom 25% are within 10 points of gains for all students, and 95% of eligible students are tested

B = 380 points or more, meets AYP for the bottom 25% in reading within 2 years, and 90% of eligible students are tested

C = 320 points or more, meets AYP for the bottom 25% in reading within 2 years, and 90% of eligible students are tested

D = 280 points or more and 90% of eligible students are tested

F = Fewer than 280 points or less than 90% of eligible students tested

A school can earn points if its students do well on the assessments or improve from the previous year. Schools earn one point for every percent of students scoring 3, 4, or 5 in mathematics. Schools will also receive one point for every percent of students scoring a 3, 4, or 5 in reading. One point is also given for each percent of students who score a 3 or above on the writing assessment. For each percent of students who gain one achievement level and for students who maintain a level of 3 or above, one

PAGE 14

point is awarded. One point is awarded for each percent of students in Levels 1 or 2 demonstrating more than one year’s growth. One point is awarded for each percent of the lowest performing readers (bottom 25%) making learning gains from the previous year (FDOE, 2005d).

Sunshine State Standards

The SSS were approved by the Board of Education in Florida in 1996. The standards provide expectations for student performance and achievement. The standards were written in seven subject areas and aimed to allow flexibility in curriculum, catering to the needs of different schools. In recent years, changes to the SSS were made to better accommodate new accountability legislation. Grade level expectations for the major subject areas were added and are guidelines for the FCAT. The subject areas outlined in the SSS are music and fine arts, foreign language, language arts, mathematics, science, social studies, physical education, and health (FDOE, 2005e).

Florida Comprehensive Assessment Test

The FCAT is the only test administered statewide that is designed to align with the SSS. The FCAT determines a student’s level of achievement at each grade level, making it the primary AYP determinant. The FCAT has two major components, a norm-referenced test (NRT) and a criterion-referenced test (CRT). The NRT currently being used as part of the FCAT is the Stanford 10. It is used to compare individual students in Florida to national norms (FDOE, 2005c). NRTs are designed to maximize response variance and artificially spread the scores (Kohn, 2000). Reading and mathematics are the only subjects measured in the NRT. The CRT is designed to measure a student’s level of mastery of the SSS in reading, writing, mathematics, and science. These scores are not designed to be used for

PAGE 15

comparison, but as a comprehensive exam measuring knowledge gained inside the classroom. All students in grades 3-10 are required to take the FCAT Reading and Mathematics. Students in grades 4, 8, and 10 also take the FCAT Writing, and grades 5, 8, and 11 take the Science portion of the FCAT (FDOE, 2005c).

The FCAT is a high-stakes assessment because of the consequences attached to the scores (Kohn, 2000). The results impact grade-to-grade promotion, funds allocated to schools, high school graduation, teacher rewards, and how the school is viewed by the community. Intuitively, the stakes attached to the FCAT put many teachers under an extreme amount of pressure to increase student performance. The pressures felt by teachers and their opinions have not been extensively studied. Our study aims to examine the consequences of high-stakes testing for teachers.

Effects of Testing on Teachers

The fundamental rationale for our study is to contribute to the validation of Florida’s accountability system as a whole by examining teacher opinions. Our study examines opinions for all areas in the accountability design, focusing primarily on the uses and interpretations of the FCAT. Validation is essential because of the consequences attached to FCAT scores. AYP and school grades are directly determined by FCAT scores. Schools that do not make AYP face serious consequences. To illustrate, after 5 years of failing to make AYP a school will be identified for restructuring. Restructuring entails implementing “significant alternative governance actions, state takeover, the hiring of a private management contractor, converting to a charter school, or significant staff restructuring” (FDOE, 2005b). These and other consequences are the driving force behind our study. Teacher opinions about the validity of each area of accountability will be surveyed, specifically FCAT content, SSS, Florida’s A+ Plan, and NCLB.
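The classification rules laid out in this chapter (FCAT Levels 1-5, with 3 and above counting as proficient, and the A+ Plan point thresholds) can be sketched in a few lines of Python. This is an illustration under our own naming, not an official FDOE implementation; in particular, the separate reading-AYP conditions attached to grades A through C are collapsed into a single flag.

```python
def fcat_category(level: int) -> str:
    """Map an FCAT achievement level (1-5) to its reporting category."""
    categories = {1: "below basic", 2: "basic", 3: "proficient",
                  4: "proficient", 5: "advanced"}
    if level not in categories:
        raise ValueError("FCAT levels run from 1 to 5")
    return categories[level]


def is_proficient(level: int) -> bool:
    """A student scoring Level 3 or above counts as proficient for AYP."""
    return level >= 3


def a_plus_grade(points: int, pct_tested: float,
                 reading_ayp_met: bool = True) -> str:
    """Assign a letter grade under the A+ Plan point thresholds.

    Simplified sketch: the real A/B/C criteria also involve AYP for
    the bottom 25% of readers, represented here as one flag.
    """
    if points >= 410 and pct_tested >= 95 and reading_ayp_met:
        return "A"
    if points >= 380 and pct_tested >= 90 and reading_ayp_met:
        return "B"
    if points >= 320 and pct_tested >= 90 and reading_ayp_met:
        return "C"
    if points >= 280 and pct_tested >= 90:
        return "D"
    return "F"
```

For example, a school earning 385 points with 92% of eligible students tested would receive a B under this sketch, while the same point total with only 85% tested would fall to an F because of the testing-percentage requirement.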


CHAPTER 2
LITERATURE REVIEW

The significance of a methodical examination of validity is emphasized in measurement literature and widely studied by assessment specialists. Proper validation methodology and threats to validity are outlined throughout this literature review, providing a foundation for our study.

Validity Argument

The concept of forming a validity argument is that "validation should be a process of constructing and evaluating arguments for and against proposed test interpretations and uses" (Haertel, 1999, p. 5). In a meeting of the National Council on Measurement in Education (NCME) in 1999, President Edward Haertel explained common flaws in the validation of an assessment's use. Planning a validity argument is often done by going down a checklist. Checking off items shows what has been accomplished and leaves little room for discovering evidence against the intended interpretation (Haertel, 1999). According to Cronbach, "the task of validation is not to uphold a test, practice, or theory. Ideally, validators will prepare as debaters do. Studying a topic from all angles, a debater grasps the arguments pro and con so well that he or she could speak for either side" (Cronbach, 1988, p. 3). Haertel points out that though validation, in practice, may be flawed, few people are willing to investigate or change the uses and interpretations of tests (Haertel, 1999).

Validity is the degree of appropriateness and adequacy for the intended use and interpretation of an assessment. According to Messick (1995), "Validity is not a property


of the test or assessment as such, but rather the meaning of the test scores". Validity is a unitary concept and evidence must be collected from different perspectives. There are six aspects of validity to consider when developing a validity argument, emphasizing the content, substantive, structural, generalizability, external, and consequential bases for construct validity (Messick, 1995). A brief overview of ways to collect evidence for each aspect of validity is followed by major validity issues surrounding high-stakes testing and validation practices in general.

Collecting evidence for the content aspect of validity includes content relevance, proper representation of the learning objectives to be measured by the items, and overall item quality. Collecting evidence for the substantive aspect of validity refers to test respondents engaging in the proper mental processes required by each assessment task. Collecting evidence for the structural aspect of validity includes the extent to which the internal structure of the assessment, individual items, and scoring rubrics align with the construct domain of interest. Collecting evidence for the generalizability aspect of validity studies the extent to which score interpretations generalize to and across different population groups, settings, and tasks. Collecting evidence for the external aspect of validity is the process of using already established tools and practices to judge the quality of the new assessment or system. Collecting evidence for the consequential aspect of validity consists of discovering the consequences, both actual and potential.

We were concerned primarily with consequential evidence, namely the effects of high-stakes testing on teachers. After teachers receive the results, they can lend judgment to the comparability of the test scores and the students' abilities during class time.
Also, teachers see the direct effects of testing on students before, during, and after testing times, hence affording another viewpoint on the appropriateness of the FCAT's use. The scores generated from the FCAT reflect the knowledge acquired in class and many


other factors. The FCAT CRT measures specific domains of knowledge acquired in class; factors contributing to the student's score external to each domain are forms of error (Haladyna & Downing, 2004). Potential contributing factors to error are discussed in the next sections. These are important considerations for our study because teachers are both observers and contributing sources of error.

Validity Issues in High-Stakes Testing

When high stakes are attached to an assessment, test developers, school officials, and decision makers ensure several aspects of validity. The reliability indices will undoubtedly be above .9 or .95. The content measured in the assessments will be directly drawn from the standards set forth by the state and taught in every classroom. Despite the attention placed on validation, numerous problems arise in high-stakes testing. The next sections describe some validity issues examined in our study.

Construct-Irrelevant Variance

Construct-irrelevant variance (CIV) is systematic error variance or bias. An examination of the contributing sources of CIV is important for our study because teachers are instructors, test preparers, and test administrators. Teachers have the propensity to impact CIV in a myriad of ways, thereby shaping validity.

Lord and Novick (1968, p. 43) describe "systematic error as an undesirable change in true score". Systematic error correlates with both true and observed scores because each individual within the group is either affected or unaffected by the CIV (Haladyna & Downing, 2004). To illustrate, suppose a student scores a 130 on an IQ test that has a standard error of measurement (SEM) of 3. According to classical test theory, the SEM is derived mathematically, consistent across test takers, and accounts for random error (Crocker & Algina, 1986). Other factors may have been measured systematically into the student's score


having nothing to do with the construct of intelligence. Anything besides the construct of interest possibly measured with the construct is CIV (Haladyna & Downing, 2004).

Contributing sources of CIV can be specific to an individual or to a group. An example of systematic error that is constant for an entire group is a rater who is more strict than his or her colleagues. If the rater administering the IQ test scores too stringently, that rater is contributing to systematic error, measured with the true score of the student. Every student assessed by that rater will be at a slight disadvantage. Also, when there are multiple forms of a test, it is always possible that one form will be slightly more difficult than the rest. The entire group receiving the more difficult form will have a score that is an underestimate of their true score for that particular construct. Likewise, an entire group may have an easier form and their results will be an overestimate of their true ability (Haladyna & Downing, 2004).

The other type of error that occurs systematically is specific to individuals. Perhaps the most common source of CIV is reading comprehension (Haladyna & Downing, 2004). This occurs when a student's ability to read the question affects the answer. For example, a student may know the answer to a question about the solar system but, because of the vocabulary in the question, be unable to answer. This is especially problematic for LEP students (Abedi, 2004). If the student would score higher on an identical form of the test written in their native language, CIV is affecting their results.

Understanding CIV is important for our study because of the implications for FCAT results. When interpreting FCAT scores, it is important to consider all potential elements measured in the raw score. Every student has different innate abilities,


motivations, and distractions measured in their results. It is important to acknowledge and minimize error, thereby increasing validity.

Test Preparation

Test preparation is recommended by assessment specialists. Preparation influences the error variance in test results. Sound preparation includes providing examples of different item formats, motivating students, teaching students to use time effectively, making educated guesses, and so on. Students that are properly prepared for a test will do better than students without preparation. However, it is possible to prepare students too much for an exam. The only way to prevent CIV is for each district, school, and educator to uniformly prepare their students according to the guidelines provided to them in the testing manuals (Haladyna & Downing, 2004).

Beyond uniformly preparing students, Haladyna and Downing (2004) discuss the ethical issues which arise from high-stakes test preparation. They address specific issues including curriculum developed on the basis of test content as opposed to the content standards established by the state, providing students with similar or identical items, or anything that may narrow the intended curriculum. High-stakes tests like the FCAT are designed to draw a representative sample from a larger domain and assess it. Students should be taught all of the domain (or content standards) and not overly exposed to information that is more likely to be on the FCAT.

If the construct is an ability (rather than a domain of knowledge), different problems may occur. The FCAT Writing is a writing assessment administered each year to students in grades 4, 8, and 10. If students are taught to write in accordance with the FCAT Writing rubric and are not exposed to other styles of writing, it would be an


example of construct-irrelevant easiness (Haladyna, 2004). The score from the writing assessment will give an inflated estimate of the student's writing ability.

Sources of Unreliability for the No Child Left Behind Accountability Designs

In the previous sections, possible threats to the validity of specific assessments, high-stakes testing in general, and accountability designs as a whole were discussed. For a viable degree of validity to exist, some reliability (or consistency) must be present. Reliability is most commonly examined as a property of an assessment and not for an entire accountability design (Hill & DePascale, 2003).

The NCLB act requires each subgroup within a school to make AYP. Twelve states have established a cut-off group size that they deem reliable. The cut-offs for those 12 states range from 10 to 75 students, with a median of 30. Florida requires a minimum of 30 students in a subgroup to be counted (FDOE, 2003). The cut-offs are in place to ensure the results collected yield reliable information about a subgroup. For example, if three Native-American students attend one school, it is impossible to get any reliable information from their test results. A general reliability rule is that the more information (test results), the higher the reliability. NCLB requires that all subgroups make AYP. One subgroup can cause an entire school to fail, reinforcing the need to ensure the reliability (and validity) of the accountability design.

The recommended number of students required for each subgroup is much higher than in practice. Hill and DePascale (2003) suggest that roughly 300 students would be adequate. This would encompass very few subgroups, greatly diminishing the validity of the accountability design. To reiterate, for the results to be reliable, the number of students needed would be far larger than most subgroups. Only testing larger subgroups


diminishes the validity of the accountability design and negates the entire purpose of NCLB.

Positive Consequences of High-Stakes Testing

The general sentiment towards high-stakes testing in measurement literature is unenthusiastic, but there are positive effects of testing. Our study examined intended, unintended, positive, and negative consequences of testing and their effects on teachers. Positive consequences of accountability examined in our study are outlined below. Cizek (2001) described the following 10 consequences in an article about unintended consequences.

Professional Development-Professional development for educators has been "spotty, hit or miss, of questionable research base, of dubious effectiveness, and thoroughly avoidable" in the past and sometimes at present. However, professional development is becoming increasingly better over time. The new accountability policies and "Principles of High-Quality Professional Development" established by the Department of Education are ensuring teachers are constantly gaining new knowledge and expertise in their subject areas.

Accommodation-The new federal legislation requires that all students be tested. All students must be assessed and accommodated. Extra attention is given to students who need it and much focus has been brought to students who may have been overlooked in the past. Cizek mentions a research study where disadvantaged students, who had some history of failing, reported that their teachers began to focus more attention on them after the high-stakes testing and accountability program was established.

Knowledge About Testing-The constant immersion in high-stakes testing has aided in educating teachers on test content, consequences, and construction. Teachers understand the entire practice of testing more now than ever. This can affect how well they write tests, grade exams, develop rubrics, and their assessment practices in general at the classroom level.
Collection of Information-School districts have become more conscientious about their data collection practices.

Use of Information-The accountability movement is in full swing, which means finding information about test scores, funding, spending, graduation rates, and the like is as easy to pull up over the internet as your favorite recipe. This information is all used to improve programs and allocate funds where needed.


Educational Options-In addition to traditional public schools, parents and students often have the option of charter schools, magnet schools, and home schooling.

Accountability Systems-Cizek argues that high-stakes tests are often the foundation for accountability systems and that accountability in its connotation today is because of high-stakes testing.

Educators' Intimacy with their Discipline-The idea behind this consequence is that educators chosen to be involved with content or test development will be immersed in discussion about the content and it will trickle down to the local level.

Quality of Tests-Tests today are "highly reliable, free from bias, relevant and age appropriate, higher order, tightly related to important and public goals, time and cost efficient, and yielding remarkably consistent decisions" according to Cizek (2001).

Increased Student Learning-The primary goal and intended consequence of high-stakes testing is to increase student learning. There is research that shows a positive relationship between the presence of high-stakes testing and student scores on the International Assessment of Educational Progress in Canada. In addition, there are other studies that show favorable results for high-stakes testing.

Collecting consequential evidence for validity is the primary focus of our study. Suggestions made by Cronbach (1988), Messick (1995), and Haertel (1999) for forming a validity argument will be followed throughout our study. Also, the survey instrument was developed and analyzed based on the information and validity cautions provided in the articles by Haladyna and Downing (2004), Abedi (2004), Hill and DePascale (2003), and Cizek (2001).


CHAPTER 3
METHODS

Respondents

The sample of 75 teachers came from the experimentally accessible population of 261 teachers employed in six elementary schools (School A: n = 30, School B: n = 50, School C: n = 55, School D: n = 33, School E: n = 52, School F: n = 41) from two school districts, one in central Florida and five in north-central Florida. The original protocol was to investigate six elementary schools from the same district. These schools were selected based on their accountability success (school grade in the A+ Plan) and AYP status from 2004. Permission to survey teachers was sought from schools receiving grades of A, B, C, or D. There were no F schools in this school district. A representative sample of schools was sought to compare the views of teachers from schools of varying success with Florida's accountability system.

The theoretical premise behind choosing schools receiving both high and low grades was to gain a lucid depiction of the consequences of testing at the classroom level and the opinions of teachers from dissimilar schools in relation to each other. For example, schools with a poor accountability record (low grades) may place more stress on teachers to improve their students' FCAT scores. Also, teachers from schools having no success with accountability may be more apathetic than teachers from schools with established success. Likewise, teachers from successful schools may be under constant pressure to improve or maintain high FCAT scores.


Five out of the six schools approved the protocol. The school that declined was replaced by a school with a similar accountability record (School C). However, that school is located in a different school district. Table 3-1 displays demographic and accountability information for each school (FDOE, 2005a). Seventy-five (28.7%) surveys were returned within the allotted timeframe [School A: n = 15 (50%), School B: n = 9 (18%), School C: n = 10 (18.2%), School D: n = 5 (15.2%), School E: n = 22 (42.4%), School F: n = 14 (34.2%)]. All teachers from kindergarten through fifth grade were asked to participate, including Exceptional Student Education (ESE), Gifted, English for Speakers of Other Languages (ESOL), Physical Education (P.E.), Art, Music, and Speech. Administration was not asked to participate.

Table 3-1 School Demographic Information
School  Location    Grade 2005  Grade 2004  Grade 2003  Total Students  SES %  Minority %  AYP Status
A       N. central  A           A           B           219             54     31          Provisional
B       N. central  D           D           C           183             93     97          Not met
C       Central     A           A           A           396             39     37          Met
D       N. central  D                                   88              92     95          Not met
E       N. central  B           B           A           329             41     39          Provisional
F       N. central  D           C           B           177             87     84          Not met
*SES is based on the percentage of students eligible for free and reduced lunch

Materials

A survey instrument was developed to determine teacher opinions of accountability at the national and state levels, mainly the effects of the new laws on themselves and their students. In addition to measuring opinions held by teachers, this survey was developed to address certain validity concerns that are influenced by teachers in terms of gathering


consequential evidence for a validity argument, such as teaching to the test (i.e., teachers will be asked to what extent they stress material that is likely to show up on the FCAT).

The survey has three parts and consists of three questions and 34 statements with a corresponding 5-point Likert scale. The scale runs from "strongly disagree" to "strongly agree" and contains a neutral point. The first section comprises statements (items 1-11) about the NCLB act, Florida's A+ Plan, and general items about Florida's accountability design. Part two ascertains opinions (items 12-34) pertaining to Florida's accountability design on a less macro level, particularly the SSS and the FCAT. Teachers have a more intimate relationship with the SSS and the FCAT, so more items (and of greater detail) were included in this section. The third section contains three open-ended questions inquiring about professional information about the participant (the Appendix shows the entire survey).

Procedure

Once permission was given by each school, the surveys were hand-delivered, along with an invitation to participate and a self-addressed stamped envelope for each teacher. Surveys were color-coded by school for identification purposes. Packets containing the above-mentioned items were placed in teacher mailboxes by school personnel for teachers to examine at their leisure. Teachers were given written instructions to return the surveys within a specified timeframe, approximately 2 weeks on average for each school. The length of time it took to gain permission from schools varied extensively, causing the packets to be delivered on different days between the months of August and October in 2005.

Analysis Approach

The design was based on establishing five independent variables, or five separate areas of accountability that are in the realm of a teacher's expertise. The five branches of


interest are (corresponding new variable labels are in parentheses): (1) the No Child Left Behind Act of 2001 (SUMNCLB), (2) Florida's A+ Plan (SUMAPLAN), (3) the Sunshine State Standards (SUMSSS), (4) the Florida Comprehensive Assessment Test (SUMFCAT), and (5) the subsections of the FCAT (SUMFCATSECT). Each variable was created by summing the responses of like items on the survey instrument. Grouping different items to formulate new variables secures a more reliable measure of an overall attitude towards a specific subject. In addition, a summated score for each participant was calculated and used in the analysis. The summated score (AVERAGE) was used as a comprehensive measure for each individual's stance on accountability and derived from items that specifically addressed an attitude. Cronbach's alpha was computed for each new variable as a measure of reliability (Table 3-2).

Table 3-2 Reliability Statistics for New Variables
New Variable   Statements*        N   Cronbach's alpha
SUMNCLB        2, 3, 4            72  .903
SUMAPLAN       5, 6, 8            68  .729
SUMSSS         12, 13, 14         70  .743
SUMFCAT        15, 16, 21         69  .826
SUMFCATSECT    31-34              60  .912
AVERAGE        2-6, 8, 12-16, 21  63  .848
*Statements found in Appendix

The primary function of the survey and purpose of our study was to uncover teacher opinions at definite levels of accountability. Teachers were surveyed in hopes of them lending a unique perspective on the validity of Florida's educational accountability design and the consequences of high-stakes testing. A secondary focus of our study was to examine differences within the sample and uncover factors contributing to the beliefs held by each teacher. In theory, teachers from different schools (i.e., instructing diverse subpopulations of students) should have very different experiences with the practices


measured in the survey. The aim of looking at teachers as subpopulations was to uncover variations in opinions that can be directly influenced by the working/teaching environment. The rationale for subdividing teachers by school was to gain an understanding of how the consequences of high-stakes testing affect teachers, from dissimilar schools, in varying respects.

Some variables used in our study occur naturally as a function of sampling or the demographic information provided by the teachers. The variables of interest are SCHOOL (school where the respondent teaches), GRADE (school grade in the A+ Plan), YEAR (the number of years the respondent has been teaching), INSIGHT (item 6: the grade attached to each school gives parents insight into how well that school is operating), and IMPACT (item 9: I have an impact on the grade my school receives). SCHOOL and GRADE were analyzed as categorical variables on a nominal scale. YEAR, IMPACT, and INSIGHT are quantitative variables on an interval scale.

Research Question 1: Teachers' opinions of accountability will be significantly different at each of the four areas of accountability: (1) NCLB, (2) A+ Plan, (3) SSS, (4) FCAT

Using the variables created from the existing data set, analyses were performed to check the overall attitudes towards each accountability branch by the entire sample of teachers. SPSS was used to run six non-directional pair-wise dependent samples t-tests to test this hypothesis. A Bonferroni adjustment will be made to control for the family-wise Type 1 error rate (alpha = .05/6). Rejection of the null hypothesis for an individual t-test indicates that there is a statistically significant difference between the two areas of accountability.
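The Research Question 1 procedure (six pairwise dependent-samples t-tests at a Bonferroni-adjusted alpha) can be sketched in a few lines. The scores below are invented stand-ins for the study's summated variables, and SciPy's `ttest_rel` stands in for the SPSS routine:

```python
# Sketch of the RQ1 analysis: pairwise dependent-samples t-tests with a
# Bonferroni adjustment. The per-teacher scores are invented placeholders,
# not the study's data.
from itertools import combinations
from scipy import stats

scores = {
    "SUMNCLB":  [2.3, 2.7, 1.7, 3.0, 2.3, 2.0, 3.3, 2.7],
    "SUMAPLAN": [2.7, 2.3, 2.0, 3.3, 2.7, 2.3, 3.0, 2.3],
    "SUMSSS":   [3.7, 4.0, 3.3, 4.3, 3.7, 3.3, 4.0, 3.7],
    "SUMFCAT":  [2.7, 3.0, 2.3, 3.3, 2.7, 2.3, 3.0, 2.7],
}

pairs = list(combinations(scores, 2))   # six pairwise comparisons
alpha = 0.05 / len(pairs)               # Bonferroni: .05 / 6

for a, b in pairs:
    t, p = stats.ttest_rel(scores[a], scores[b])  # dependent-samples t-test
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: t = {t:.3f}, p = {p:.4f} ({verdict})")
```

With these placeholder numbers, only the comparisons involving SUMSSS come out significant, mirroring the pattern the study reports.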


Research Question 2: GRADE will be a contributing factor to the responses on IMPACT, INSIGHT, SUMNCLB, SUMAPLAN, SUMSSS, and SUMFCAT

Planned complex contrasts were performed to check for mean differences, where schools with a grade of A and B will be contrasted to schools with a grade of D on each of the six variables of interest. IMPACT and INSIGHT were selected based on the empirical know-how that teachers from underachieving schools would have different opinions on items that specifically address their school grade (i.e., teachers with low-scoring students are less likely to attribute their students' and schools' failures to themselves). Mean differences are of interest for SUMNCLB, SUMAPLAN, SUMSSS, and SUMFCAT because they build on the first research question by breaking down opinions of each area of accountability across teachers by grade. A Bonferroni adjustment was made to control for the family-wise Type 1 error rate.

Research Question 3: YEAR will have a linear relationship with AVERAGE

A simple linear regression was conducted to test whether the two variables have a linear relationship. If the simple model is accepted, YEAR can be used, in part, as a predictor for overall teacher attitudes.

Research Question 4: Teachers will rate the subsections of the FCAT statistically higher than they rate the FCAT as a whole

A directional pair-wise dependent samples t-test will be performed to test this research question. Rejection of the null hypothesis will indicate that teachers rate the subsections (i.e., mathematics, science, reading, and writing) higher than the FCAT in its entirety, in terms of being an adequate measure of a student's level of mastery. The Type 1 error rate will be set at alpha = .05. This test is being conducted to verify teacher attitudes towards the FCAT. In theory, teachers could rate the FCAT as an indicator of a student's level of mastery lower than if it was broken down into subsections.
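The simple linear regression proposed for Research Question 3 is an ordinary least-squares fit of AVERAGE on YEAR. A minimal standard-library sketch, with invented placeholder values rather than the study's data:

```python
# Sketch of the RQ3 simple linear regression: does YEAR (years of teaching)
# predict AVERAGE (overall accountability attitude)? The data points below
# are illustrative placeholders only.
from statistics import mean

year    = [2, 5, 7, 10, 14, 18, 22, 30]             # years of experience
average = [3.1, 2.9, 3.0, 2.8, 2.7, 2.9, 2.6, 2.5]  # summated attitude score

x_bar, y_bar = mean(year), mean(average)
sxx = sum((x - x_bar) ** 2 for x in year)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(year, average))

slope = sxy / sxx                   # least-squares slope
intercept = y_bar - slope * x_bar   # the fitted line passes through the means

print(f"AVERAGE = {intercept:.3f} + ({slope:.4f}) * YEAR")
```

In practice the fit would be accompanied by a significance test on the slope; this sketch only shows how the coefficients themselves are obtained.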


CHAPTER 4
RESULTS

The descriptive statistics of the measures (i.e., statements on the Likert scale) included in the overall sample are shown in Table 4-1. The mean score is the average of all responses for a particular item in terms of the scale of the item response. For example, because the responses are on a 5-point Likert scale, an average response of 1.58 suggests that on average the responses fell somewhere between "1-strongly disagree" and "2-disagree". A mean of 3.1 indicates the response fell slightly above "3-neutral". It is widely accepted and often recommended to analyze this data as if it were interval (i.e., if the Likert scale has at least five points it can be considered continuous), though technically it is ordinal data.

Table 4-1 Descriptive Statistics for Survey Statements
Statements  N  Mean  SD
In general, the Florida accountability system works well.  73  2.73  1.00
Goals set forth by the NCLB Act will most likely be actualized.  73  2.21  1.01
The NCLB Act has an overall positive impact on the United States.  73  2.62  1.05
The NCLB Act has an overall positive impact on Florida.  74  2.57  1.07
The A+ Plan holds schools accountable for their students' learning.  70  3.09  1.10
The grade attached to each school helps give parents insight on how well that school is operating.  74  2.15  1.12


Table 4-1 (continued)
Statements  N  Mean  SD
Students from low performing schools should be able to transfer to another school.  74  2.97  1.09
The A+ Plan helps motivate teachers and administrators.  71  2.41  1.33
I have an impact on the grade my school receives.  74  3.87  0.91
Administrators have an impact on the grade their school receives.  73  3.86  0.89
The student body has an impact on the grade their school receives.  73  4.22  0.95
The SSS adequately outlines the curriculum content at each grade level.  72  3.76  0.86
The SSS lay the foundation for a broad curriculum.  72  3.67  0.96
All the SSS will be taught at one point or another.  70  3.63  0.97
The FCAT measures the SSS well.  69  2.75  0.96
The FCAT assesses the most important material at each grade level.  69  2.59  1.02
Florida is enacting the NCLB Act appropriately with the FCAT.  70  2.47  1.02
The high stakes attached to the FCAT are necessary.  74  1.99  1.17
The FCAT would still be taken seriously if there weren't consequences for students.  73  3.03  1.01
The FCAT would still be taken seriously if there weren't rewards for teachers.  72  3.64  1.03
The FCAT is a good indicator of the student's level of mastery for required curriculum.  72  2.68  0.96


Table 4-1 (continued)
Statements  N  Mean  SD
I have control over how my students perform on the FCAT.  68  2.87  0.99
I spend extra time in class stressing material that is likely to show up on the FCAT.  67  3.96  0.99
I spend more time going over test taking skills now than before the FCAT was established.  65  3.83  1.11
The FCAT has an overall positive impact on Florida.  74  2.35  1.07
The FCAT has an overall positive impact on my school.  73  2.44  1.12
The FCAT has an overall positive impact on my students.  70  2.24  1.12
Item types (e.g. multiple choice) used on the FCAT are the most appropriate type for each learning objective.  69  2.84  1.07
Prompts used in the writing section of the FCAT are adequate for measuring the student's overall writing ability.  67  2.76  1.07
Other subjects (e.g. social studies, art) should be included in the FCAT.  70  2.36  1.35
The FCAT measures the most important concepts in Reading.  64  3.56  0.94
The FCAT measures the most important concepts in Writing.  63  3.22  0.99
The FCAT measures the most important concepts in Mathematics.  64  3.42  0.94
The FCAT measures the most important concepts in Science.  60  3.05  0.96
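The per-item statistics in Table 4-1, and the valid-percent agreement figures reported next, can be derived from raw 5-point responses as in this sketch. The responses are invented for illustration, and treating 4-5 as agreement and 1-2 as disagreement is an assumption about how the scale was collapsed:

```python
# Sketch of deriving item-level Likert statistics; None marks a missing
# response, and the response values themselves are made up.
from statistics import mean, stdev

responses = [4, 5, 2, None, 3, 4, 1, 4, None, 5, 3, 4]

valid = [r for r in responses if r is not None]  # drop missing data

n = len(valid)
item_mean = mean(valid)
item_sd = stdev(valid)

# "Valid percent": percentages are out of those who actually responded.
pct_agree = 100 * sum(r >= 4 for r in valid) / n     # 4 = agree, 5 = strongly agree
pct_disagree = 100 * sum(r <= 2 for r in valid) / n  # 1 = strongly disagree, 2 = disagree

print(f"N = {n}, Mean = {item_mean:.2f}, SD = {item_sd:.2f}")
print(f"% Agree = {pct_agree:.1f}, % Disagree = {pct_disagree:.1f}")
```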


A Likert scale gives the option of using a neutral point. The neutral point allows respondents who are apathetic towards the topic an opportunity to answer fairly. Unfortunately, using a neutral point can cause the survey results to sway towards the middle, and the results often appear to be insignificant. Part of our study intended to uncover indifferent attitudes towards certain aspects of educational accountability. For this purpose a neutral point was used, and the percent agreement and disagreement for each statement are shown in Table 4-2. The percentage reported is the valid percent, which does not take into account missing data (i.e., the percent is out of the people who responded to that particular item).

Table 4-2 Percent Agreement Statistics for Survey Statements
Statement  % Agree  % Disagree
In general, the Florida accountability system works well.  17.8  39.8
Goals set forth by the NCLB Act will most likely be actualized.  9.5  64.4
The NCLB Act has an overall positive impact on the United States.  17.8  52.1
The NCLB Act has an overall positive impact on Florida.  17.6  54.0
The A+ Plan holds schools accountable for their students' learning.  32.8  27.2
The grade attached to each school helps give parents insight on how well that school is operating.  12.2  67.6
Students from low performing schools should be able to transfer to another school.  29.8  25.7
The A+ Plan helps motivate teachers and administrators.  24.0  57.7


Table 4-2 (continued)
Statement  % Agree  % Disagree
I have an impact on the grade my school receives.  67.5  8.1
Administrators have an impact on the grade their school receives.  72.6  9.6
The student body has an impact on the grade their school receives.  79.4  5.5
The SSS adequately outlines the curriculum content at each grade level.  66.7  7.0
The SSS lay the foundation for a broad curriculum.  62.5  11.1
All the SSS will be taught at one point or another.  57.2  10.0
The FCAT measures the SSS well.  21.7  44.9
The FCAT assesses the most important material at each grade level.  17.3  49.2
Florida is enacting the NCLB Act appropriately with the FCAT.  17.1  52.9
The high stakes attached to the FCAT are necessary.  12.2  70.3
The FCAT would still be taken seriously if there weren't consequences for students.  32.9  27.4
The FCAT would still be taken seriously if there weren't rewards for teachers.  62.5  11.2
The FCAT is a good indicator of the student's level of mastery for required curriculum.  19.5  44.4
I have control over how my students perform on the FCAT.  22.1  32.3
I spend extra time in class stressing material that is likely to show up on the FCAT.  77.6  11.9

PAGE 35

Table 4-2 (continued)

Statement                                                            % Agree  % Disagree
I spend more time going over test taking skills now than before
  the FCAT was established.                                             67.7        12.3
The FCAT has an overall positive impact on Florida.                     13.5        55.4
The FCAT has an overall positive impact on my school.                   17.8        54.8
The FCAT has an overall positive impact on my students.                 12.9        61.4
Item types (e.g., multiple choice) used on the FCAT are the most
  appropriate type for each learning objective.                         21.7        31.8
Prompts used in the writing section of the FCAT are adequate for
  measuring the student's overall writing ability.                      23.9        37.3
Other subjects (e.g., social studies, art) should be included in
  the FCAT.                                                             20.0        55.7
The FCAT measures the most important concepts in Reading.               50.0         7.8
The FCAT measures the most important concepts in Writing.               34.9        14.2
The FCAT measures the most important concepts in Mathematics.           45.3        12.5
The FCAT measures the most important concepts in Science.               25.0        21.7

Research Question 1

Teachers' opinions of accountability will differ significantly across the four areas of accountability: (1) NCLB, (2) A+ Plan, (3) SSS, and (4) FCAT. A series of six non-directional dependent-samples t-tests were performed to test for significant
differences at each level of accountability. The means, standard deviations, and sample sizes for the variables of interest are shown in Table 4-3. A Bonferroni adjustment was made to control the family-wise Type I error rate. The mean difference for SUMNCLB and SUMAPLAN was not statistically significant, t(65) = -.205, p = .803. The mean difference for SUMNCLB and SUMSSS was statistically significant, t(69) = -9.399, p < .001. The mean difference for SUMNCLB and SUMFCAT was not statistically significant, t(68) = -1.696, p = .095. The mean difference for SUMAPLAN and SUMSSS was statistically significant, t(63) = -8.616, p < .001. The mean difference for SUMAPLAN and SUMFCAT was not statistically significant, t(63) = -1.127, p = .264. The mean difference for SUMSSS and SUMFCAT was statistically significant, t(67) = 8.417, p < .001. SUMSSS differed significantly from every other group, indicating that teachers rate the SSS differently than the other areas of accountability. All other group differences were not statistically significant.

Table 4-3 Descriptive Statistics for New Variables

Variable      N   Mean    SD
SUMNCLB      72   2.48  0.96
SUMAPLAN     68   2.55  0.96
SUMSSS       70   3.69  0.76
SUMFCAT      69   2.68  0.85
SUMFCATSECT  60   3.30  0.86
AVERAGE      63   2.88  0.66

Research Question 2

GRADE will be a contributing factor to the responses on INSIGHT, IMPACT, SUMNCLB, SUMAPLAN, SUMSSS, and SUMFCAT. Planned complex contrasts were performed to test this hypothesis. The Bonferroni test was used for testing the statistical significance of the simple effects. Schools that received a grade of A or B were combined
and contrasted with schools that received a grade of D. The Bonferroni technique requires the family-wise alpha (.05) to be divided by the number of contrasts (six). The means, standard deviations, and sample sizes for each variable, broken down by school grade, are shown in Table 4-4. The contrast of A and B schools with D schools was statistically significant for INSIGHT, t(57.92) = 3.716, p < .001. The contrast of A and B schools with D schools was statistically significant for IMPACT, t(52.35) = 2.985, p = .004. The contrast of A and B schools with D schools was not statistically significant for SUMNCLB, t(69) = -0.923, p = .359. The contrast of A and B schools with D schools was statistically significant for SUMAPLAN, t(45.56) = 2.830, p = .007. The contrast of A and B schools with D schools was not statistically significant for SUMSSS, t(67) = 0.661, p = .511. The contrast of A and B schools with D schools was not statistically significant for SUMFCAT, t(61.67) = 0.215, p = .830. When appropriate, equal variances were not assumed, based on Levene's test for homogeneity of variances.

Table 4-4 Descriptive Statistics for New Variables by GRADE

              A Schools           B Schools           D Schools
Variable    N   Mean    SD      N   Mean    SD      N   Mean    SD
INSIGHT    25   2.92  1.15     21   1.95  0.74     28   1.61  0.92
IMPACT     25   4.36  0.64     21   3.81  0.98     28   3.46  0.88
SUMNCLB    24   2.74  1.04     21   2.03  0.56     27   2.59  1.03
SUMAPLAN   23   3.30  0.87     18   2.17  0.50     27   2.16  0.89
SUMSSS     23   3.83  1.00     20   3.63  0.51     27   3.61  0.70
SUMFCAT    23   2.91  0.97     19   2.46  0.80     27   2.64  0.75

Research Question 3

YEAR will have a linear relationship with AVERAGE. A simple regression analysis was conducted to examine the degree of association between the outcome variable AVERAGE and the explanatory variable YEAR. The simple model yielded an R² of .082 and was statistically significant, F(1, 57) = 5.076, p = .028, suggesting that
the number of years teaching (YEAR) accounts for 8.2% of the variance in an individual's summated score on the accountability survey (AVERAGE). The adjusted R² for the reduced model was .066. Table 4-5 reports the unstandardized regression coefficients (b), the standardized regression coefficients (β), the observed t statistics, and the squared semi-partial correlations (r²).

The interpretation of the unstandardized regression coefficient for any explanatory variable is a function of the scale of measurement of that variable. The interpretation of the regression coefficient for a continuous variable can be made in terms of rate and direction of change. The regression coefficient indicates the expected unit change in the outcome variable for each unit change in any explanatory variable, while holding the others constant. For example, YEAR is a continuous variable with an unstandardized regression coefficient of b = .014. This suggests that each unit increase in YEAR (i.e., number of years teaching) results in an average .014-unit increase in AVERAGE (i.e., a more positive view of modern accountability). Even though an increase of .014 in AVERAGE is statistically significant, it may not be practically significant.

Table 4-5 Summary Statistics for Simple Regression

Variable       b   Std Error     β        t       p     r²
Intercept  2.635        .124          21.332  < .001
Year        .014        .006  .286     2.253    .028   .082

Research Question 4

Teachers will rate the subsections of the FCAT statistically higher than the FCAT as a whole. A directional dependent-samples t-test was performed to check for mean differences between SUMFCAT and SUMFCATSECT. The means, standard deviations, and sample sizes of interest are shown in Table 4-3. The difference in the mean response
for SUMFCAT and SUMFCATSECT was statistically significant, t(59) = -7.670, p < .001, so the null hypothesis was rejected. SUMFCATSECT was rated significantly higher than SUMFCAT.

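The dependent-samples t-tests with a Bonferroni adjustment used for Research Questions 1 and 4 can be sketched as follows. The data below are simulated for illustration only; the scale names follow the thesis, but the values are not the study's data, and scipy's ttest_rel stands in for the paired-test procedure the analysis describes.

```python
# Sketch of the paired-comparison analysis: six dependent-samples t-tests
# across the four summated attitude scales, with a Bonferroni-adjusted alpha.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 70  # roughly the per-scale sample sizes reported in Table 4-3

# Simulated summated attitude scores on a 1-5 scale, one row per teacher.
scores = {
    "SUMNCLB": rng.normal(2.5, 0.9, n).clip(1, 5),
    "SUMAPLAN": rng.normal(2.5, 0.9, n).clip(1, 5),
    "SUMSSS": rng.normal(3.7, 0.8, n).clip(1, 5),
    "SUMFCAT": rng.normal(2.7, 0.9, n).clip(1, 5),
}

pairs = list(combinations(scores, 2))  # the six pairwise comparisons
alpha = 0.05 / len(pairs)              # Bonferroni: family-wise .05 over 6 tests

for a, b in pairs:
    t, p = stats.ttest_rel(scores[a], scores[b])  # paired (dependent) t-test
    flag = "significant" if p < alpha else "ns"
    print(f"{a} vs {b}: t = {t:.3f}, p = {p:.3f} ({flag})")
```

With simulated means mirroring Table 4-3, only the comparisons involving SUMSSS clear the adjusted threshold, matching the pattern the analysis reports.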
CHAPTER 5
DISCUSSION

Discussion of Findings

Research Question 1

The average teacher responses for each of the four major areas of accountability (NCLB, A+ Plan, SSS, and FCAT) were compared to one another. The SSS were rated higher than every other area of the accountability design. All other comparisons were not statistically significant, indicating that teachers on average rate NCLB, the A+ Plan, and the FCAT approximately the same. Teachers may have rated the SSS higher than the others because the SSS are the only accountability topic without direct consequences attached. When teachers lend their opinion, it is probably difficult to separate the area they are assessing from the consequences attached to it. In other words, responses on the quality of the FCAT as a measurement instrument include the negative feelings toward the consequences. Teachers rate it as a "bad test" independent of the quality of the test because of the negative consequences associated with it. In addition, teachers work with the SSS more closely and most likely know more about the SSS than the other areas examined in our study. Teachers appear to have a firmer grasp on the SSS, rendering their depiction of the SSS more accurate than of the areas they have been exposed to less.

Research Question 2

Schools with a grade of A and B were combined and compared to all the D schools on each area of accountability (NCLB, A+ Plan, SSS, and FCAT), in addition to IMPACT (I have an impact on the grade my school receives) and INSIGHT (the grade attached to
each school gives parents insight into how well that school is operating). The purpose of this hypothesis was to test whether teachers from successful schools differed from those at unsuccessful schools in their opinion of the broad accountability areas, in addition to the two survey items that tapped into how much control a teacher felt they had over their school's grade. We also examined whether teachers felt the grade accurately depicted their school. The contrasts were statistically significant for INSIGHT, IMPACT, and SUMAPLAN. That is, teachers from schools with differing grades rated the variables associated with school grades differently, but not the other variables associated with different aspects of accountability. Teachers from schools with a low grade rate their ability to impact their school's grade lower than teachers from a higher-achieving school. Teachers from D schools responded less favorably than the others on the ability of the school grading system to let the public know how well a school is actually operating. Underachieving schools also rated the A+ Plan lower than the A and B schools combined, probably because the A+ Plan has a more negative effect on them. This is similar to the previous research question, where teacher responses on the quality of the school grading system are affected by the consequences attached. Intuitively, underachieving schools face more negative consequences and thus hold a more negative attitude toward the A+ Plan. All the other contrasts were not statistically significant.

Research Question 3

Teachers were asked how many years they have been teaching. These data were used to test whether there was a linear relationship between years as a teacher and overall attitude toward accountability. There was a small, but statistically significant, relationship. The model shows that with each unit increase in years teaching there is a .014-unit increase in overall attitude toward accountability. To illustrate, after 20 years of
teaching, there is only a .28 increase (on a 5-point Likert scale) in overall attitude. This is not practically significant, in that very small and meaningless changes are observed until the difference in time teaching is very large. However, this relationship may have been statistically significant because specific items yielded different responses across teachers with more or less experience. This could be an indicator of awareness by teachers and a thorough understanding of accountability. Perhaps teachers who have been teaching longer understand accountability better and are able to rate it more accurately.

Research Question 4

Opinions held by teachers on the FCAT as a whole were compared to opinions on the subsections of the FCAT. Teachers rated the FCAT lower than the sections that comprise it. One possible reason for this is that teachers responded about the consequences of the FCAT instead of the quality of the instrument, whereas when responding to individual subject areas they are able to focus on just the quality. In other words, there are no direct consequences attached to the FCAT subsections, and teachers may be able to answer more fairly. Another possibility for this discrepancy is that teachers do not really know whether or not these are good testing instruments. In lay terms, most teachers rated the FCAT as "bad" and the subsections as "indifferent." It was not that they praised the FCAT subsections; their opinions were neutral. This could be because they did not know if each section was a good indicator of a student's level of mastery in a subject area.

Implications of the Descriptive Statistics

In addition to the validity implications of the four research questions addressed in our study, the descriptive statistics contribute greatly to the validity argument. In general, evidence for consequential validity of Florida's use of the FCAT collected during our study is not favorable. Results show that only 21.7% of teachers surveyed agree that "the
FCAT measures the SSS well," 19.5% of teachers agree that "the FCAT is a good indicator of the student's level of mastery for required curriculum," and 12.9% of teachers agree that "the FCAT has a positive impact on my students."

Teachers were asked to make an evaluative judgment of the quality of the FCAT. As previously stated, results were not favorable. The importance may not be how the teachers rated the FCAT, but why they rated it that way. The approach taken throughout our study was to use teachers as a tool to gauge the validity of the FCAT's use, hence contributing part of a validity argument for Florida's entire accountability design. The overall sentiment toward every part of accountability measured in our study with consequences directly attached was disapproving or, at best, indifferent. It seems likely that the consequences of the FCAT directly contribute to a teacher's evaluation of the quality of the FCAT as a measurement tool.

The results show discrepancies in FCAT ratings. In contrast to the 19.5% of teachers who agreed that the FCAT was a "good indicator of the student's level of mastery for required curriculum," 50%, 34.9%, 45.3%, and 25% of teachers agreed that the FCAT measured the most important concepts in reading, writing, mathematics, and science, respectively. This discrepancy could be a result of the text in the items or the propensity of teachers to underrate the quality of the FCAT based on past experiences. Teachers may have negative feelings toward the assessment because they disagree with the consequences attached to it (70.3% of teachers disagreed with "the high stakes attached to the FCAT are necessary"). Further analysis is needed to draw conclusions on this topic.

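The "valid percent" figures reported in Table 4-2 and discussed above are computed only over respondents who actually answered a given item. A minimal sketch of that computation, using invented responses rather than the study's data, with 4-5 counted as agreement and 1-2 as disagreement on the 5-point scale:

```python
# Hypothetical Likert responses (1-5); None marks a missing response.
# "Valid percent" excludes missing data: the denominator is only those
# respondents who answered the item.
responses = [4, 5, 2, None, 3, 1, 4, None, 2, 5]

def valid_percents(responses):
    answered = [r for r in responses if r is not None]
    n = len(answered)
    agree = sum(1 for r in answered if r >= 4)     # "agree" / "strongly agree"
    disagree = sum(1 for r in answered if r <= 2)  # "disagree" / "strongly disagree"
    # Neutral (3) responses count toward neither percentage.
    return 100 * agree / n, 100 * disagree / n

pct_agree, pct_disagree = valid_percents(responses)
print(f"% agree = {pct_agree:.1f}, % disagree = {pct_disagree:.1f}")
```

Note that, because neutral responses fall into neither column, the agree and disagree percentages for an item need not sum to 100, as in Table 4-2.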
Limitations of this Analysis

Response Rate

The response rate was only 28%, usually considered unacceptable. People who respond to surveys have differing characteristics from those who do not return surveys (i.e., respondents tend to be educated and female). Our study's sample was homogeneous (all elementary school teachers, educated, primarily female), perhaps helping to reduce the error (i.e., error may be less than if the sample were more heterogeneous). However, it is possible that the members of the sample who opted not to participate are more apathetic toward accountability issues. The reasons for declining participation are unknown, contributing significantly to the limitations of this analysis.

The response rate also varied across schools, ranging from 15.2% to 50%. Only five surveys were returned by one of the schools, yielding an unreliable representation of that school. An attempt was made to correct for this error by analyzing similar schools with like responses.

Sampling Issues

One of the schools was from a different district. The implications are mixed. Some results were generalizable across districts. However, the school in central Florida differed on many items from a demographically similar school in north central Florida. Reasons for these deviations are not known. This could be an indicator of the pressures placed on teachers directly from principals or district officials.

Suggestions for Future Research

As mentioned, the issues discussed in our study did not generalize entirely across districts. Further samples from districts and schools across Florida need to be drawn in order to gather more information on the consequences of high-stakes testing. Another
possible angle for future research is examining the factors that can contribute to teacher attitudes on accountability. Our study examined the role of length of time as a teacher and school grade for each respondent on accountability opinions. Additional studies should delve deeper into the demographics, motivations, and backgrounds of their participants. Notably, 77.6% of respondents indicated that they "spend extra time in class stressing material that is likely to show up on the FCAT." A closer look at what, according to teachers, constitutes "extra time," and the plausible implications that has for the validity of the assessment's interpretation, would make for an interesting study.

Closing Remarks

It is hoped that our study gives insight into some of the effects of high-stakes testing on teachers. The NCLB Act places more pressure on teachers than ever before in terms of student achievement. At times, it seems that teachers are being held too accountable for their students' learning. Many teachers expressed their frustrations in the margins of the survey, explaining that much of what they are held accountable for is beyond their control, and that factors like parental support and innate ability are more likely contributing to a child's success than their efforts. NCLB is still relatively new, and only time will tell the exact benefits and repercussions of the new accountability designs.
APPENDIX
ACCOUNTABILITY AND TEACHER ATTITUDES SURVEY INSTRUMENT

I. The following statements address your thoughts on the No Child Left Behind (NCLB) Act, Florida's A+ Plan, and Florida's accountability system in general. Please indicate your level of agreement with each statement.

                                                       strongly           strongly
                                                       disagree              agree
1. In general, the Florida accountability system
   works well......................................        1   2   3   4   5
2. Goals set forth by the NCLB Act will most
   likely be actualized............................        1   2   3   4   5
3. The NCLB Act has an overall positive impact
   on the United States............................        1   2   3   4   5
4. The NCLB Act has an overall positive impact
   on Florida......................................        1   2   3   4   5
5. The A+ Plan holds schools accountable for
   their students' learning........................        1   2   3   4   5
6. The grade attached to each school helps give
   parents insight on how well that school is
   operating.......................................        1   2   3   4   5
7. Students from low performing schools should
   be able to transfer to another school...........        1   2   3   4   5
8. The A+ Plan helps motivate teachers and
   administrators..................................        1   2   3   4   5
9. I have an impact on the grade my school
   receives........................................        1   2   3   4   5
10. Administrators have an impact on the grade
    their school receives..........................        1   2   3   4   5
11. The student body has an impact on the grade
    their school receives..........................        1   2   3   4   5
II. The following statements concern the Sunshine State Standards (SSS) and the Florida Comprehensive Assessment Test (FCAT). Please indicate your level of agreement with each statement.

                                                       strongly           strongly
                                                       disagree              agree
12. The SSS adequately outlines the curriculum
    content at each grade level....................        1   2   3   4   5
13. The SSS lay the foundation for a broad
    curriculum.....................................        1   2   3   4   5
14. All the SSS will be taught at one point or
    another........................................        1   2   3   4   5
15. The FCAT measures the SSS well.................        1   2   3   4   5
16. The FCAT assesses the most important material
    at each grade level............................        1   2   3   4   5
17. Florida is enacting the NCLB Act appropriately
    with the FCAT..................................        1   2   3   4   5
18. The high stakes attached to the FCAT are
    necessary......................................        1   2   3   4   5
19. The FCAT would still be taken seriously if
    there weren't consequences for students........        1   2   3   4   5
20. The FCAT would still be taken seriously if
    there weren't rewards for teachers.............        1   2   3   4   5
21. The FCAT is a good indicator of the student's
    level of mastery for required curriculum.......        1   2   3   4   5
22. I have control over how my students perform
    on the FCAT....................................        1   2   3   4   5
23. I spend extra time in class stressing material
    that is likely to show up on the FCAT..........        1   2   3   4   5
24. I spend more time going over test taking
    skills now than before the FCAT was
    established....................................        1   2   3   4   5
25. The FCAT has an overall positive impact on
    Florida........................................        1   2   3   4   5
26. The FCAT has an overall positive impact on
    my school......................................        1   2   3   4   5
27. The FCAT has an overall positive impact on
    my students....................................        1   2   3   4   5
                                                       strongly           strongly
                                                       disagree              agree
28. Item types (e.g., multiple choice) used on
    the FCAT are the most appropriate type for
    each learning objective........................        1   2   3   4   5
29. Prompts used in the writing section of the
    FCAT are adequate for measuring the student's
    overall writing ability........................        1   2   3   4   5
30. Other subjects (e.g., social studies, art)
    should be included in the FCAT.................        1   2   3   4   5

The FCAT measures the most important concepts in:
31. Reading........................................        1   2   3   4   5
32. Writing........................................        1   2   3   4   5
33. Mathematics....................................        1   2   3   4   5
34. Science........................................        1   2   3   4   5

III. In the following section please tell us about yourself.

What grade level do you teach? _______________________________

If applicable, what subject do you teach? _______________________________

In what school year did you begin teaching? _______________________________
LIST OF REFERENCES

Abedi, J. (2004). The No Child Left Behind Act and English language learners: Assessment and accountability issues. Educational Researcher, 33(1), 4-14.

Cizek, G. (2001). More unintended consequences of high-stakes testing. Educational Measurement: Issues and Practice, 20(4), 19-27.

Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. New York: Wadsworth.

Cronbach, L. (1988). Five perspectives on validity argument. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 3-17). Hillsdale, NJ: Erlbaum. (As cited by Haertel, E. (1999). Validity arguments for high-stakes testing: In search of the evidence. Educational Measurement: Issues and Practice, 18(4), 5-10.)

Florida Department of Education [FDOE]. (2003). Consolidated state application accountability workbook for state grants under Title IX, Part C, Sec. 9302 of the Elementary and Secondary Education Act (Pub. L. No. 107-110). March 26.

Florida Department of Education [FDOE]. (2005a). 2004-2005 school accountability report. Last retrieved March 2006. Available online at: http://schoolgrades.fldoe.org/

Florida Department of Education [FDOE]. (2005b). Fact sheet: NCLB and adequate yearly progress. Last retrieved March 2006. Available online at: http://www.fldoe.org/NCLB/FactSheet-AYP.pdf

Florida Department of Education [FDOE]. (2005c). FCAT web brochure. Last retrieved March 2006. Available online at: http://www.firn.edu/doe/sas/fcat/fcatpub1.htm

Florida Department of Education [FDOE]. (2005d). Grading Florida public schools 2004-2005. Last retrieved March 2006. Available online at: http://firn.edu/doe/schoolgrades/pdf/schoolgrades.pdf

Florida Department of Education [FDOE]. (2005e). Sunshine State Standards. Last retrieved March 2006. Available online at: http://www.firn.edu/doe/curric/prek12/index.html

Haertel, E. (1999). Validity arguments for high-stakes testing: In search of evidence. Educational Measurement: Issues and Practice, 18(4), 5-10.
Haladyna, T., & Downing, S. (2004). Construct-irrelevant variance in high-stakes testing. Educational Measurement: Issues and Practice, 23(1), 17-26.

Hill, R., & DePascale, C. (2003). Reliability and No Child Left Behind accountability designs. Educational Measurement: Issues and Practice, 22(3), 12-21.

Kohn, A. (2000). Burnt at the high stakes. Journal of Teacher Education, 51(4), 315-327.

Lane, S. (2004). Validity of high-stakes assessment: Are students engaged in complex thinking? Educational Measurement: Issues and Practice, 23(3), 6-14.

Lord, F., & Novick, M. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley. (As cited by Haladyna, T., & Downing, S. (2004). Construct-irrelevant variance in high-stakes testing. Educational Measurement: Issues and Practice, 23(1), 17-26.)

Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice, 14(4), 5-8.

No Child Left Behind Act of 2001, Public Law 107-110, § 115 Stat. 1425, 107th Congress (2002).

U.S. Department of Education [USDOE]. (2006a). Budget office, U.S. Department of Education. Last retrieved March 2006. Available online at: http://www.ed.gov/about/overview/budget/index.html?src=az

U.S. Department of Education [USDOE]. (2006b). No Child Left Behind. Last retrieved March 2006. Available online at: http://www.ed.gov/nclb/landing.jhtml
BIOGRAPHICAL SKETCH

Kathryn Miller received a Bachelor of Science degree in psychology from the University of Central Florida (Orlando) in 2003. She enjoyed the quantitative aspect of research psychology and decided to minor in statistics. After graduating, she enrolled as a graduate student at the University of Florida, majoring in research and evaluation methodology in the Department of Educational Psychology. While in graduate school she was fortunate to get the opportunity to be a graduate teaching assistant under Dr. David Miller for the course Assessment in General and Exceptional Education, where she instructed students on proper assessment procedures. After graduation, Kathryn hopes to relocate to Boston and work on a research team that investigates issues related to health and medicine.


Permanent Link: http://ufdc.ufl.edu/UFE0014404/00001

Material Information

Title: Accountability and teacher attitudes : consequential validity evidence for Florida's educational accountability system
Physical Description: Mixed Material
Language: English
Creator: Miller, Kathryn Elizabeth ( Dissertant )
Miller, M. David. ( Thesis advisor )
Seraphine, Anne ( Reviewer )
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2006
Copyright Date: 2006

Subjects

Subjects / Keywords: Educational Psychology thesis, M.A.E
Dissertations, Academic -- UF -- Educational Psychology

Notes

Abstract: Consequences that arise from the Florida Comprehensive Assessment Test (FCAT), in accordance with the No Child Left Behind Act of 2001 (NCLB), impact the overall validity of the accountability design established by Florida. In developing a validity argument for uses and interpretations of the FCAT, 75 teachers employed by six different schools responded to a survey ascertaining their opinions on NCLB, Florida's A+ Plan, the Sunshine State Standards, and the FCAT. Data were analyzed looking at the areas of accountability as separate entities and the attitudes towards each area. Data also were analyzed to examine differences across teachers, to uncover factors that may influence a teacher's view of accountability.
Subject: accountability, assessment, comprehensive, consequences, florida, test, validity
General Note: Title from title page of source document.
General Note: Document formatted into pages; contains 51 pages.
General Note: Includes vita.
Thesis: Thesis (M.A.E.)--University of Florida, 2006.
Bibliography: Includes bibliographical references.
General Note: Text (Electronic thesis) in PDF format.

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0014404:00001



This item has the following downloads:


Full Text












ACCOUNTABILITY AND TEACHER ATTITUDES: CONSEQUENTIAL VALIDITY
EVIDENCE FOR FLORIDA'S EDUCATIONAL ACCOUNTABILITY SYSTEM












By

KATHRYN ELIZABETH MILLER


A THESIS PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF ARTS IN EDUCATION
UNIVERSITY OF FLORIDA


2006

































Copyright 2006

by

Kathryn Elizabeth Miller




































To the LaFrance women















ACKNOWLEDGMENTS

I would like to thank my committee members (Dr. M. David Miller and Dr. Anne

Seraphine) for guiding me through each semester, fielding all my questions throughout

my graduate career. I would also like to thank my fellow students (Jann Macinnes, Jenny

Bergeron, and Janna Underhill) for all the study groups they pioneered. If not for their

complete dedication to academics, my experience would have been greatly diminished. I

would also like to thank Elaine Green and Linda Parsons for keeping me sane during my

teaching assistantships.

Many other people have offered support or motivation, or simply inspired me in

innumerable ways. Special thanks go to Joshua Marland, Crystal Calkins, Janna

Baumann, Samuel Hanna, Sally King, Mico Adorno, Laura McCoy, Amy Godfrey,

Allison Knowlton, Andrew Brunelle, Philip Moring, and Anthony Herman. I would also

like to thank my mother, Jacquie Hernandez, for her unconditional love and support. I

thank my brother, Steven Miller, and my sister, Melissa Miller. They make life fantastic

and challenge me at every turn. My aunt, Gisele Andrade, has always been a role model

of mine. Her strength is the motivating force in my life.
















TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

ABSTRACT

CHAPTER

1 INTRODUCTION

    National and State Accountability
        No Child Left Behind Act of 2001
        Adequate Yearly Progress as Determined in Florida
        Florida's A+ Plan
        Sunshine State Standards
        Florida Comprehensive Assessment Test
    Effects of Testing on Teachers

2 LITERATURE REVIEW

    Validity Argument
    Validity Issues in High-Stakes Testing
        Construct-Irrelevant Variance
        Test Preparation
        Sources of Unreliability for the No Child Left Behind Accountability Designs
    Positive Consequences of High-Stakes Testing

3 METHODS

    Respondents
    Materials
    Procedure
    Analysis Approach

4 RESULTS

    Research Question 1
    Research Question 2
    Research Question 3
    Research Question 4

5 DISCUSSION

    Discussion of Findings
        Research Question 1
        Research Question 2
        Research Question 3
        Research Question 4
    Implications of the Descriptive Statistics
    Limitations of this Analysis
        Response Rate
        Sampling Issues
    Suggestions for Future Research
    Closing Remarks

ACCOUNTABILITY AND TEACHER ATTITUDES SURVEY INSTRUMENT

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

Table

3-1 School Demographic Information

3-2 Reliability Statistics for New Variables

4-1 Descriptive Statistics for Survey Statements

4-2 Percent Agreement Statistics for Survey Statements

4-3 Descriptive Statistics for New Variables

4-4 Descriptive Statistics for New Variables by GRADE

4-5 Summary Statistics for Simple Regression

Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Arts in Education

ACCOUNTABILITY AND TEACHER ATTITUDES: CONSEQUENTIAL VALIDITY
EVIDENCE FOR FLORIDA'S EDUCATIONAL ACCOUNTABILITY SYSTEM


By

Kathryn Elizabeth Miller

May 2006

Chair: David Miller
Major Department: Educational Psychology

Consequences that arise from the Florida Comprehensive Assessment Test

(FCAT), in accordance with the No Child Left Behind Act of 2001 (NCLB), impact the

overall validity of the accountability design established by Florida. To develop a

validity argument for uses and interpretations of the FCAT, we surveyed 75 teachers at six

different schools, ascertaining their opinions on NCLB, Florida's

A+ Plan, the Sunshine State Standards, and the FCAT. Data were analyzed by treating the

areas of accountability as separate entities and examining attitudes toward each area. Data also

were analyzed to examine differences across teachers, to uncover factors that may

influence a teacher's view of accountability.

CHAPTER 1
INTRODUCTION

The No Child Left Behind Act of 2001 (NCLB) marked the beginning of a new era

in education, where each state became responsible for creating a system of educational

accountability. Accountability refers to the process of holding school districts, schools,

teachers, and students responsible for learning. An accountability system is a structure for

making decisions and applying consequences based on information collected from

assessments. Decisions and consequences that result from accountability systems range

from allocation of funds to third-grade promotion. The high-stakes nature of these

accountability designs necessitates a thorough examination of their validity and of the

assessments they encompass. Our purpose was to gather information that can be used

toward an argument for validity.

Validity is an overall appraisal of the degree to which an assessment's use and

interpretation are adequate and appropriate (Messick, 1995). A validity argument is

made by collecting empirical evidence and providing theoretical rationales for the uses

and interpretations (Haertel, 1999).

We focused on providing evidence for a validity argument for Florida's use of the

Florida Comprehensive Assessment Test (FCAT). Developing a validity argument is

multifaceted and includes examining consequences that arise from high-stakes testing.

Our study mainly addressed the new accountability legislation and its consequences for

teachers in the state of Florida.

To better illustrate Florida's accountability system, factors that contribute to it or

are included in it are outlined in the next sections. NCLB is of interest because it is the

new legislation that must be adhered to by each state. In fulfilling NCLB, Florida's

system includes measuring Adequate Yearly Progress (AYP), the A+ Plan, the Sunshine

State Standards (SSS), and the FCAT.

National and State Accountability

No Child Left Behind Act of 2001

NCLB is at the center of educational accountability in every state. NCLB is the

new version of the Elementary and Secondary Education Act written into law in 1965 and

provides billions of dollars in federal funding for various educational programs (USDOE,

2006a). The purpose of NCLB is to ensure that every child in America is able to meet

the high learning standards of the state the child resides in. The act, though intricate and

complex, is founded on four basic principles: stronger accountability, increased flexibility

and local control, more options for parents, and emphasis on proven teaching methods

(USDOE, 2006b). NCLB aims at improving education all over the United States and

raising the bar for what is deemed acceptable learning. Goals of NCLB are numerous,

specific, and lofty. The goals most pertinent (No Child Left Behind, 2002) to our study

are listed below.

* All students will reach high standards, at a minimum attaining proficiency or better
in reading and mathematics by 2013-2014.

* By 2013-2014, all students will be proficient in reading by the end of the third
grade.

* All limited English proficiency (LEP) students will become proficient in English.

* All students will be taught by highly qualified teachers.

* All students will be educated in learning environments that are safe, drug-free and
conducive to learning.

* All students will graduate from high school.

NCLB requires that each state develop its own accountability system that is valid,

reliable, and meets all requirements outlined in the act. The degree to which each system

is valid and reliable is individually established by each state. The 2005-2006 school year

marked the deadline for testing all students in grades 3-8 in mathematics and reading,

annually. Science must be included in the testing regime by the 2007-2008 school year,

at least once during elementary, middle, and high school. All assessments must be

aligned with the content standards established by the state. All students must be

proficient by 2013-2014. Each state determines its own guidelines for proficient status

(Lane, 2004). The 2002-2003 school year marked the deadline for each state to furnish

annual report cards of their progress. The report cards include information on student

achievement by district and subgroup. Minority students, students with disabilities, LEP

students, and children from low-income families are all included in the annual report

cards (No Child Left Behind, 2002).

Florida's accountability system, in fulfilling NCLB, includes AYP, Florida's A+

Plan (school grades), individual student progress towards (or consistent proficient levels

of) mastery on the FCAT, and a return on investment. Return on investment is a measure

that relates dollars spent to student achievement (FDOE, 2005b). These elements are

designed to provide a cohesive and extensive representation of a school's performance

and are made available for parents, educators, and members of the community.

Adequate Yearly Progress as Determined in Florida

"Adequate Yearly Progress measures the progress of all public schools, and school

districts toward enabling all students to meet the state's academic achievement standards"

(FDOE, 2005b, p.1). AYP targets the performance of every subgroup and aims to ensure

that in 1 year's time, students are learning 1 year's worth of knowledge as delineated in

the content standards. Subgroups are created on the basis of race or ethnicity,

socioeconomic status (SES), disability, and English proficiency. States are required to

define AYP for the state, school districts, and schools in a way that will facilitate all

students to meet the state's achievement standards by 2014 (FDOE, 2005b).

Florida uses the FCAT to ascertain each student's level of proficiency, a necessity

for making AYP. There are five possible achievement levels one can attain from the

single FCAT score. The levels range from 1 to 5. Level 1 is below basic, Level 2 is

basic, Levels 3 and 4 are proficient, and Level 5 is advanced. All students scoring a 3 or

above are considered proficient for classification purposes. Florida also has a separate

assessment for students with disabilities who would not be able to earn a standard

diploma. The Florida Alternate Assessment Report (FAAR) also uses a 5-point scale to

determine a student's proficiency level on the SSS. The FAAR scale is as follows: Levels

0 or 1 are below basic, Level 2 is basic, Level 3 is proficient, and Level 4 is advanced

(FDOE, 2005b).
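As a rough sketch, the two level-to-category mappings above can be written as lookup tables (the names below are ours for illustration, not FDOE terminology):

```python
# FCAT achievement levels (1-5) mapped to their reporting categories.
FCAT_CATEGORIES = {1: "below basic", 2: "basic",
                   3: "proficient", 4: "proficient", 5: "advanced"}

# FAAR levels (0-4) mapped to their reporting categories.
FAAR_CATEGORIES = {0: "below basic", 1: "below basic", 2: "basic",
                   3: "proficient", 4: "advanced"}

def is_proficient_fcat(level: int) -> bool:
    """A student scoring 3 or above on the FCAT counts as proficient."""
    return level >= 3
```

Note that on the FCAT both Levels 3 and 4 fall in the "proficient" category, whereas the FAAR reserves a single level (3) for proficiency.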

For a school in Florida to make AYP, 95% of all students and all identified

subgroups must partake in the FCAT or alternative assessment (when applicable). A

subgroup must include at least 30 students to be included in the AYP calculations. All

goals must be met by 2014 and a blueprint for the progression toward that goal must also

be agreed on and met annually (FDOE, 2003). For example, if Florida declares that 68%

of students will be proficient in mathematics by 2007, then that goal must be realized to

make AYP. Additionally, there must be a 1% increase in the percentage of students

proficient in writing. If the annual objectives for reading or mathematics are not met by

subgroups in a school or district, AYP can still be met if the percentage of nonproficient

students decreased by 10% from the previous year. It is not possible to make AYP if

(under Florida's A+ Plan) a school receives a D or F (FDOE, 2003).
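Using only the rules summarized above (the 95% participation requirement, the 30-student subgroup minimum, the 10% safe-harbor reduction in nonproficient students, and the D/F school-grade bar), a simplified subgroup-level AYP check might be sketched as follows; the official calculation involves further conditions not covered here:

```python
def subgroup_makes_ayp(n_students, n_tested, pct_proficient,
                       annual_target, prior_pct_nonproficient,
                       school_grade):
    """Simplified AYP check for one subgroup, using only the rules
    summarized in the text above."""
    # A school graded D or F under the A+ Plan cannot make AYP.
    if school_grade in ("D", "F"):
        return False
    # Subgroups with fewer than 30 students are excluded from the
    # calculation, so they do not count against the school.
    if n_students < 30:
        return True
    # At least 95% of the subgroup must take the assessment.
    if n_tested / n_students < 0.95:
        return False
    # Meet the annual proficiency objective outright...
    if pct_proficient >= annual_target:
        return True
    # ...or via "safe harbor": the nonproficient share decreased
    # by at least 10% from the previous year.
    nonproficient = 100 - pct_proficient
    return nonproficient <= 0.90 * prior_pct_nonproficient
```

For example, with a 68% target, a subgroup at 64% proficient still makes AYP if its nonproficient share fell from 40% to 36% (a 10% reduction).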

Florida's A+ Plan

Florida's A+ Plan is a grading system for schools: A is the highest grade a school

can receive and F is the lowest (the A+ Plan uses a traditional grading scale of A, B, C,

D, and F). To make AYP, a school must first receive a grade of C or higher (FDOE,

2005d).

* A = 410 points or more, meet AYP of bottom 25% in reading, gains for bottom
25% are within 10 points of gains for all students, and 95% of eligible students are
tested

* B = 380 points or more, meet AYP of bottom 25% in reading within 2 years, and
90% of eligible students are tested

* C = 320 points or more, meet AYP of bottom 25% in reading within 2 years, and
90% of eligible students are tested

* D = 280 points or more and 90% of eligible students are tested

* F = Fewer than 280 points or less than 90% of eligible students tested.

A school can earn points if their students do well on the assessments or improve

from the previous year. Schools earn one point for every percent of students scoring 3, 4,

or 5 in mathematics. Schools also receive one point for every percent of

students scoring a 3, 4, or 5 in reading. One point is also given for each percent of

students who score a 3 or above on the writing assessment. For each percent of students

who gain one achievement level and for students who maintain a level of 3 or above, one

point is awarded. One point is awarded for each percent of students in Levels 1 or 2

demonstrating more than one year's growth. One point is awarded for each percent of the

lowest performing readers (bottom 25%) making learning gains from the previous year

(FDOE, 2005d).
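The point rules just described can be tallied in a short sketch (the parameter names are invented for illustration; the official formula includes details beyond what is summarized above):

```python
def school_points(pct_math_345, pct_read_345, pct_write_3plus,
                  pct_gained_or_held_level, pct_low_level_big_gains,
                  pct_bottom_quartile_gains):
    """One point per percent of students in each category described above."""
    return (pct_math_345                  # scoring 3, 4, or 5 in mathematics
            + pct_read_345                # scoring 3, 4, or 5 in reading
            + pct_write_3plus             # scoring 3 or above in writing
            + pct_gained_or_held_level    # gained a level, or held 3 or above
            + pct_low_level_big_gains     # Levels 1-2 with > one year's growth
            + pct_bottom_quartile_gains)  # lowest 25% of readers making gains

def letter_grade(points):
    """Point thresholds from the A+ Plan; note that the other criteria
    (testing rates, AYP of the bottom 25% in reading) also apply."""
    if points >= 410: return "A"
    if points >= 380: return "B"
    if points >= 320: return "C"
    if points >= 280: return "D"
    return "F"
```

A school with 70% proficient in mathematics, 65% in reading, 80% in writing, 60% making or holding gains, 20% of low-level students showing large growth, and 55% of its weakest readers gaining would total 350 points, in the C range on points alone.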

Sunshine State Standards

The SSS were approved by the Board of Education in Florida in 1996. The

standards provide expectations for student performance and achievement. The standards

were written in seven subject areas and aimed to allow flexibility in curriculum, catering

to the needs of different schools. In recent years, changes to the SSS were made to better

accommodate new accountability legislation. Grade level expectations for the major

subject areas were added and are guidelines for the FCAT. The subject areas outlined in

the SSS are music and fine arts, foreign language, language arts, mathematics, science,

social studies, physical education, and health (FDOE, 2005e).

Florida Comprehensive Assessment Test

The FCAT is the only test administered statewide that is designed to align with the

SSS. The FCAT determines a students' level of achievement at each grade level making

it the primary AYP determinate. The FCAT has two major components, a norm-

referenced test (NRT) and a criterion-referenced test (CRT). The NRT currently being

used as part of the FCAT is the Stanford 10. It is used to compare individual students in

Florida to national norms (FDOE, 2005c). NRTs are designed to maximize response

variance and artificially spread the scores (Kohn, 2000). Reading and mathematics are the

only subjects measured in the NRT.

The CRT is designed to measure a student's level of mastery of the SSS in reading,

writing, mathematics, and science. These scores are not designed to be used for

comparison, but as a comprehensive exam measuring knowledge gained inside the

classroom. All students in grades 3-10 are required to take the FCAT Reading and

Mathematics. Students in grades 4, 8, and 10 also take the FCAT Writing and grades 5, 8,

and 11 take the Science portion of the FCAT (FDOE, 2005c).

The FCAT is a high-stakes assessment because of the consequences attached to the

scores (Kohn, 2000). The results impact grade to grade promotion, funds allocated to

schools, high school graduation, teacher rewards, and how the school is viewed by the

community. Intuitively, the stakes attached to the FCAT put many teachers under an

extreme amount of pressure to increase student performance. The pressures felt by

teachers and their opinions have not been extensively studied. Our study aims to

examine the consequences of high-stakes testing for teachers.

Effects of Testing on Teachers

The fundamental rationale for our study is to contribute to the validation of Florida's

accountability system as a whole by examining teacher opinions. Our study examines

opinions for all areas in the accountability design, focusing primarily on the uses and

interpretations of the FCAT. Validation is essential because of the consequences attached

to FCAT scores. AYP and school grades are directly determined by FCAT scores.

Schools that do not make AYP face serious consequences. To illustrate, after 5 years of

failing to make AYP a school will be identified for restructuring. Restructuring entails

implementing "significant alternative governance actions, state takeover, the hiring of a

private management contractor, converting to a charter school, or significant staff

restructuring" (FDOE, 2005b). These and other consequences are the driving force

behind our study. Teacher opinions about the validity of each area of accountability will

be surveyed, specifically FCAT content, SSS, Florida's A+ Plan, and NCLB.

CHAPTER 2
LITERATURE REVIEW

The significance of a methodical examination of validity is emphasized in

measurement literature and widely studied by assessment specialists. Proper validation

methodology and threats to validity are outlined throughout this literature review

providing a foundation for our study.

Validity Argument

The concept of forming a validity argument is that "validation should be a process

of constructing and evaluating arguments for and against proposed test interpretations

and uses" (Haertel, 1999, p. 5). In a meeting of the National Council on Measurement in

Education (NCME) in 1999, President Edward Haertel explained common flaws in the

validation of an assessment's use. Planning a validity argument is often done by going

down a checklist. Checking off items shows what has been accomplished and leaves

little room for discovering evidence against the intended interpretation (Haertel, 1999).

According to Cronbach, "the task of validation is not to uphold a test, practice, or theory.

Ideally, validators will prepare as debaters do. Studying a topic from all angles, a debater

grasps the arguments pro and con so well that he or she could speak for either side"

(Cronbach, 1988, p.3). Haertel points out that though validation, in practice, may be

flawed, few people are willing to investigate or change the uses and interpretations of

tests (Haertel, 1999).

Validity is the degree of appropriateness and adequacy for the intended use and

interpretation of an assessment. According to Messick (1995), "Validity is not a property


of the test or assessment as such, but rather the meaning of the test scores". Validity is a

unitary concept and evidence must be collected from different perspectives. There are six

aspects of validity to consider when developing a validity argument emphasizing content,

substantive, structural, generalizability, external, and consequential basis for construct

validity (Messick, 1995). A brief overview of ways to collect evidence for each aspect of

validity is followed by major validity issues surrounding high-stakes testing and

validation practices in general.

* Collecting evidence for the content aspect of validity includes content relevance,
proper representation of the learning objectives to be measured by the items, and
overall item quality.

* Collecting evidence for the substantive aspect of validity refers to test respondents
engaging in the proper mental processes required by each assessment task.

* Collecting evidence for the structural aspect of validity includes the extent to which
the internal structure of the assessment, individual items, and scoring rubrics align
with construct domain of interest.

* Collecting evidence for the generalizability aspect of validity studies the extent to
which score interpretations generalize to and across different population groups,
settings, and tasks.

* Collecting evidence for the external aspect of validity is the process of using
already established tools and practices to judge the quality of the new assessment or
system.

* Collecting evidence for the consequential aspect of validity consists of discovering
the consequences, both actual and potential.

We were concerned primarily with consequential evidence, namely the effects of

high-stakes testing on teachers. After teachers receive the results, they can lend judgment

to the comparability of the test scores and the students' abilities during class time. Also,

teachers see the direct effects of testing on students before, during, and after testing

times, hence affording another viewpoint on the authority of the FCAT's use. The scores

generated from the FCAT are composed of the knowledge acquired in class and many

other factors. The FCAT CRT measures specific domains of knowledge acquired in class;

factors contributing to a student's score external to each domain are forms of error

(Haladyna & Downing, 2004). Potential contributing factors to error are discussed in the

next sections. These are important considerations for our study because teachers are

observers and contributing sources of error.

Validity Issues in High-Stakes Testing

When high stakes are attached to an assessment, test developers, school officials,

and decision makers ensure several aspects of validity. The reliability indices will

undoubtedly be above .9 or .95. The content measured in the assessments will be directly

drawn from the standards set forth by the state and taught in every classroom. Despite

the attention placed on validation, numerous problems arise in high-stakes testing. The

next sections describe some validity issues examined in our study.

Construct-Irrelevant Variance

Construct-irrelevant variance (CIV) is systematic error variance or bias. An

examination of the contributing sources of CIV is important for our study because

teachers are instructors, test preparers, and test administrators. Teachers have the

propensity to impact CIV in a myriad of ways, thereby shaping validity.

Lord and Novick (1968, p.43) describe "systematic error as an undesirable change

in true score". Systematic error correlates to both true and observed scores because each

individual within the group is either affected or unaffected by the CIV (Haladyna &

Downing, 2004). To illustrate, a student scores a 130 on an IQ test that has a standard

error of measurement (SEM) of 3. According to classical test theory, the SEM is derived

mathematically, consistent across test takers, and accounts for random error (Crocker &

Algina, 1986). Other factors may have been measured systematically into the student's score

having nothing to do with the construct of intelligence. Anything besides the construct of

interest possibly measured with the construct is CIV (Haladyna & Downing, 2004).
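The SEM is cited above only in passing; in classical test theory it is conventionally computed from the score standard deviation and the test's reliability (a standard formula assumed here, not given in the text):

```latex
\mathrm{SEM} = \sigma_X \sqrt{1 - \rho_{XX'}}
```

For the IQ example, an SEM of 3 puts a 68% confidence band of roughly 127 to 133 around the observed score of 130; CIV is systematic error on top of, and not captured by, this random-error band.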

Contributing sources of CIV can be specific to an individual or group. An example

of systematic error that is constant for an entire group is a rater who is stricter than

other raters. If the rater administering the IQ test scores too stringently, that rater is

contributing to systematic error, measured with the true score of the student. Every

student assessed by that rater will be at a slight disadvantage. Also, when there are

multiple forms of a test, it is always possible that one form will be slightly more difficult

than the rest. The entire group receiving the more difficult form will have a score that is

an underestimate of their true score for that particular construct. Likewise, an entire

group may have an easier form and their results will be an overestimate of their true

ability (Haladyna & Downing, 2004).

The other type of error that occurs systematically is specific to individuals. Perhaps

the most common source of CIV is reading comprehension (Haladyna & Downing,

2004). This occurs when a student's ability to read the question affects the answer.

For example, a student may know the answer to a question about the solar system, but

because of the vocabulary in the question, is unable to answer. This is especially

problematic for LEP students (Abedi, 2004). If the student would score higher on an

identical form of the test written in their native language, CIV is affecting their results.

Understanding CIV is important for our study because of the implications for

FCAT results. When interpreting FCAT scores, it is important to consider all potential

elements measured in the raw score. Every student has different innate abilities,

motivations, and distractions measured in their results. It is important to acknowledge and

minimize error, thereby increasing validity.

Test Preparation

Test preparation is recommended by assessment specialists. Preparation influences

the error variance in test results. Sound preparation includes providing examples of

different item formats, motivating students, teaching students to use time effectively,

making educated guesses, and so on. Students who are properly prepared for a test will do

better than students without preparation. However, it is possible to prepare students too

much for an exam. The only way to prevent CIV is for each district, school, and educator

to uniformly prepare their students according to the guidelines provided to them in the

testing manuals (Haladyna & Downing, 2004).

Beyond uniformly preparing students, Haladyna and Downing (2004) discuss the

ethical issues which arise from high-stakes test preparation. They address specific issues

including curriculum developed on the basis of test content as opposed to content

standards established by the state, providing students with similar or identical items, or

anything that may narrow the intended curriculum. High-stakes tests like the FCAT are

designed to draw a representative sample from a larger domain and assess it. Students

should be taught all of the domain (or content standards) and not overly exposed to

information that is more likely to be on the FCAT.

If the construct is an ability (rather than a domain of knowledge), different

problems may occur. The FCAT Writing is a writing assessment administered each year

to students in grades 4, 8, and 10. If students are taught to write in accordance with the

FCAT Writing rubric and are not exposed to other styles of writing, it would be an

example of construct-irrelevant easiness (Haladyna 2004). The score from the writing

assessment will give an inflated estimate of a student's writing ability.

Sources of Unreliability for the No Child Left Behind Accountability Designs

In the previous sections, possible threats to the validity of specific assessments,

high-stakes testing in general, and accountability designs as a whole were discussed. For

a viable degree of validity to exist, some reliability (or consistency) must be present.

Reliability is most commonly examined as a property of an assessment and not for an

entire accountability design (Hill & DePascale, 2003).

The NCLB act requires each subgroup within a school to make AYP. Twelve

states have established a cut-off group size that they deem reliable. The cut-offs for those

12 states range from 10 to 75 students, with a median of 30. Florida requires a minimum

of 30 students in a subgroup to be counted (FDOE, 2003). The cut-offs are in place to

ensure results collected yield reliable information about a subgroup. For example, if three

Native-American students attend one school, it is impossible to get any reliable

information from their test results. A general reliability rule is the more information (test

results) the higher the reliability.
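The rule of thumb that more test results yield higher reliability can be made concrete with the textbook standard error of a proportion (a formula assumed here for illustration, not given in the text): for a subgroup with true proficiency rate p, the sampling error of the observed rate shrinks with the square root of the group size.

```python
import math

def se_of_proportion(p, n):
    """Standard error of an observed proficiency rate for n students."""
    return math.sqrt(p * (1 - p) / n)

# A 3-student subgroup yields a wildly unstable estimate; 30 (Florida's
# cut-off) is better; 300 (the size Hill and DePascale suggest) is far
# more stable.
for n in (3, 30, 300):
    print(n, round(se_of_proportion(0.5, n), 3))
```

At p = 0.5 the standard error drops by about a factor of three for each tenfold increase in subgroup size, which is why a three-student subgroup yields essentially no reliable information.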

NCLB requires all subgroups make AYP. One subgroup can cause an entire school

to fail, reinforcing the need to ensure the reliability (and validity) of the accountability

design. The recommended number of students required for each subgroup is much higher

than in practice. Hill and DePascale (2003) suggest that roughly 300 students would be

adequate. This would encompass very few subgroups, greatly diminishing the validity of

the accountability design. To reiterate, for the results to be reliable, the number of

students needed would be far larger than most subgroups. Only testing larger subgroups

diminishes the validity of the accountability design and negates the entire purpose of

NCLB.

Positive Consequences of High-Stakes Testing

The general sentiment towards high-stakes testing in measurement literature is

unenthusiastic, but there are positive effects of testing. Our study examined intended,

unintended, positive, and negative consequences of testing and their effects on teachers.

Positive consequences of accountability examined in our study are outlined below. Cizek

(2001) described the following 10 consequences in an article about unintended

consequences.

* Professional Development- Professional development for educators has been
"spotty, hit or miss, of questionable research base, of dubious effectiveness, and
thoroughly avoidable" in the past and sometimes at present. However, professional
development is becoming increasingly better over time. The new accountability
policies and "Principles of High-Quality Professional Development" established by
the Department of Education are ensuring teachers are constantly gaining new
knowledge and expertise in their subject areas.

* Accommodation- The new federal legislation requires that all students be tested.
All students must be assessed and accommodated. Extra attention is given to
students who need it and much focus has been brought to students who may have
been overlooked in the past. Cizek mentions a research study where disadvantaged
students, who had some history of failing, reported that their teachers began to
focus more attention on them after the high-stakes testing and accountability
program was established.

* Knowledge About Testing- The constant submersion in high-stakes testing has
aided in educating teachers on test content, consequences, and construction.
Teachers understand the entire practice of testing more now than ever. This can
affect how well they write tests, grade exams, and develop rubrics, as well as their
assessment practices in general at the classroom level.

* Collection of Information- School districts have become more conscientious about
their data collection practices.

* Use of Information- The accountability movement is in full swing, which means
finding information about test scores, funding, spending, graduation rates, and the
like, is as easy to pull up over the internet as your favorite recipe. This information
is all used to improve programs and allocate funds where needed.









* Educational Options-In addition to traditional public schools, parents and students
often have the option of charter schools, magnet schools, and home schooling.

* Accountability Systems-Cizek argues that high-stakes tests are often the foundation
for accountability systems and that the term accountability, as it is understood
today, derives its connotation from high-stakes testing.

* Educators' Intimacy with Their Discipline-The idea behind this consequence is that
educators chosen to be involved with content or test development will be immersed
in discussion about the content and it will trickle down to the local level.

* Quality of Tests-Tests today are "highly reliable, free from bias, relevant and age
appropriate, higher order, tightly related to important and public goals, time and
cost efficient, and yielding remarkably consistent decisions" according to Cizek
(2001).

* Increased Student Learning- The primary goal and intended consequence of high-
stakes testing is to increase student learning. There is research that shows a positive
relationship between the presence of high stakes testing and student scores on the
International Assessment of Educational Progress in Canada. In addition there are
other studies that show favorable results for high-stakes testing.

Collecting consequential evidence for validity is the primary focus of our study.

Suggestions made by Cronbach (1988), Messick (1995), and Haertel (1999) for forming a

validity argument were followed throughout our study. Also, the survey instrument was

developed and analyzed based on the information and validity cautions provided in the

articles by Haladyna and Downing (2004), Abedi (2004), Hill and Depascale (2003), and


Cizek (2001).














CHAPTER 3
METHODS

Respondents

The sample of 75 teachers was drawn from the experimentally accessible

population of 261 teachers employed in six elementary schools (School A: n = 30, School

B: n = 50, School C: n = 55, School D: n = 33, School E: n = 52, School F: n = 41) from

two school districts: one school in central Florida and five in north-central Florida. The original

protocol was to investigate six elementary schools from the same district. These schools

were selected based on their accountability success (school grade in the A+ plan) and

AYP status from 2004. Permission to survey teachers was sought from schools receiving

grades of A, B, C, or D. There were no F schools in this school district. A representative

sample of schools was sought to compare the views of teachers from schools of varying

success with Florida's accountability system.

The theoretical premise behind choosing schools receiving both high and low

grades was to gain a lucid depiction of the consequences of testing at the classroom level

and the opinions of teachers from dissimilar schools in relation to each other. For

example, schools with a poor accountability record (low grades) may place more stress

on teachers to improve their students' FCAT scores. Also, teachers from schools having

no success with accountability may be more apathetic than teachers from schools with

established success. Likewise, teachers from successful schools may be under constant

pressure to improve or maintain high FCAT scores.









Five out of the six schools approved the protocol. The school that declined was

replaced by a school with a similar accountability record (School C). However, the

school is located in a different school district. Table 3-1 displays demographic and

accountability information for each school (FDOE, 2005a). Seventy-five (28.7%) surveys

were returned within the allotted timeframe [School A: n = 15 (50%), School B: n = 9

(18%), School C: n = 10 (18.2%), School D: n = 5 (15.2%), School E: n = 22 (42.4%),

School F: n = 14 (34.2%)].

All teachers from kindergarten through fifth grade were asked to participate

including Exceptional Student Education (ESE), Gifted, English for Speakers of Other

Languages (ESOL), Physical Education (P.E.), Art, Music, and Speech. Administrators

were not asked to participate.

Table 3-1 School Demographic Information

School Location Grade Grade Grade Total SES Minority AYP
2005 2004 2003 Students % % Status
A N. central A A B 219 54 31 Provisional
B N. central D D C 183 93 97 Not met
C Central A A A 396 39 37 Met
D N. central D - - 88 92 95 Not met
E N. central B B A 329 41 39 Provisional
F N. central D C B 177 87 84 Not met
*SES is based on percentage of students eligible for free and reduced lunch

Materials

A survey instrument was developed to determine teacher opinions of accountability

at the national and state levels, mainly the effects of the new laws on themselves and their

students. In addition to measuring opinions held by teachers, this survey was developed

to address certain validity concerns that are influenced by teachers in terms of gathering









consequential evidence for a validity argument, such as teaching to the test (i.e., teachers

will be asked to what extent they stress material that is likely to show up on the FCAT).

The survey has three parts and consists of three questions and 34 statements with a

corresponding 5-point Likert scale. The scale is from "strongly disagree" to "strongly

agree" and contains a neutral point. The first section comprises statements (items 1-11)

about the NCLB act, Florida's A+ Plan, and general items about Florida's accountability

design. Part two ascertains opinions (items 12-34) pertaining to Florida's accountability

design at a less macro level, particularly the SSS and the FCAT. Teachers have a more

intimate relationship with the SSS and the FCAT, so more items (and of greater detail)

were included in this section. The third section contains three open-ended questions

inquiring about professional information from the participant (the entire survey is shown in the Appendix).

Procedure

Once permission was given by each school, the surveys were hand-delivered, along

with an invitation to participate and a self-addressed stamped envelope for each teacher.

Surveys were color-coded by school for identification purposes. Packets containing the

above mentioned items were placed in teacher mailboxes by school personnel for

teachers to examine at their leisure. Teachers were given written instructions to return the

surveys within a specified timeframe of approximately 2 weeks, on average, for each

school. The length of time it took to gain permission from schools varied extensively,

causing the packets to be delivered on different days between the months of August and

October in 2005.

Analysis Approach

The design was based on establishing five independent variables, or five separate

areas of accountability that are in the realm of a teacher's expertise. The five branches of









interest are (corresponding new variable labels are in parentheses): (1) the No Child Left

Behind Act of 2001 (SUMNCLB), (2) Florida's A+ Plan (SUMAPLAN), (3) the Sunshine

State Standards (SUMSSS), (4) the Florida Comprehensive Assessment Test

(SUMFCAT), and (5) the subsections of the FCAT (SUMFCATSECT). Each variable was

created by summing the responses of like items on the survey instrument. Grouping

different items to formulate new variables yields a more reliable measure of an overall

attitude towards a specific subject. In addition, a summated score for each participant was

calculated and used in the analysis. The summated score (AVERAGE) was used as a

comprehensive measure for each individual's stance on accountability and was derived from

items that specifically addressed an attitude. Cronbach's alpha was computed for each

new variable as a measure of reliability (Table 3-2).

Table 3-2 Reliability Statistics for New Variables

New Variable Statements* N Cronbach's alpha
SUMNCLB 2, 3, 4 72 .903
SUMAPLAN 5, 6, 8 68 .729
SUMSSS 12, 13, 14 70 .743
SUMFCAT 15, 16, 21 69 .826
SUMFCATSECT 31-34 60 .912
AVERAGE 2-6, 8, 12-16, 21 63 .848
*Statements found in Appendix
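As a supplement, the alpha values in Table 3-2 follow directly from the standard definition of Cronbach's alpha. The sketch below is an illustration only, using hypothetical 5-point Likert responses rather than the study data, not a reproduction of the SPSS computation:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(summated scores))
    """
    k = len(item_scores[0])                      # number of items
    totals = [sum(row) for row in item_scores]   # summated score per respondent
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# Hypothetical responses (rows = respondents, columns = like items)
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
]
print(round(cronbach_alpha(responses), 3))
```

When items move together across respondents, the summated-score variance dominates the item variances and alpha approaches 1, which is why summing like items yields a more reliable overall measure.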

The primary function of the survey and purpose of our study was to uncover

teacher opinions at specific levels of accountability. Teachers were surveyed in the hope that

they would lend a unique perspective on the validity of Florida's educational accountability

design and the consequences of high-stakes testing. A secondary focus of our study was

to examine differences within the sample and uncover factors contributing to the beliefs

held by each teacher. In theory, teachers from different schools (i.e., instructing diverse

subpopulations of students) should have very different experiences with the practices









measured in the survey. The aim of looking at teachers as subpopulations was to uncover

variations in opinions that can be directly influenced by the working/teaching

environment. The rationale for subdividing teachers by school was to gain an

understanding of how the consequences of high stakes testing affect teachers, from

dissimilar schools, in varying respects.

Some variables used in our study occur naturally as a function of sampling or the

demographic information provided by the teachers. The variables of interest are SCHOOL

(school where respondent teaches), GRADE (school grade in the A+ Plan), YEAR (the

number of years the respondent has been teaching), INSIGHT (item 6: the grade attached

to each school gives parents insight into how well that school is operating), and IMPACT

(item 9: I have an impact on the grade my school receives). SCHOOL and GRADE

were analyzed as categorical variables on a nominal scale. YEAR, IMPACT, and

INSIGHT are quantitative variables, on an interval scale.

Research Question 1: Teachers' opinions of accountability will be significantly
different at each of the four areas of accountability: (1) NCLB (2) A+ Plan (3)
SSS (4) FCAT

Using the variables created from the existing data set, analyses were performed to

check the overall attitudes towards each accountability branch by the entire sample of

teachers. SPSS was used to run six non-directional pair-wise dependent samples t-tests to

test this hypothesis. A Bonferroni adjustment was made to control for the family-wise

Type 1 error rate (alpha = .05/6). Rejection of the null hypothesis for an individual t-test

indicates that there is a statistically significant difference between the two areas of

accountability.
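The t statistic for each of these pair-wise comparisons is the mean within-teacher difference divided by its standard error, judged against the Bonferroni-adjusted alpha. A minimal sketch with hypothetical summated scores (not the study data):

```python
from math import sqrt
from statistics import mean, stdev

def dependent_t(x, y):
    """Paired (dependent samples) t statistic:
    mean difference / (sd of differences / sqrt(n)); also returns df = n - 1."""
    diffs = [b - a for a, b in zip(x, y)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Hypothetical summated scores for the same five teachers on two areas
sum_nclb = [2, 3, 4, 2, 3]
sum_sss = [4, 4, 5, 3, 4]
t, df = dependent_t(sum_nclb, sum_sss)

alpha_per_test = .05 / 6  # Bonferroni adjustment for six pair-wise comparisons
print(t, df, alpha_per_test)
```

Because the same teachers supply both scores, the test operates on within-teacher differences, which removes between-teacher variability from the error term.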









Research Question 2: GRADE will be a contributing factor to the responses on
IMPACT, INSIGHT, SUMNCLB, SUMAPLAN, SUMSSS, and SUMFCAT

Planned complex contrasts were performed to check for mean differences, where

schools with a grade of A or B were contrasted with schools with a grade of D on each

of the six variables of interest. IMPACT and INSIGHT were selected based on the

empirical expectation that teachers from underachieving schools would have different

opinions on items that specifically address their school grade (i.e., teachers with low

scoring students are less likely to attribute their students' and schools' failures to

themselves). Mean differences are of interest for SUMNCLB, SUMAPLAN, SUMSSS,

and SUMFCAT because they build on the first research question by breaking down

opinions of each area of accountability across teachers by grade. A Bonferroni

adjustment was made to control for the family-wise Type 1 error rate.
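When the two groups of schools cannot be assumed to have equal variances, such a contrast can use the unequal-variances (Welch) t statistic. The sketch below uses hypothetical ratings, not the study data or the SPSS output:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(g1, g2):
    """Independent samples t statistic without assuming equal variances (Welch),
    with the Welch-Satterthwaite degrees of freedom."""
    n1, n2 = len(g1), len(g2)
    v1, v2 = variance(g1), variance(g2)
    se2 = v1 / n1 + v2 / n2                      # squared standard error of the difference
    t = (mean(g1) - mean(g2)) / sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical INSIGHT ratings: A and B schools combined vs. D schools
ab_schools = [4, 4, 5, 5]
d_schools = [2, 2, 3, 3]
t, df = welch_t(ab_schools, d_schools)
print(round(t, 3), round(df, 1))
```

The Welch degrees of freedom shrink toward the smaller group when the variances differ, which is the correction applied whenever Levene's test rejects homogeneity.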

Research Question 3: YEAR will have a linear relationship with AVERAGE

A simple linear regression was conducted to test whether the two variables have a

linear relationship. If the simple model is accepted, YEAR can be used, in part, as a

predictor for overall teacher attitudes.
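The slope, intercept, and R-squared of a one-predictor least-squares model have closed forms. The sketch below uses hypothetical (years, average attitude) pairs, not the study sample:

```python
from statistics import mean

def simple_regression(x, y):
    """Ordinary least squares for one predictor:
    returns (intercept, slope, r_squared)."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx                 # expected change in y per unit change in x
    intercept = my - slope * mx
    r_squared = sxy ** 2 / (sxx * syy)
    return intercept, slope, r_squared

# Hypothetical data: years teaching vs. summated attitude score (1-5 scale)
years = [1, 5, 10, 20, 30]
avg = [2.6, 2.7, 2.8, 2.9, 3.0]
b0, b1, r2 = simple_regression(years, avg)
print(round(b0, 3), round(b1, 4), round(r2, 3))
```

The slope is interpreted exactly as in the thesis: each additional year of teaching predicts a fixed (here very small) increase in the overall attitude score.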

Research Question 4: Teachers will rate the subsections of the FCAT statistically
higher than they rate the FCAT as a whole

A directional pair-wise dependent samples t-test will be performed to test this

research question. Rejection of the null hypothesis will indicate that teachers rate the

subsections (i.e., mathematics, science, reading, and writing) higher than the FCAT in its

entirety, in terms of being an adequate measure of a student's level of mastery. Type 1

error rate was set at alpha = .05. This test was conducted to examine teacher

attitudes towards the FCAT. In theory, teachers could rate the FCAT as an indicator of a

student's level of mastery lower than if it were broken down into subsections.














CHAPTER 4
RESULTS

The descriptive statistics of the measures (i.e., statements on the Likert scale)

included in the overall sample are shown in Table 4-1. The mean score is the average of

all responses for a particular item in terms of the scale of the item response. For

example, because the responses are on a 5-point Likert scale, an average response of 1.58

suggests that on average the responses fell somewhere between "1-strongly disagree" and

"2-disagree". A mean of 3.1 indicates the response fell slightly above "3-neutral". It is

widely accepted, and often recommended, to analyze these data as if they were interval

(i.e., if the Likert scale has at least five points it can be considered continuous), even

though technically the data are ordinal.

Table 4-1 Descriptive Statistics for Survey Statements

Statements N Mean SD
In general, the Florida accountability 73 2.73 1.00
system works well.

Goals set forth by the NCLB Act will 73 2.21 1.01
most likely be actualized.

The NCLB Act has an overall positive 73 2.62 1.05
impact on the United States.

The NCLB Act has an overall positive 74 2.57 1.07
impact on Florida.

The A+ Plan holds schools accountable 70 3.09 1.10
for their students learning.

The grade attached to each school helps 74 2.15 1.12
give parents insight on how well that
school is operating.









Table 4-1 (continued)
Statements N Mean SD
Students from low performing schools 74 2.97 1.09
should be able to transfer to another school.

The A+ Plan helps motivate teachers and 71 2.41 1.33
administrators.

I have an impact on the grade my school 74 3.87 0.91
receives.

Administrators have an impact on the grade 73 3.86 0.89
their school receives.

The student body has an impact on the 73 4.22 0.95
grade their school receives

The SSS adequately outlines the curriculum 72 3.76 0.86
content at each grade level.

The SSS lay the foundation for a broad 72 3.67 0.96
curriculum.

All the SSS will be taught at one point or 70 3.63 0.97
another.

The FCAT measures the SSS well. 69 2.75 0.96

The FCAT assesses the most important 69 2.59 1.02
material at each grade level.

Florida is enacting the NCLB Act 70 2.47 1.02
appropriately with the FCAT.

The high stakes attached to the FCAT are 74 1.99 1.17
necessary.

The FCAT would still be taken seriously 73 3.03 1.01
if there weren't consequences for students.

The FCAT would still be taken seriously if 72 3.64 1.03
there weren't rewards for teachers.

The FCAT is a good indicator of the student's 72 2.68 0.96
level of mastery for required curriculum.









Table 4-1 (continued)
Statements N Mean SD
I have control over how my students perform 68 2.87 0.99
on the FCAT.

I spend extra time in class stressing material 67 3.96 0.99
that is likely to show up on the FCAT.

I spend more time going over test taking skills 65 3.83 1.11
now than before the FCAT was established.

The FCAT has an overall positive impact 74 2.35 1.07
on Florida.

The FCAT has an overall positive impact 73 2.44 1.12
on my school.

The FCAT has an overall positive impact 70 2.24 1.12
on my students.

Item types (e.g. multiple choice) used on the 69 2.84 1.07
FCAT, are the most appropriate type for
each learning objective.

Prompts used in the writing section of the 67 2.76 1.07
FCAT are adequate for measuring the
student's overall writing ability.

Other subjects (e.g. social studies, art) 70 2.36 1.35
should be included in the FCAT.

The FCAT measures the most important 64 3.56 0.94
concepts in Reading.

The FCAT measures the most important 63 3.22 0.99
concepts in Writing.

The FCAT measures the most important 64 3.42 0.94
concepts in Mathematics.

The FCAT measures the most important 60 3.05 0.96
concepts in Science.









A Likert scale gives the option of using a neutral point. The neutral point allows

respondents who are apathetic towards the topic an opportunity to answer fairly.

Unfortunately, using a neutral point can cause the survey results to sway towards the

middle and the results often appear to be insignificant. Part of our study intended to

uncover indifferent attitudes towards certain aspects of educational accountability. For

this purpose a neutral point was used and the percent agreement and disagreement for

each statement are shown in Table 4-2. The percentage reported is the valid percent,

which does not take into account missing data (i.e., the percent is out of the people who

responded to that particular item).
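The valid-percent computation is simple: missing responses are dropped from the denominator before the agree and disagree percentages are taken. A minimal sketch with hypothetical responses (None marks a skipped item):

```python
def valid_percent_agreement(responses):
    """Percent agree (4 or 5) and percent disagree (1 or 2) on a 5-point
    Likert item, using only non-missing responses in the denominator
    (the 'valid percent')."""
    valid = [r for r in responses if r is not None]
    agree = 100 * sum(r >= 4 for r in valid) / len(valid)
    disagree = 100 * sum(r <= 2 for r in valid) / len(valid)
    return agree, disagree

# Hypothetical item responses; None = respondent skipped the item
item = [5, 4, 2, None, 3, 1, 4, None, 2, 3]
print(valid_percent_agreement(item))
```

Neutral ("3") responses are counted in the denominator but in neither percentage, which is why agree and disagree need not sum to 100.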

Table 4-2 Percent Agreement Statistics for Survey Statements

Statement % Agree % Disagree
In general, the Florida accountability 17.8 39.8
system works well.

Goals set forth by the NCLB Act will 9.5 64.4
most likely be actualized.

The NCLB Act has an overall positive 17.8 52.1
impact on the United States.

The NCLB Act has an overall positive 17.6 54.0
impact on Florida.

The A+ Plan holds schools accountable 32.8 27.2
for their students learning.

The grade attached to each school helps 12.2 67.6
give parents insight on how well that
school is operating.

Students from low performing schools 29.8 25.7
should be able to transfer to another school.

The A+ Plan helps motivate teachers and 24.0 57.7
administrators.









Table 4-2 (continued)
Statement % Agree % Disagree
I have an impact on the grade my school 67.5 8.1
receives.

Administrators have an impact on the grade 72.6 9.6
their school receives.

The student body has an impact on the 79.4 5.5
grade their school receives

The SSS adequately outlines the curriculum 66.7 7.0
content at each grade level.

The SSS lay the foundation for a broad 62.5 11.1
curriculum.

All the SSS will be taught at one point or 57.2 10.0
another.

The FCAT measures the SSS well. 21.7 44.9

The FCAT assesses the most important 17.3 49.2
material at each grade level.

Florida is enacting the NCLB Act 17.1 52.9
appropriately with the FCAT.

The high stakes attached to the FCAT are 12.2 70.3
necessary.

The FCAT would still be taken seriously 32.9 27.4
if there weren't consequences for students.

The FCAT would still be taken seriously if 62.5 11.2
there weren't rewards for teachers.

The FCAT is a good indicator of the student's 19.5 44.4
level of mastery for required curriculum.

I have control over how my students perform 22.1 32.3
on the FCAT.

I spend extra time in class stressing material 77.6 11.9
that is likely to show up on the FCAT.









Table 4-2 (continued)
Statement % Agree % Disagree
I spend more time going over test taking skills 67.7 12.3
now than before the FCAT was established.

The FCAT has an overall positive impact 13.5 55.4
on Florida.

The FCAT has an overall positive impact 17.8 54.8
on my school.

The FCAT has an overall positive impact 12.9 61.4
on my students.

Item types (e.g. multiple choice) used on the 21.7 31.8
FCAT, are the most appropriate type for
each learning objective.

Prompts used in the writing section of the 23.9 37.3
FCAT are adequate for measuring the
student's overall writing ability.

Other subjects (e.g. social studies, art) 20.0 55.7
should be included in the FCAT.

The FCAT measures the most important 50.0 7.8
concepts in Reading.

The FCAT measures the most important 34.9 14.2
concepts in Writing.

The FCAT measures the most important 45.3 12.5
concepts in Mathematics.

The FCAT measures the most important 25.0 21.7
concepts in Science.


Research Question 1

Teachers' opinions of accountability will be significantly different at each of the

four areas of accountability: (1) NCLB (2) A+ Plan (3) SSS (4) FCAT. A series of six

non-directional dependent sample t-tests were performed to test for significant









differences at each level of accountability. The means, standard deviations, and sample

sizes for the variables of interest are shown in Table 4-3. A Bonferroni adjustment was

made to control for the family-wise Type 1 error rate. The mean difference for

SUMNCLB and SUMAPLAN was not statistically significant, t(65) = -.205, p = .803.

The mean difference for SUMNCLB and SUMSSS was statistically significant,

t(69) = -9.399, p < .001. The mean difference for SUMNCLB and SUMFCAT was not

statistically significant, t(68) = -1.696, p = .095. The mean difference for SUMAPLAN

and SUMSSS was statistically significant, t(63) = -8.616, p < .001. The mean difference

for SUMAPLAN and SUMFCAT was not statistically significant, t(63) = -1.127, p = .264.

The mean difference for SUMSSS and SUMFCAT was statistically significant,

t(67) = 8.417, p < .001. SUMSSS differed significantly from every other group, indicating that

teachers rate the SSS differently than the other areas of accountability. All other group

differences were not statistically significant.

Table 4-3 Descriptive Statistics for New Variables

Variable N Mean SD
SUMNCLB 72 2.48 0.96
SUMAPLAN 68 2.55 0.96
SUMSSS 70 3.69 0.76
SUMFCAT 69 2.68 0.85
SUMFCATSECT 60 3.30 0.86
AVERAGE 63 2.88 0.66


Research Question 2

GRADE will be a contributing factor to the responses on INSIGHT, IMPACT,

SUMNCLB, SUMAPLAN, SUMSSS, and SUMFCAT. Planned complex contrasts were

performed to test this hypothesis. The Bonferroni test was used for testing the statistical

significance of the simple effects. Schools that received a grade of A or B were combined









and contrasted with schools that received a grade of D. The Bonferroni technique

requires family-wise alpha (.05) to be divided by the number of contrasts (six). The

means, standard deviations, and sample sizes for each variable, broken down by school

grade are shown in Table 4-4. The contrast of A and B schools with D schools was

statistically significant for INSIGHT, t(57.92) = 3.716, p < .001. The contrast of A and B

schools with D schools was statistically significant for IMPACT, t(52.35) = 2.985, p =

.004. The contrast of A and B schools with D schools was not statistically significant for

SUMNCLB, t(69) = -0.923, p = .359. The contrast of A and B schools with D schools

was statistically significant for SUMAPLAN, t(45.56) = 2.830, p = .007. The contrast of

A and B schools with D schools was not statistically significant for SUMSSS, t(67) =

0.661, p = .511. The contrast of A and B schools with D schools was not statistically

significant for SUMFCAT, t(61.67) = 0.215, p = .830. When appropriate, equal variances

were not assumed based on Levene's test for homogeneity of variances.

Table 4-4 Descriptive Statistics for New Variables by GRADE

A Schools B Schools D Schools
Variable N Mean Std. Dev N Mean Std. Dev N Mean Std. Dev
INSIGHT 25 2.92 1.15 21 1.95 0.74 28 1.61 0.92
IMPACT 25 4.36 0.64 21 3.81 0.98 28 3.46 0.88
SUMNCLB 24 2.74 1.04 21 2.03 0.56 27 2.59 1.03
SUMAPLAN 23 3.30 0.87 18 2.17 0.50 27 2.16 0.89
SUMSSS 23 3.83 1.00 20 3.63 0.51 27 3.61 0.70
SUMFCAT 23 2.91 0.97 19 2.46 0.80 27 2.64 0.75

Research Question 3

YEAR will have a linear relationship with AVERAGE. A simple regression

analysis was conducted to examine the degree of association between the outcome

variable AVERAGE and the explanatory variable YEAR. The simple model yielded an

R² of .082 and was statistically significant, F(1, 57) = 5.076, p = .028, suggesting that









the number of years teaching (YEAR) accounts for 8.2% of the variance in an individual's

summated score on the accountability survey (AVERAGE). The adjusted R² for the

model was .066. Table 4-5 reports the unstandardized regression coefficients

(b), the standardized regression coefficients (β), the observed t statistics, and the squared

semi-partial correlations (r²).

The interpretation of the unstandardized regression coefficient for any explanatory

variable is a function of the scale of measurement of that variable. The interpretation of

the regression coefficient for a continuous variable can be made in terms of rate and

direction of change. The regression coefficient indicates the expected unit change in the

outcome variable for each unit change in any explanatory variable, while holding the

others constant. For example, YEAR is a continuous variable with an unstandardized

regression coefficient of b = .014. This suggests that each unit increase in YEAR (i.e.,

number of years teaching) results in an average .014 unit increase in AVERAGE (i.e.,

more positive view on modern accountability). Even though an increase of .014 in

AVERAGE is statistically significant, it may not be practically significant.

Table 4-5 Summary Statistics for Simple Regression

Variables b Std Error β t p r²
Intercept 2.635 .124 21.332 <.001
Year .014 .006 .286 2.253 .028 .082


Research Question 4

Teachers will rate the subsections of the FCAT statistically higher than the FCAT

as a whole. A directional dependent samples t-test was performed to check for mean

differences between SUMFCAT and SUMFCATSECT. The means, standard deviations,

and samples sizes of interest are shown in Table 4-3. The difference in the mean response








for SUMFCAT and SUMFCATSECT was statistically significant, t(59) = -7.670,

p < .001; the null hypothesis was rejected. SUMFCATSECT was rated significantly higher than

SUMFCAT.














CHAPTER 5
DISCUSSION

Discussion of Findings

Research Question 1

The average teacher responses for each of the four major areas of accountability

(NCLB, A+ Plan, SSS, and FCAT) were compared to one another. The SSS were rated

higher than every other area of the accountability design. All other comparisons were not

statistically significant, indicating that teachers on average rate NCLB, A+ Plan, and

FCAT approximately the same. Teachers may have rated the SSS higher than the others

because the SSS are the only accountability topic without direct consequences attached.

When teachers lend their opinion, it is probably difficult to separate the area they are

assessing and the consequences attached to it. In other words, responses on the quality of

the FCAT as a measurement instrument include the negative feelings towards the

consequences. They rate it as a "bad test" independent of the quality of the test because

of the negative consequences associated with it. In addition, teachers work with the SSS

more closely and most likely know more about SSS than the other areas examined in our

study. Teachers appear to have a firmer grasp on the SSS, hence rendering their depiction

of the SSS more accurate than the areas they have been exposed to less.

Research Question 2

Schools with a grade of A and B were combined and compared to all the D schools

on each area of accountability (NCLB, A+ Plan, SSS, and FCAT) in addition to IMPACT

(I have an impact on the grade my school receives) and INSIGHT (the grade attached to









each school gives parents insight into how well that school is operating). The purpose of

this hypothesis was to test whether teachers from successful schools differed from

unsuccessful schools in their opinion on the broad accountability areas, in addition to the

two survey items that tapped into how much control a teacher felt they had on their

schools grade. Also, we examined if teachers felt the grade accurately depicted their

school. The contrasts were statistically significant for INSIGHT, IMPACT, and

SUMAPLAN. That is, teachers from schools with differing grades rated the variables

associated with school grades differently, but not the other variables associated with

different aspects of accountability. Teachers from schools with a low grade rated their

ability to impact their school's grade lower than teachers from a higher achieving school.

Teachers from D schools responded less favorably than the others on the school grading

system's ability to show the public how well that school is actually operating.

Underachieving schools also rated the A+ Plan lower than the A and B schools

combined, probably because the A+ Plan has a more negative effect on them. This is

similar to the previous research question where teacher responses on the quality of the

school grading system are affected by the consequences attached. Intuitively,

underachieving schools have more negative consequences and thus a more negative attitude

towards the A+ Plan. All the other contrasts were not statistically significant.

Research Question 3

Teachers were asked how many years they had been teaching. These data were used

to test whether there was a linear relationship between years as a teacher and overall

attitude toward accountability. There was a small, but statistically significant

relationship. The model shows that with each unit increase in years teaching there is a

.014 unit increase in overall attitude toward accountability. To illustrate, after 20 years of









teaching, there is only a .28 (on a 5-point Likert scale) increase in overall attitude. This is

not practically significant, in that very small and meaningless changes are observed until

the difference in time teaching is very large. However, this relationship may have been

statistically significant because specific items yielded different responses across teachers

with more or less experience. This could be an indicator of awareness by teachers and a

thorough understanding of accountability. Perhaps teachers that have been teaching

longer understand accountability better and are able to rate it more accurately.

Research Question 4

Opinions held by teachers on the FCAT as a whole were compared to the

subsections of the FCAT. Teachers rated the FCAT lower than the sections that comprise

it. One possible reason for this is that teachers responded to the consequences of the

FCAT instead of the quality of the instrument, whereas when responding to individual

subject areas they were able to focus on just the quality. In other words, there are no direct

consequences attached to the FCAT subsections and teachers may be able to answer more

fairly. Another possibility for this discrepancy is that teachers do not really know

whether or not these are good testing instruments. In lay terms, most teachers rated the

FCAT as "bad" and the subsections as "indifferent". It was not that they praised the

FCAT subsections; rather, their opinions were neutral. This could be because they did not know

if each section was a good indicator of a student's level of mastery on a subject area.

Implications of the Descriptive Statistics

In addition to the validity implications of the four research questions addressed in

our study, the descriptive statistics contribute greatly to the validity argument. In general,

evidence for consequential validity of Florida's use of the FCAT collected during our

study is not favorable. Results show that only 21.7% of teachers surveyed agree that "the









FCAT measures the SSS well", 19.5% of teachers agree that "the FCAT is a good

indicator of the student's level of mastery for required curriculum", and 12.9% of

teachers agree that "the FCAT has a positive impact on my students".

Teachers were asked to make an evaluative judgment of the quality of the FCAT. As previously stated, the results were not favorable. What matters may not be how the teachers rated the FCAT, but why they rated it that way. The approach taken throughout our study was to use teachers as a tool to gauge the validity of the FCAT's use, thereby contributing to a validity argument for Florida's entire accountability design. The overall sentiment toward every part of accountability measured in our study with consequences directly attached was disapproving or, at best, indifferent. It seems likely that the consequences of the FCAT directly contribute to a teacher's evaluation of the quality of the FCAT as a measurement tool.

The results show discrepancies in FCAT ratings. In contrast to the 19.5% of teachers who agreed that the FCAT was a "good indicator of the student's level of mastery for required curriculum," 50%, 34.9%, 45.3%, and 25% of teachers agreed that the FCAT measured the most important concepts in reading, writing, mathematics, and science, respectively. This discrepancy could be a result of the wording of the items or a propensity of teachers to underrate the quality of the FCAT based on past experiences. Teachers may have negative feelings toward the assessment because they disagree with the consequences attached to it (70.3% of teachers disagreed that "the high stakes attached to the FCAT are necessary"). Further analysis is needed to draw conclusions on this topic.

Limitations of this Analysis

Response Rate

The response rate was only 28%, a level usually considered unacceptably low. People who respond to surveys have different characteristics from those who do not return them (i.e., respondents tend to be educated and female). Our study's sample was homogeneous (all elementary school teachers, educated, primarily female), which may help reduce this error (i.e., the error may be smaller than it would be with a more heterogeneous sample). However, it is possible that the members of the sample who opted not to participate are more apathetic toward accountability issues. The reasons for declining participation are unknown, contributing significantly to the limitations of this analysis.

The response rate also varied across schools, ranging from 15.2% to 50%. One school returned only five surveys, yielding an unreliable representation of that school. An attempt was made to correct for this error by analyzing similar schools with like responses.

Sampling Issues

One of the schools was from a different district. The implications are mixed. Some results were generalizable across districts. However, the school in central Florida differed on many items from a demographically similar school in north central Florida. The reasons for these deviations are not known; they could indicate pressures placed on teachers directly by principals or district officials.

Suggestions for Future Research

As mentioned, the issues discussed in our study did not generalize entirely across districts. Additional districts and schools across Florida need to be sampled to gather more information on the consequences of high-stakes testing. Another possible angle for future research is examining the factors that contribute to teacher attitudes on accountability. Our study examined how length of teaching experience and school grade related to each respondent's opinions on accountability. Additional studies should delve deeper into the demographics, motivations, and backgrounds of their participants.

Notably, 77.6% of respondents indicated that they "spend extra time in class stressing material that is likely to show up on the FCAT." A closer look at what, according to teachers, constitutes "extra time," and the implications that has for the validity of the assessment's interpretation, would make for an interesting study.

Closing Remarks

We hope that our study gives insight into some of the effects of high-stakes testing on teachers. The NCLB Act places more pressure on teachers than ever before in terms of student achievement. At times, it seems that teachers are being held too accountable for their students' learning. Many teachers expressed their frustrations in the margins of the survey, explaining that much of what they are held accountable for is beyond their control, and that factors like parental support and innate ability contribute more to a child's success than teachers' own efforts. NCLB is still relatively new, and only time will tell the exact benefits and repercussions of the new accountability designs.

APPENDIX
ACCOUNTABILITY AND TEACHER ATTITUDES SURVEY INSTRUMENT

I. The following statements address your thoughts on the No Child Left Behind (NCLB) Act, Florida's A+ Plan, and Florida's accountability system in general. Please indicate your level of agreement with each statement on a scale of 1 (strongly disagree) to 5 (strongly agree).

1. In general, the Florida accountability system works well.
2. Goals set forth by the NCLB Act will most likely be actualized.
3. The NCLB Act has an overall positive impact on the United States.
4. The NCLB Act has an overall positive impact on Florida.
5. The A+ Plan holds schools accountable for their students' learning.
6. The grade attached to each school helps give parents insight on how well that school is operating.
7. Students from low performing schools should be able to transfer to another school.
8. The A+ Plan helps motivate teachers and administrators.
9. I have an impact on the grade my school receives.
10. Administrators have an impact on the grade their school receives.
11. The student body has an impact on the grade their school receives.

II. The following statements concern the Sunshine State Standards (SSS) and the Florida Comprehensive Assessment Test (FCAT). Please indicate your level of agreement with each statement on a scale of 1 (strongly disagree) to 5 (strongly agree).

12. The SSS adequately outline the curriculum content at each grade level.
13. The SSS lay the foundation for a broad curriculum.
14. All the SSS will be taught at one point or another.
15. The FCAT measures the SSS well.
16. The FCAT assesses the most important material at each grade level.
17. Florida is enacting the NCLB Act appropriately with the FCAT.
18. The high stakes attached to the FCAT are necessary.
19. The FCAT would still be taken seriously if there weren't consequences for students.
20. The FCAT would still be taken seriously if there weren't rewards for teachers.
21. The FCAT is a good indicator of the student's level of mastery for required curriculum.
22. I have control over how my students perform on the FCAT.
23. I spend extra time in class stressing material that is likely to show up on the FCAT.
24. I spend more time going over test taking skills now than before the FCAT was established.
25. The FCAT has an overall positive impact on Florida.
26. The FCAT has an overall positive impact on my school.
27. The FCAT has an overall positive impact on my students.
28. Item types (e.g., multiple choice) used on the FCAT are the most appropriate type for each learning objective.
29. Prompts used in the writing section of the FCAT are adequate for measuring the student's overall writing ability.
30. Other subjects (e.g., social studies, art) should be included in the FCAT.

The FCAT measures the most important concepts in:
31. Reading
32. Writing
33. Mathematics
34. Science


III. In the following section, please tell us about yourself.

What grade level do you teach?
If applicable, what subject do you teach?
In what school year did you begin teaching?


LIST OF REFERENCES


Abedi, J. (2004). The No Child Left Behind Act and English language learners:
Assessment and accountability issues. Educational Researcher, 33(1), 4-14.

Cizek, G. (2001). More unintended consequences of high-stakes testing. Educational
Measurement: Issues and Practice, 20(4), 19-27.

Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. New
York: Wadsworth.

Cronbach, L. (1988). Five perspectives on validity argument. In H. Wainer & H. I. Braun
(Eds.), Test validity (pp. 3-17). Hillsdale, NJ: Erlbaum. (As cited by Haertel, E.
(1999). Validity arguments for high-stakes testing: In search of the evidence.
Educational Measurement: Issues and Practice, 18(4), 5-10.)

Florida Department of Education [FDOE]. (2003). Consolidated state application
accountability workbook for state grants under Title IX, Part C, Sec. 9302 for the
Elementary and Secondary Education Act (Pub. L. No. 107-110). March 26.

Florida Department of Education [FDOE]. (2005a). 2004-2005 School Accountability
report. Last retrieved March 2006. Available online at:
http://schoolgrades.fldoe.org/.

Florida Department of Education [FDOE]. (2005b). Fact sheet: NCLB and Adequate
Yearly Progress. Last retrieved March 2006. Available online at:
http://www.fldoe.org/NCLB/FactSheet-AYP.pdf.

Florida Department of Education [FDOE]. (2005c). FCAT Web Brochure. Last retrieved
March 2006. Available online at: http://www.fir.edu/doe/sas/fcat/fcatpubl.htm.

Florida Department of Education [FDOE]. (2005d). Grading Florida Public Schools
2004-2005. Last retrieved March 2006. Available online at:
http://fim.edu/doe/schoolgrades/pdf/schoolgrades.pdf

Florida Department of Education [FDOE]. (2005e). Sunshine State Standards. Last
retrieved March 2006. Available online at:
http://www.firn.edu/doe/curric/prekl2/index.html.

Haertel, E. (1999). Validity arguments for high-stakes testing: In search of evidence.
Educational Measurement: Issues and Practice, 18(4), 5-10.

Haladyna, T., & Downing, S. (2004). Construct-irrelevant variance in high-stakes testing.
Educational Measurement: Issues and Practice, 23(1), 17-26.

Hill, R., & DePascale, C. (2003). Reliability and No Child Left Behind accountability
designs. Educational Measurement: Issues and Practice, 22(3), 12-21.

Kohn, A. (2000). Burnt at the high stakes. Journal of Teacher Education, 51(4), 315-327.

Lane, S. (2004). Validity of high-stakes assessment: Are students engaged in complex
thinking? Educational Measurement: Issues and Practice, 23(3), 6-14.

Lord, F., & Novick, M. (1968). Statistical theories of mental test scores. Reading, MA:
Addison-Wesley. (As cited by Haladyna, T., & Downing, S. (2004). Construct-
irrelevant variance in high-stakes testing. Educational Measurement: Issues and
Practice, 23(1), 17-26.)

Messick, S. (1995). Standards of validity and the validity of standards in performance
assessment. Educational Measurement: Issues and Practice, 14(4), 5-8.

No Child Left Behind Act of 2001, Public Law 107-110, 115 Stat.1425, 107th Congress
(2002).

U.S. Department of Education [USDOE]. (2006a). Budget Office, U.S. Department of
Education. Last retrieved March 2006. Available online at:
http://www.ed.gov/about/overview/budget/index.html?src=az.

U.S. Department of Education [USDOE]. (2006b). No Child Left Behind. Last retrieved
March 2006. Available online at: http://www.ed.gov/nclb/landing.jhtml.

BIOGRAPHICAL SKETCH

Kathryn Miller received a Bachelor of Science degree in psychology from the

University of Central Florida (Orlando) in 2003. She enjoyed the quantitative aspect of

research psychology and decided to minor in statistics. After graduating, she enrolled as

a graduate student at the University of Florida, majoring in research and evaluation

methodology in the Department of Educational Psychology. While in graduate school, she was fortunate to serve as a graduate teaching assistant under Dr. David Miller for the course Assessment in General and Exceptional Education, where she

instructed students on proper assessment procedures. After graduation, Kathryn hopes to

relocate to Boston and work on a research team that investigates issues related to health

and medicine.