The Journal of Correctional Education, 61(1), March 2010


Juvenile Correctional Schools:

Assessment and Accountability

Policies and Practices


Joseph C. Gagnon, Ph.D.
Todd Haydon, LCSW, Ph.D.
Paula Maccini, Ph.D.



Abstract
This study focused on school-level approaches to assessment and accountability
policies and practices. A national random sample of 131 (34.22%) principals from
juvenile correctional schools for committed youth (JC) responded to a mail and
on-line survey. No statistically significant differences existed between respondent and
nonrespondent schools. Results indicated that the majority of students with or without
disabilities in JC schools participated in state assessments. The most common basis of
school policies for assessment accommodations was state accommodation guidelines.
For most JC schools there was no process for accountability for student participation
and performance on state assessments. Almost half of principals did not know if their
school made Adequate Yearly Progress. Other salient results, implications, and
recommendations for future research are presented.

Introduction
In response to thirty years of concern with student achievement on both
national and international assessments, the No Child Left Behind Act (NCLB,
2002) was developed to promote a rigorous education for all students. NCLB
has led to an increased focus on establishing challenging standards, measuring
student learning against those standards, and holding schools and local
education agencies (LEAs) accountable for student achievement (Kohl,
McLaughlin, & Nagle, 2006). The central components of NCLB ensure that all
students participate in state assessments, that assessment accommodations are
appropriately used, and that assessment results are utilized and publicly
reported. NCLB also incorporates accountability for student learning, which
emphasizes student performance on state assessments as a basis for providing
rewards and sanctions to schools (Yell, Shriner, & Katsiyannis, 2006).

The assessment and accountability requirements within NCLB (2002) were
designed to promote a high quality education for all youth, including students
with disabilities, in a variety of educational settings. In fact, the Individuals with
Disabilities Education Improvement Act regulations (IDEIA, 2004) are aligned
with NCLB in an effort to ensure youth with disabilities are also included in the
system of educational accountability.
Despite the intentions behind NCLB (2002) and IDEIA (2004), there are
concerns that the promises of educational assessment and accountability do
not extend to all youth. Limited evidence suggests that adherence to federal
education mandates is a significant and longstanding issue of concern for
youth with and without disabilities in exclusionary school settings (Gagnon &
McLaughlin, 2004; Gagnon, Maccini, & Haydon, 2009). Juvenile correctional
schools for committed youth (JC) are one exclusionary school setting with the
worst record of adhering to federal education reform (Browne, 2003; Coffey
& Gemignani, 1994; Leone, 1994). The mandates in NCLB do not specifically
address the unique characteristics of youth (e.g., mental health, short length
of stay) and systemic difficulties (e.g., security issues) within JC schools (Leone
& Cutting, 2004). However, there is no legal justification for denying youth in
JC schools with and without disabilities the same educational opportunities,
participation in state assessments, and inclusion in accountability measures
that are significant components of public schools. The sections that follow will
briefly discuss the federal mandates as they apply to JC schools and include (a)
participation in state assessments, (b) assessment accommodations, and (c)
accountability.

Participation in State Assessments
NCLB (2002, see PL 107-110 § 1001(4)) requires that students with and without
disabilities participate in state assessments. Participation is viewed as critical to
improving educational opportunities for students and providing information to
schools and communities concerning student performance (Nagle, Yunker, &
Malmgren, 2006; Thurlow, Lazarus, Thompson, & Morse, 2005). One of the
primary ways policy makers aligned IDEIA with NCLB was to require that
students with disabilities also be included in state- or district-wide assessments
(IDEIA, 2004, see PL 108-446 Sec. 612; NCLB, see PL 107-110 § 1001(1)). Over
time, an increasing number of youth with disabilities have participated in state
assessments (Ysseldyke, Dennison, & Nelson, 2004). However, the extent to
which youth with or without disabilities in JC schools participate in state
assessments is unclear.

Assessment Accommodations
Assessment accommodations are defined as alterations to assessment materials
and/or procedures that eliminate the influence of the student's
disability and allow students to show their knowledge (Shriner & Ganguly,
2007). States are required to develop assessment accommodations that are
considered valid and do not nullify student scores (NCLB, 2002, see PL 107-110
§ 704(b)(4)(x); IDEIA, 2004, see PL 108-446 §§ 612(a)(16)(B) and 614(d)(1)(A)(VI)(aa);
Thompson, Johnstone, Thurlow, & Altman, 2005; Zenisky & Sireci, 2007). All
states require schools to adhere to a state list of approved accommodations
and, as such, school personnel must be aware of these policies (Thurlow,
Lazarus, Thompson, & Morse, 2005). Currently, no information is available
concerning the basis for JC school assessment accommodation policies, and
adherence to such policies has not been studied.

Accountability
High-stakes assessments are commonly used to place students in appropriate
classrooms, to decide whether students should be promoted to the next grade, and
to determine whether they should graduate from high school (Ysseldyke, Dennison, &
Nelson, 2004). Currently, with regard to JC schools, there is no information that
identifies common uses of state assessment results. Existing accountability
requirements within NCLB (2002, see PL 107-110 § 1119(b)(1)(A) and (B)) also call
for schools to publicly report student results on state assessments. However, no
post-NCLB research exists that addresses JC school reporting of student scores
on state assessments. As such, school-level approaches taken by JC schools to
use and report state assessment data are unclear.
Rewards and sanctions. The emphasis on education accountability is a
continuation of the Elementary and Secondary Education Act's (ESEA) original
goal to close the achievement gap between disadvantaged students and their
peers, as well as for all students to reach grade-level proficiency in reading and
mathematics by 2014 (Kohl, McLaughlin, & Nagle, 2006). To achieve student
proficiency, NCLB (2002) established state and school accountability that
emphasizes student performance as a basis for rewards and sanctions (NCLB, 2002,
see PL 107-110 § 1111(2)(A)(iii); Yell, Shriner, & Katsiyannis, 2006). Indicators
for school accountability typically include student performance on state
assessments, performance growth on state assessments, attendance rates, and
dropout rates (Bolt, Krentz, & Thurlow, 2002). A significant component of the
accountability system is Adequate Yearly Progress (AYP), which is based
primarily on student scores on state assessments (NCLB, see PL 107-110 §
6161(1); Yell, Shriner, & Katsiyannis). Whether a school achieves AYP typically
results in rewards or sanctions that commonly include providing or withholding
money (Bolt, Krentz, & Thurlow).
AYP, as well as school rewards and sanctions, applies to schools across the
continuum of services, including JC schools. However, it is currently unknown
how JC schools are held accountable, which indicators are used by the state
to determine positive and negative consequences for these schools, and what
percentage of JC schools make AYP. For JC schools to be
included in current accountability reform, it is critical that such questions are
answered.
Currently, there is no research that identifies whether students with and without
disabilities in JC schools are included in current school-level accountability
systems. There is a critical need to understand the extent to which students in JC
schools are benefiting from the emphasis on participation in state assessments,
appropriate use of assessment accommodations, and use of assessment results.
Further, it is important to identify the extent to which these schools are reporting
assessment results publicly and being held accountable for student learning.

Purpose
The purpose of the current national survey of public and private JC school
principals was to identify policies, practices, and philosophies concerning (a)
participation in state assessments, (b) assessment accommodations, (c) how
respondent views toward participation of students in state assessments
compare across achievement of AYP, and (d) school accountability.

Methods
Instrumentation
The current survey of principals was conducted throughout the U.S. and
included four primary sections at the school level: (a) school, principal, and
student characteristics; (b) curriculum policies and practices; (c) assessment
policies and practices; and (d) accountability policies and practices. The current
report focuses solely on assessment and accountability policies and practices.
The other survey topics are discussed in a separate report by Gagnon, Barber,
Van Loan, and Leone (2009).
Three procedures were completed to ensure reliability of survey data. First,
surveys maintained the same format for both hard copy and online versions
(Fink, 1995). Second, to ensure decisions were consistently followed, a
codebook was developed and used during data entry (Litwin, 1995). Finally,
reliability checks were conducted on data entered for 30% (n = 39) of the
131 returned surveys. Reliability was calculated by dividing the number of
agreements by the number of agreements and disagreements and multiplying
by 100%. Reliability for data entry was 99.97%.
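
The percent-agreement calculation described above is straightforward to
reproduce. The following is a minimal sketch in Python; the agreement counts
shown are hypothetical placeholders, since the article reports only the
resulting figure (99.97%).

    def percent_agreement(agreements: int, disagreements: int) -> float:
        """Inter-rater reliability as percent agreement:
        agreements / (agreements + disagreements) * 100."""
        return agreements / (agreements + disagreements) * 100

    # Hypothetical counts for the 39 double-entered surveys; the article
    # reports only the resulting reliability figure.
    print(round(percent_agreement(agreements=2999, disagreements=1), 2))  # 99.97
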
Specific methods were used to increase survey validity. To ensure content
validity, research questions were developed based on a review of literature,
consideration of current educational reform, personal expertise, and discussion
with experts in the field of special education. Next, an advisory group reviewed
and made recommendations regarding the survey and study methodology.
Also, principal focus groups commented on the format and content of the
surveys. The survey was modified based on the advisory group and focus
group feedback.
The survey included questions concerning assessment (e.g., Should students
in your school participate in state assessments? What percentage of students in your
school participate in state assessments? If less than 95% of students participated, why
did students not participate? What percentage of students with disabilities
in your school participate in state assessments? What percentage of students participate
in an assessment that is required by another state? What is the basis of school policies
for assessment accommodations on state assessments? To what extent are state
accommodations that students with EBD or LD have on their Individualized
Education Programs (IEPs) used during classroom instruction?) and school
accountability (e.g., Which describes how results of state assessments are used in your
school? What is the basis for how your school uses state assessment results? Which
indicators are used by the state to determine school consequences? How is your school
held accountable for student participation in state assessments? How does your school
report assessment results for students with disabilities? Did your school make AYP last
school year?).

Sample and Participant Selection
The sample was initially identified using the description of the facilities'
programs. Three criteria needed to be present in the description: (a)
committed/adjudicated youth (court commitment); (b) closed/secure facility; and
(c) education services provided on-site. Variations in JC schools for committed
youth required delineation of specific programs that would not apply. Programs
that did not qualify included juvenile (a) community corrections (i.e., not a secure
care facility), (b) probation, (c) parole, (d) detention, and (e) accountability camps,
as well as (f) programs with no education component, (g) programs with incomplete
or inaccurate information in the database, and (h) programs that were closed or not yet open. As
a secondary check of the universe, state websites were reviewed. As a result,
63 additional facilities were identified from the website information as possibly
meeting the criteria for the study. These facilities were individually called to
ensure they met the criteria for inclusion in the universe. Twelve additional
programs were included based on phone call verification.
During the planning stages, both coverage error and sampling error
were considered. Use of a comprehensive database is an important factor
for reducing coverage error (Cui, 2003). Therefore, the original universe
of juvenile correctional schools for committed youth was taken from the most
comprehensive database available. The 2003 Directory of Adult and Juvenile
Correctional Departments, Institutions, Agencies, and Probation and Parole Authorities
(American Correctional Association, 2003) was the basis of the universe. To
minimize sampling error, the beginning of each survey included two questions
to verify that participants were eligible for the study. The questions included: (a)
Is your school a juvenile correctional school for committed youth? and (b) Does
your school include students in any of grades 7-12?
Potential participants were directed to return the survey without
completing it if they answered no to either of the initial questions.
A total of 483 schools met criteria for inclusion in the study. Based on
concerns with a possible low response rate, 400 of the qualifying schools
were randomly selected. Assuming an approximately 50% (n = 200) response rate,
consistent with other national surveys of exclusionary schools (see Gagnon &
McLaughlin, 2004), responses would approach the 214 needed for a 95%
confidence level and a 5% margin of error. Upon return of the surveys, 17
schools were excluded from the database due to the following issues: (a) not a
JC school (n = 14); (b) facility closed (n = 2); and (c) no grades 7-12 (n = 1). Thus,
the total sample size was 383.
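
The 214-response target is consistent with the standard sample-size formula
for a finite population. Below is a minimal sketch, assuming the usual
normal-approximation formula with p = .5, z = 1.96 (95% confidence), a 5%
margin of error, and the study's universe of N = 483; the exact formula the
authors used is an assumption, as the article does not state it.

    def required_sample(N: int, z: float = 1.96, e: float = 0.05, p: float = 0.5) -> float:
        """Sample size for a finite population of size N, using the
        normal-approximation formula with finite-population correction."""
        n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # infinite-population estimate (384.16)
        return n0 / (1 + (n0 - 1) / N)          # finite-population correction

    print(round(required_sample(N=483)))  # ~214, the target cited in the article
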

Data Collection
Consistent with Heberlein and Baumgartner's (1978) recommendations, multiple
contacts and reminders to participants included an introductory letter, five
survey mailings, and follow-up phone calls that began after the second mailing.
Respondents were able to complete either hard copies of the surveys or
the online version. Additionally, the surveys all included consistent and specific directions,
as well as notation of government sponsorship. Moreover, all principals in the
sample received an incentive at the time of the first survey mailing in the form
of a $2.00 bill. As suggested by Bourque and Fielder (1995), participants were
also assured of confidentiality and anonymity, provided an estimate of the time
needed to complete the survey, details concerning when and how to return the
survey, and contact information in the event they wanted a summary of the
final results.

Respondents and Nonrespondents
The response rate for the survey was 34.22% (131/383). One hundred
twenty-one principals returned hard copies of surveys and another ten
completed the survey online. Respondent and nonrespondent comparisons
were conducted at the school level using information available from the 2003
Directory of Adult and Juvenile Correctional Departments, Institutions, Agencies, and
Probation and Parole Authorities (American Correctional Association, 2003) and,
for 12 schools, the state website. Comparisons were completed across U.S.
Census Bureau region, security level (i.e., maximum, medium, medium/maximum,
minimum, multiple), contract (i.e., private company) or non-contract, and gender
served (i.e., male, female, co-gender). Based on chi-square analysis, no
significant differences were noted for any of the comparisons.
Respondents (n = 131) included principals from 41 states, and all U.S. Census
regions were represented (n = 31 Midwest; n = 19 Northeast; n = 60 South;
n = 21 West). Descriptive data were not available for all variables for all
respondent schools. However, available information indicates that each security
level (n = 109) was represented (n = 38 maximum; n = 34 medium; n = 6
combination medium/maximum; n = 19 minimum; n = 12 multiple levels of
security) and, concerning the type of facility (n = 131), there were 32 contract
facilities and 99 non-contract facilities. Also, with regard to the gender served
at facilities (n = 125), 82 facilities were only for males, 12 were only for females,
and 31 were co-gender.

Data Analysis
Data analysis included descriptive statistics and chi-square analysis. Descriptive
statistics included frequency, percent, mean, standard deviation (SD), and sum,
as appropriate. Chi-square analyses were conducted to compare principal views
of student participation in state assessments (yes, no) across school achievement
of AYP (yes, no, don't know). Cramer's V was used to calculate effect size for
the comparison. An alpha level of .05 was maintained for all data analyses.
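
For readers who wish to replicate this kind of analysis, the following is a
minimal sketch using scipy; the contingency counts are hypothetical
placeholders, since the article reports only that the comparison was
nonsignificant.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2 x 3 table: principal view on whether students should
    # participate (rows: yes, no) crossed with AYP status
    # (columns: yes, no, don't know). Counts are placeholders only.
    table = np.array([[28, 14, 38],
                      [12,  6, 18]])

    chi2, p, dof, expected = chi2_contingency(table)

    # Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
    n = table.sum()
    cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

    print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {cramers_v:.3f}")
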
For seven questions, respondents had the option to write in an Other
response (e.g., If less than 95% of students participated, why
did students not participate? What is the basis of school policies for assessment
accommodations on state assessments? Which describes how results of state
assessments are used in your school? What is the basis for how your school uses
state assessment results? Which indicators are used by the state to determine school
consequences? How is your school held accountable for student participation in state
assessments? How does your school report assessment results for students with
disabilities?). Analysis of data from open-ended responses was completed
using the following procedures: (a) a graduate student identified preliminary
categories through a review of responses and coded each response into one or
more categories; (b) data were independently placed into categories by another
graduate student and categories were modified, as needed; (c) the graduate
students discussed areas of convergence and divergence; (d) categories were
adjusted, added, or deleted, as needed; (e) each graduate student recoded the
data; and (f) a final discussion and calculation of reliability was completed
(Goetz & LeCompte, 1984; Lincoln & Guba, 1985).

Results
Participation in State Assessments
Principals answered questions concerning student participation in state
assessments (see Table 1). Principals commonly asserted that students at their
school should participate in state assessments (n = 89, 68.5%). Overall, respondents
(n = 112) reported that an average of 80.07% (SD = 34.67) of students participated
in state assessments. Also, six principals reported that they did not know the
percentage of students who participated in state assessments. Further, eight
principals reported that participation in state assessments was not applicable.
Concerning students with disabilities, respondents (n = 108) reported an average
participation rate of 83.49% (SD = 33.56). Ten principals reported that they did not know the percentage
of students with disabilities who participated. Another 10 principals noted
that participation in state assessments was not applicable for students with
disabilities. Respondents also reported the percentage of students who
participate in a state assessment required by another state. Of the 121
respondents to this question, 94 reported that none of their students were served from outside the
state. One principal reported that 100% of his or her students were from another
state. In addition, 26 principals noted that they did not know if youth were
served from states other than the one where the facility was located.
Respondents who reported that less than 95% of students participated
in state assessments were asked to check all the reasons that applied to
students' lack of participation (see Table 1). The most common reasons for not
participating in state assessments, listed under the Other category (n = 38),
were that students did not participate due to a student or school exemption (n =
14) and that students were not in a grade that required testing (n = 13). Also listed
within the Other category were (a) student length of stay was too short for
participation (n = 4), (b) safety reasons (n = 3), (c) students had already graduated
(n = 2), and (d) students were preparing for the GED (n = 2).

Table 1. Participation in State Assessments

Characteristics No. (%)
Should Students Participate in State Assessments in Your School?
Yes 89 (68.5%)
No 41 (31.5%)
If Less Than 95% of Students Participate in Assessments, Why Do Students Not Participate?
Not Applicable, All Students Participate 40 (--)
Emotional Distress 7 (--)
The Assessments are Too Difficult 7 (--)
Students are from Another LEA 3 (--)
Students are from Another State 0 (--)
Do Not Have a School Policy 1 (--)
I Don't Know 3 (--)
Other 38 (--)

Note: -- = Percentages not calculated because question asks respondents to "choose all that apply";
LEA = Local Education Agency



Assessment Accommodations
Principals were asked questions concerning the use of state assessment
accommodations during instruction and the basis for these assessment
accommodations (see Table 2). Principals most frequently reported that the
accommodations were used To a Great Extent during instruction (n = 91, 71.7%).
Overall, the average response on the four-point scale (1 = not at all; 4 = to a
great extent) was 3.57 (SD = .79). The most common basis of school policies
for assessment accommodations was state accommodation guidelines (n = 80,
67.2%). Additionally, for the nine principals who noted an Other basis, the
responses were student IEPs (n = 4), students were exempt or the question was
not applicable (n = 4), or the school used a combination of state and federal
guidelines (n = 1).

Table 2. Assessment Accommodations

Characteristics No. (%)
Extent to Which State Accommodations that Students with EBD or LD have
on their IEPs are Used During Classroom Instruction
Not at all 5 (3.9%)
Very Little 9 (7.1%)
Somewhat 22 (17.3%)
To a Great Extent 91 (71.7%)
Don't Know 0 (0%)
Basis for School Policies for Assessment Accommodations on State Assessments
LEA Accommodations Guidelines 17 (14.3%)
State Accommodations Guidelines 80 (67.2%)
School Developed Assessment Accommodations Guidelines 10 (8.4%)
No Identified Basis for Assessment Accommodations 1 (.8%)
Don't Know 2 (1.7%)
Other 9 (7.6%)

Note: LEA = Local Education Agency



Participation and Characteristics
Respondent views toward participation of students in state assessments were
compared across achievement of AYP. No statistically significant association was
noted between achievement of AYP and whether principals asserted that students
should participate in state assessments.

Accountability
Respondents reported all applicable uses of state assessments and factors
on which the uses of assessments are based (see Table 3). Most commonly,
principals reported that state assessment results were used to adjust instruction
or curriculum (n = 64) and to identify areas in which school performance is
acceptable and where improvement is needed (n = 63). Where principals noted
an Other use for state assessment results, responses included school is exempt/
not applicable (n = 7), develop individual plans of instruction (n = 2), and
obtain accreditation (n = 1). The most common basis for schools' use of state
assessments was state guidelines (n = 54). The next most frequent response
concerning the use of assessment results was school-developed guidelines (n =
37). Of the respondents, 59 reported a basis other than State Education Agency
(SEA) or Local Education Agency (LEA) guidelines. Some principals noted an
Other basis for using state assessments; these responses included
agency-developed guidelines (n = 2) and exemption of the school from state
assessments (n = 9).

Table 3. Basis and Use of Assessment Results

Characteristics No. (%)
How Results of State Assessments are Used in Your School
Adjust Instruction or Curriculum 64 (--)
Make Decisions Regarding Student Placement within the School 21 (--)
Make Decisions Regarding Student Grade-Level Promotion 18 (--)
Make Decisions Regarding Student Return to Public or Home School 12 (--)
Evaluate Teachers 13 (--)
Identify Areas in Which School Performance is Acceptable and Where 63 (--)
Improvement Is Needed
Results are not used at the School Level 26 (--)
Don't Know 3 (--)
Other 12 (--)
Basis for How School Uses State Assessment Results
LEA Guidelines 29 (--)
State Guidelines 54 (--)
School Developed Guidelines 37 (--)
No Identified Guidelines 23 (--)
Don't Know 8 (--)
Other 12 (--)

Note: -- = Percentages not calculated because question asks respondents to "choose all that apply";
LEA = Local Education Agency



Concerning reporting of assessment results for students with disabilities,
principals reported that the most common approaches were reporting results
to the state as part of aggregate data (n = 63) and at the school level (n = 55).
The two most common Other responses were that the school was exempt from reporting
(n = 5) and that results were reported to an education liaison or case worker (n = 2).

Table 4. Reporting and Accountability

Accountability Issue No. (%)
How School Reports State Assessment Results for Students with Disabilities
Results Reported at the School Level 55 (--)
Results Reported to the LEA as Part of Aggregate Data 37 (--)
Results Reported to the State as Part of Aggregate Data 63 (--)
Individual Results Reported to Individual Parents/Guardians 43 (--)
Results Reported to Student's Home School and LEA 37 (--)
Results Not Reported 1 (--)
Don't Know 9 (--)
Other 16 (--)
Did School Make Adequate Yearly Progress
Yes 40 (34.5%)
No 20 (17.2%)
Don't Know 56 (48.3%)
Indicators Used by State to Determine School Consequences
Student Participation Rates on State Assessments 57 (--)
Scores on State Assessments 48 (--)
Improvement on State Assessment Scores 43 (--)
Attendance 42 (--)
Dropout Rates 21 (--)
Graduation Rates 29 (--)
Don't Know 28 (--)
Other 25 (--)
How School is Held Accountable for Student Participation in State Assessments
School Sanctions Based on Student Participation and Performance 22 (--)
on State Assessments
Monetary Incentives Based on Student Participation and Performance 9 (--)
on State Assessments
No Formal Process to Hold Schools Accountable for Student Participation 42 (--)
and Performance on State Assessments
Don't Know 24 (--)
Other 37 (--)

Note: -- = Percentages not calculated because question asks respondents to "choose all that apply";
LEA = Local Education Agency



As shown in Table 4, respondents answered three questions concerning
accountability for student learning: (a) if the school made Adequate Yearly
Progress (AYP) in the previous school year; (b) the methods used to hold schools
accountable for participation in state assessments; and (c) indicators used by
the state to determine school consequences. While some respondents noted
that their school did make AYP (n = 40, 34.5%), the most frequent response
was that principals did not know if their school made AYP (n = 56, 48.3%).
Concerning methods of holding schools accountable, the most common response from
JC school principals was that there was no process for accountability for student
participation and performance on state assessments (n = 42). An additional 24 principals did
not know how their school was held accountable. For those respondents who
reported Other methods of being held accountable, answers included: (a) LEA,
home school, or district is accountable (n = 8); (b) state or Dept. of Juvenile
Justice requirements (n = 7); (c) school is exempt (n = 13); (d) student portfolios
(n = 1); (e) graduation requirements (n = 1); (f) quality review (n = 4); (g) AYP
(n = 1); (h) state assessment (n = 1); and (i) achievement tests (n = 1).
The most common indicators used by the state to determine school
consequences were student participation rates on state assessments (n = 57)
and student scores on state assessments (n = 48). Also, 28 principals did not
know the indicators used by the state. Few patterns were noted for Other
indicators reported by respondents. Responses included: (a) consequences
based on student classroom behavior (n = 1); (b) no indicators are used (n =
10); (c) another agency/home district determines consequences (n = 8); (d)
percentage of students that complete a GED (n = 2); (e) the extent to which
there is minority overrepresentation (n = 1); (f) report cards/progress notes
(n = 1); and (g) reading/math improvement (n = 1).

Discussion
The results of the current investigation supply the first national picture of
assessment and accountability policies and practices in JC schools. Results
indicate that numerous concerns must be addressed to ensure youth with
and without disabilities in JC schools are included in current assessment and
accountability systems. The discussion focuses on three key areas concerning
JC school policies and practices: (a) participation in state assessments,
(b) assessment accommodations, and (c) accountability.

Participation
Only about 69% of principals asserted that students in their JC school should
participate in state assessments. However, no statistically significant association
was noted between achievement of AYP and whether principals asserted that students
should participate in state assessments. Two conclusions can be made in light of
the lack of significance. First, principals' personal views may not be a critical factor
when school assessment policies and practices are developed and implemented.
Alternatively, there could be a lack of LEA and SEA oversight of schools (see Gagnon,
Barber, Van Loan, & Leone, 2009) that affects the validity of school attainment
of AYP. Insufficient oversight appears even more plausible when
considering that almost half of principals in the current study did not know if
their school achieved AYP. One could assume that, if JC schools were accountable
for achieving AYP, principals would be aware of their school's progress.
In contrast to principal views of whether students should participate in
state assessments, it was reported that a higher percentage of students in JC
schools actually participated. However, it remains a concern that only slightly
more than 80% of students with or without disabilities in JC schools
participated in state assessments. Neither NCLB (2002) nor IDEIA (2004)
contains provisions that specifically allow students with or without disabilities
in JC schools to forgo participation in state assessments. As such, it is expected
that, consistent with NCLB mandates, at least 95% of students in JC schools
would participate in state assessments. One possible complication of student
participation could be that JC schools provide education to students from other
states (Gagnon, Barber, Van Loan, & Leone, 2009). In the current study, however,
only one JC school had students from another state. What is particularly
interesting is that approximately one-fourth of principals did not know if youth
at their facility were from other states. A lack of principal knowledge could
inhibit student participation in another state's assessment, if needed.
Principals were asked why, if less than 95% of students participated in
state assessments, some students did not participate. The most common answer
was that all students participate in state assessments (n = 40). Principals
reported varied reasons for lack of student participation. The most frequent
response, by 17 principals, was that students did not participate due to an
individual or school exemption. For the principals who cited exemptions,
additional research is necessary to understand the context surrounding these
exemptions and by whom the exemptions were given. Overall, the remaining
principal explanations were not ones that states typically accept as valid. For
example, principals identified that students did not participate due to emotional
distress; a reason for non-participation that is not valid in any state (Lazarus,
Thurlow, Lail, Eisenbraun, & Kato, 2006). Principals reported other reasons for
exemption, including the assessments were too difficult, students were from
another LEA, and safety reasons. The varied responses from principals indicate
that there are several complications for JC schools concerning student
participation in state assessments. What is clear is that principals of JC schools
are in need of guidance and policies that maintain federal requirements and
also take into consideration the unique attributes of JC schools.

Assessment Accommodations
Principals were asked about the use of assessment accommodations in class,
as well as the basis of assessment accommodations. Only 71.7% of respondents
answered that, for youth with EBD and LD, assessment accommodations were
used in class To a Great Extent. When students are not provided opportunities to
apply appropriate accommodations on a regular basis, they might not be able
to demonstrate what they know and can do (Shriner & Ganguly, 2007). Slightly
more than two-thirds of principals responded that the basis of
school policies for assessment accommodations was state accommodation
guidelines. Results also indicate that close to 20% of schools did not base
accommodations on SEA or LEA guidelines. This raises concern because when
schools do not use LEA or SEA guidelines as a basis for accommodation decisions,
the reporting of student scores may be affected (Thompson et al., 2005).
Specifically, use of school-developed assessment accommodations may result
in the use of unapproved assessment accommodations, which could invalidate
student assessment scores (Malmgren, McLaughlin, & Nolet, 2005). However, it
is not clear from the current data that those schools that did not use SEA or LEA
guidelines necessarily used assessment accommodations that did not align with
the district and/or state. Additional research is needed to provide a definitive
statement of school decisions regarding assessment accommodation choices.

Accountability
Using assessment results. The two most common responses reported by
principals for describing how results of state assessments are used in their
schools were to adjust instruction or curriculum (n = 64) and to identify
areas of acceptable school performance and needed improvement (n = 63).
It is encouraging that principals are appropriately using assessment results.
However, some principals reported using state assessments to place students
within their school or to decide if a student should return to public school. In
fact, classroom placement at the secondary level should be based on student
age/grade and courses needed for graduation. Moreover, it is wholly
inappropriate to rely on state assessment results to establish if a student should
return to their regular public school upon release from a JC school. No provision
exists within IDEIA (2004) or NCLB (2002) that ties formerly incarcerated youths'
success on state assessments to the right to attend public school upon release.
Additionally, it is a concern that some principals did not use assessment results.
When state assessment results are not utilized, students are less likely to
advance to the next grade or graduate from high school (Thurlow & Johnson,
2000; Ysseldyke, Dennison, & Nelson, 2004).
Principal responses varied with regard to the basis for their schools' use of
state assessment results. The most frequent basis was state guidelines (n = 54)
followed by school-developed guidelines (n = 37), and LEA guidelines (n = 29).
A smaller number of principals reported that there were no guidelines used as a
basis of their schools' use of assessments (n = 23). The use of school-developed
guidelines raises questions as to whether school policies are aligned with
approved and recommended LEA and SEA approaches to using state
assessments. What is more alarming is that many JC schools have no identified
guidelines for using state assessment results and that some principals were
unaware of the existence of any guidelines on which to base their use of state
assessment. The lack of guidelines runs contrary to the emphasis within NCLB
(2002) for the standardization of school accountability.
Reporting assessment results. Most responding principals noted that for
students with disabilities, scores on state assessments were reported (a) to the
state as part of aggregate data (n = 63), (b) at the school level (n = 55), and (c)
to individual parents/guardians (n = 43). Less than half of responding
principals noted using any single method of reporting assessment results.
That relatively few JC schools report results in any particular manner raises
concern because reporting of student assessment results is a key component
of accountability (Gagnon & McLaughlin, 2004). When schools do not report
scores for some students on state assessments, the message is that certain
students are less important and do not "count" (National Center on Educational
Outcomes, 2008). In fact, all 50 states have a method for reporting proficiency
of students with disabilities on state assessments (VanGetson & Thurlow, 2007).
It is true, however, that schools are not required to report the scores of
students with disabilities if the number of students in a school does not meet
state requirements (Leone & Cutting, 2004). The current study does not identify
current levels of students with disabilities and whether they are exempt from
reporting state assessment results. What is clear is that many JC schools are
never held accountable because there is no public reporting of assessment
results. The lack of public reporting makes it difficult to know if these schools
are successfully educating our most troubled youth.
Holding schools accountable. The most common response from principals (n = 42)
was that there was no process for accountability for student participation and performance on state
assessments. An additional 24 principals did not know how their school was
held accountable. Also of significant concern is that only about 35% of JC
schools reportedly made AYP and another 48% of principals did not know if
their school made AYP. The lack of accountability, principal knowledge related
to accountability, and school success in achieving AYP are all troubling issues.
Concerns are amplified when considering that, nationally, 81% of JC schools
report being accredited by their SEA (Gagnon, Barber, Van Loan, & Leone, 2009).
It is difficult to conjecture the reasons for the gaps in accountability and
principals' understanding of accountability. However, some issues may shed
light on the current status of JC schools. For example, recent research indicated
that approximately one-third of JC principals are not certified as principals or
administrators (Gagnon, Barber, Van Loan, & Leone, 2009). Although the link
may be indirect, certification is one common measure of principal qualification
(Gates et al., 2003). It is also possible that accountability for student learning
in JC schools is affected by minimum subgroup sizes. Specifically, there may
be a relatively high percentage of youth in a facility in special education, as
compared to the total population of that facility (Gagnon, Barber, Van Loan, &
Leone). However, if the subgroup of students classified as special education is
still below the minimum subgroup size that is set by the state, this could affect
whether these students are included in accountability measures. "In other
words, if a school (or district) does not have the minimum number of students
for a subgroup, that subgroup is treated as meeting AYP for the purposes of
determining whether the school (or district) met AYP" (Cortiella, 2007, p. 18). In
effect, an entire school with a small population and high percentage of students
with disabilities could be overlooked with regard to AYP.
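
The minimum-subgroup-size rule quoted from Cortiella (2007) reduces to a
simple conditional. The sketch below illustrates that logic under assumed
values (a minimum n of 30 and hypothetical school counts); it reflects no
particular state's actual policy.

    def subgroup_counts_toward_ayp(subgroup_size: int, minimum_n: int = 30) -> bool:
        """Per the rule quoted from Cortiella (2007), a subgroup smaller than
        the state's minimum n is simply treated as meeting AYP, i.e., it is
        effectively excluded from the school's AYP determination."""
        return subgroup_size >= minimum_n

    # Hypothetical small JC school: 45 students, 20 with disabilities.
    # With an assumed state minimum of 30, the disability subgroup would
    # not count toward AYP despite being nearly half of enrollment.
    print(subgroup_counts_toward_ayp(20))  # False
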
Principals reported two common indicators used by the state to determine
school consequences: (a) student participation rates on state assessments
(n = 57); and (b) student scores on state assessments (n = 48). However, 28
principals did not know the indicators used by the state. It can be assumed that
those principals who are unaware of the indicators used by the state are uninformed
because such consequences do not apply to their school.

Limitations
Two limitations to the study existed. First, similar to other survey research
(Crawford & Tindal, 2006; Donovan & Nickerson, 2007), the response rate
of 34.22% was less than the typically accepted 50% rate for mail surveys
(Weisberg, Krosnick, & Bowen, 1989). However, there was a concerted effort to
ensure appropriate power and a randomized sample. In addition, the reliability
of the results is indicated by the fact that no significant differences existed
across responding and nonresponding schools. The second limitation was
the possibility of principals reporting information that may not be accurate.
However, principals made some rather unsettling admissions that included a
lack of knowledge on several issues. For example, the most common response
to whether their school made AYP was Don't Know. This admission supports
the idea that principals were willing to indicate their shortcomings, when
appropriate. In light of these factors, it is appropriate to assert that the study
results are representative of JC schools for committed youth throughout the U.S.

Implications
Future Research
The current study provides an initial and troubling snapshot of assessment and
accountability policies and practices within JC schools. However, several issues
require additional investigation using a variety of research methodologies
within the central areas of participation in state assessment, assessment
accommodations, reporting and using assessment results, and school
accountability. For example, in-depth case study, observation, and interview
research are needed to identify why principals often believe that many students
should not participate and, in fact, do not participate in state assessments while
in JC schools. Additional studies would also provide data concerning the
situations under which JC schools can obtain an exemption or waiver for
student participation in state assessments. As such, the reasons for and
methods of obtaining exemptions are areas in which there is a great need for
research. Moreover, data are needed concerning the extent to which LEAs, SEAs,
and correctional education agencies in charge of education delineate policies
and practices for youth with disabilities in JC schools that are consistent with
NCLB (2002) and IDEIA (2004).
It is possible, given the complex academic, social, and emotional needs
of youth with and without disabilities in JC schools, that a large number of
students in this setting may participate in a state assessment that is based on
modified achievement standards. The 2007 NCLB regulations, which were
published after the current study was conducted, allow for such assessments.
However, it is unclear how this provision will alter assessment of these vulnerable
youth. Additional research is needed to monitor the potentially disproportional
effects of this policy on youth in JC schools.
Results from the current study indicate a number of other future research
directions for student assessment accommodations. Similar to other educational
settings, there are concerns that students in JC schools may not be receiving
the accommodations that are listed on their IEPs during the actual assessment
(Bottsford-Miller, Thurlow, Stout, & Quenemoen, 2006; Shriner & DeStefano,
2003). Future research could evaluate fidelity of teacher implementation of
accommodations by using a combination of student and teacher surveys, direct
observations of teachers, and IEP review. Also, research is needed to discover
the actual accommodations provided by JC schools, how teachers are using
accommodations, and the types of students to whom accommodations are
given. Additionally, Shriner and Ganguly (2007) noted the importance of
providing assessment accommodations based on student-specific characteristics,
as well as individual test items. Future research should assess the extent to
which student characteristics and test items are considered when making
assessment accommodation decisions, as well as "which accommodations are
appropriate for which individuals" (Thurlow, Thompson, & Lazarus, 2006, p. 665).
The present research sets the stage for additional investigations concerning
the use of state assessment results. For example, information is needed
concerning the alignment of school developed versus LEA or SEA guidelines
for use of state assessment results. Moreover, additional research can lead to
an understanding of why schools develop their own guidelines, what they use
to develop guidelines, and what they do in lieu of either having or knowing
guidelines to assist in the appropriate use of state assessment results.
Future research is also necessary to identify key accountability issues and
the extent to which JC schools: (a) provide an assessment that is aligned with
grade-level standards; (b) report results according to students who achieved at
levels of basic, proficient, or advanced; (c) disaggregate data for youth with
disabilities; (d) establish specific performance objectives for subgroups; and (e)
measure AYP according to subgroups of students who achieve proficient levels
(Malmgren, McLaughlin, & Nolet, 2005). For example, one of the main
requirements of NCLB is public reporting of student performance on state
assessments that are disaggregated by several groups, including students with
disabilities (National Center on Educational Outcomes, n.d.). However, it remains
unclear if JC schools are reporting disaggregated assessment results and if they
are not, what barriers exist that inhibit the appropriate reporting of results. Schools
are not obligated to report the scores of students with disabilities if the number
of students in a school does not meet state requirements (Leone & Cutting, 2004).
Additional issues may exist in light of recent amendments to NCLB (see Title
I-Improving the Academic Achievement of the Disadvantaged; Individuals with
Disabilities Education Act (IDEIA), 2007). It is unclear how requirements for
minimum numbers of students and requirements for the same number of students
in subgroups will affect JC schools, in which there are typically relatively small
numbers of students. Conceivably, the low number of students in JC schools
could exclude students from important accountability requirements and AYP.
Future research should identify the number of students per school in subgroups
identified by IDEIA in order to determine if results are appropriately reported.
In the current study, almost half of schools were not held accountable for
student learning. Some states allow accountability measures for special schools
that differ from those for public schools or allow these schools to choose their
own accountability measures (Bolt, Krentz, & Thurlow, 2002). The allowed
variation in accountability measures may explain some of the variability in
how JC schools are held accountable and the reasons that many JC schools
are not held accountable. While these variations may take specific JC school
characteristics into consideration, this is not definitively known. Additional
information is necessary to identify the reasons that JC schools are functioning
outside of current accountability processes.

Conclusions
The education of youth in JC schools is a challenge that is often complicated
by a high percentage of youth with disabilities (Quinn, Rutherford, Leone, Osher,
& Poirier, 2005). As such, adherence to IDEIA (2004) and NCLB (2002) may be
difficult (Leone & Cutting, 2004). However, the stakes are extremely high for youth
with and without disabilities in JC schools, and academic failure may contribute
to lifelong problems with recidivism and joblessness (Bureau of Labor Statistics,
2001; Katsiyannis & Archwamety, 1999; U.S. Department of Labor, 2003).
Unfortunately, results of the current study provide a disturbing picture of
both a lack of principal knowledge and a lack of consistent assessment and
accountability policies and practices in JC schools. The concerns are amplified
when considering that over 80% of JC schools nationally are accredited by their
State Department of Education (Gagnon, Barber, Van Loan, & Leone, 2009).
Clearly, there is a need for principal professional development and
communication with LEAs and SEAs concerning assessment and accountability
policies. In fact, SEA Directors of Special Education have recognized the positive
effects that increased communication can have on such issues as student
participation in state assessments (Thompson et al., 2005). Moreover, a clear
link between adherence to IDEIA and NCLB should be an integral part of school
accreditation. It is through communication and holding schools accountable
that youth in JC schools will be assured of access to a free and appropriate
public education.

References
American Correctional Association. (2003). The 2003 directory of adult and juvenile
correctional departments, institutions, agencies, and probation and parole authorities.
Alexandria, VA: Author.
Bolt, S., Krentz, J., & Thurlow, M. (2002). Are we there yet? Accountability for the performance
of students with disabilities (NCEO Technical Report 33). Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes.
Bottsford-Miller, N., Thurlow, M. L., Stout, K. E., & Quenemoen, R. F. (2006). A comparison
of IEP/504 accommodations under classroom and standardized testing conditions: A
preliminary report from SEELS data (Synthesis Report 63). Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes.
Bourque, L. B., & Fielder, E. P. (1995). How to conduct self-administered and mail surveys.
Thousand Oaks, CA: Sage.
Browne, J. (2003). Derailed: The schoolhouse to jailhouse track. Washington, DC:
Advancement Project.
Bureau of Labor Statistics. (2001). Employment experience of youths: Results from the first three
years of a longitudinal survey. Washington, DC: U.S. Department of Labor.
Coffey, O. D., & Gemignani, M. G. (1994). Effective practices in juvenile correctional education: A
study of the literature and research, 1980-1992. Washington, DC: U.S. Department of
Justice. The National Office for Social Responsibility.
Cortiella, C. (2007). Rewards & roadblocks: How special education students are faring under No
Child Left Behind. National Center for Learning Disabilities.
Crawford, L., 8 Tindal, G. (2006). Policy and practice: Knowledge and beliefs of education
professionals related to the inclusion of students with disabilities in a state
assessment. Remedial and Special Education, 27, 208-217.
Cui, W. W. (2003). Reducing error in mail surveys. Practical Assessment, Research &
Evaluation, 8(18), 1-5. Retrieved January 2, 2006 from
http://PAREonline.net/getvn.asp?v=8&n=18
Donovan, S. A., & Nickerson, A. B. (2007). Strength-based versus traditional social-emotional
reports: Impact on multidisciplinary team members' perceptions. Behavioral Disorders,
32, 228-237.
Fink, A. (1995). The survey handbook (Vol. 1). Thousand Oaks, CA: Sage.
Gagnon, J. C., Barber, B. R., Van Loan, C. L., & Leone, P. E. (2009). Juvenile correctional
schools: Characteristics and approaches to curriculum. Education and Treatment of
Children, 32, 673-696.
Gagnon, J. C., Maccini, P., & Haydon, T. (2009). Secondary day treatment and residential
schools: Assessment and accountability policies and practices. Unpublished manuscript.
Gagnon, J. C., & McLaughlin, M. J. (2004). Curriculum, assessment, and accountability in
day treatment and residential schools. Exceptional Children, 70, 263-283.
Gates, S. M., Ringel, J. S., Santibañez, L., Ross, K. E., & Chung, C. H. (2003). Who is leading our
schools? Santa Monica, CA: RAND.
Goetz, J. P., & LeCompte, M. D. (1984). Ethnography and qualitative design in educational
research. Orlando, FL: Academic Press.
Heberlein, T. A., & Baumgartner, R. (1978). Factors affecting response rates to mailed
questionnaires: A quantitative analysis of the published literature. American
Sociological Review, 43, 447-462.
Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108-446, 118 Stat.
2658 (2004).
Katsiyannis, A., & Archwamety, T. (1999). Academic remediation/achievement and other
factors related to recidivism rates among delinquent youth. Behavioral Disorders, 24,
93-101.
Kohl, F. L., McLaughlin, M. J., & Nagle, K. (2006). Alternate achievement standards and
assessments: A descriptive investigation of 16 states. Exceptional Children, 73, 107-123.
Lazarus, S. S., Thurlow, M. L., Lail, K. E., Eisenbraun, K. D., & Kato, K. (2006). 2005 state
policies on assessment participation and accommodations for students with disabilities
(Synthesis Report 64). Minneapolis, MN: University of Minnesota, National Center on
Educational Outcomes.
Leone, P. E. (1994). Education services for youth with disabilities in a state-operated juvenile
correctional system: Case study and analysis. The Journal of Special Education, 28(1), 43-58.
Leone, P. E., & Cutting, C. A. (2004). Appropriate education, juvenile corrections, and No
Child Left Behind. Behavioral Disorders, 29, 260-265.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
Litwin, M. S. (1995). How to measure survey reliability and validity. Thousand Oaks, CA: Sage.
Malmgren, K. W., McLaughlin, M. J., & Nolet, V. (2005). Accounting for the performance of
students with disabilities on statewide assessments. The Journal of Special Education,
39, 86-96.
Nagle, K., Yunker, C., & Malmgren, K. W. (2006). Students with disabilities and
accountability reform: Challenges identified at the state and local levels. Journal of
Disability Policy Studies, 17(1), 28-39.
National Center on Educational Outcomes. (n. d.). Special topics area: Participation of students
with disabilities. Retrieved February 13, 2008 from
http://cehd.umn.edu/nceo/topicareas/standards/standardsfaq.htm
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Quinn, M. M., Rutherford, R. B., Leone, P. E., Osher, D. M., & Poirier, J. M. (2005). Youth with
disabilities in juvenile corrections: A national survey. Exceptional Children, 71, 339-345.
Shriner, J. G., & DeStefano, L. (2003). Participation and accommodation in state assessment:
The role of individualized education programs. Exceptional Children, 69, 147-161.
Shriner, J. G., & Ganguly, R. (2007). Assessment and accommodation issues under the No
Child Left Behind Act and the Individuals with Disabilities Education Improvement
Act. Assessment for Effective Intervention, 32, 231-243.
Thompson, S. J., Johnstone, C. J., Thurlow, M. L., & Altman, J. R. (2005). 2005 state special
education outcomes: Five years into the decade means many changes. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., & Johnson, D. R. (2000). High-stakes testing of students with disabilities.
Journal of Teacher Education, 51, 305-314.
Thurlow, M. L., Lazarus, S. S., Thompson, S. J., & Morse, A. B. (2005). State policies on
assessment participation and accommodations for students with disabilities. The
Journal of Special Education, 38, 232-240.
Thurlow, M. L., Thompson, S. J., & Lazarus, S. S. (2006). Considerations in the
administration of tests to special needs students: Accommodations, modifications,
and more. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development.
Mahwah, NJ: Lawrence Erlbaum Associates.
Title I-Improving the Academic Achievement of the Disadvantaged; Individuals with Disabilities
Education Act (IDEIA), 34 C.F.R. 72, § 200-30 (2007).
U.S. Department of Labor. (2003). Educational Resources: So you are thinking about dropping
out of school? Washington, DC: Author. Retrieved January 14, 2005 from
http://www.dol.gov/asp/fibre/dropout.htm
VanGetson, G. R., & Thurlow, M. L. (2007). Nearing the target in disaggregated subgroup
reporting to the public on 2004-2005 assessment results (Technical Report 46).
Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Weisberg, H. F., Krosnick, J. A., & Bowen, B. D. (1989). An introduction to survey research and
data analysis (2nd ed.). Glenview, IL: Scott, Foresman and Company.
Yell, M. L., Shriner, J. G., & Katsiyannis, A. (2006). Individuals with Disabilities Education
Improvement Act of 2004 and IDEA Regulations of 2006: Implications for educators,
administrators, and teacher trainers. Focus on Exceptional Children, 39(1), 1-24.
Ysseldyke, J., Dennison, A., & Nelson, R. (2004). Large-scale assessment and accountability
systems: Positive consequences for students with disabilities (Synthesis Report 51).
Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Zenisky, A. L., & Sireci, S. G. (2007). A summary of the research on the effects of test
accommodations: 2005-2006 (Technical Report 47). Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes.




Biographical Sketch
JOSEPH GAGNON is an Assistant Professor in the Special Education Department at the
University of Florida. Dr. Gagnon's research includes a focus on curriculum, assessment,
and accountability policies and practices in juvenile correctional schools for committed
youth, as well as day treatment and residential psychiatric schools.

This research was funded by Grant #H324C030043 from the U.S. Department of Education,
Office of Special Education & Rehabilitative Services (OSERS).