Title: Mel Lucas (FCAT 1)
Permanent Link: http://ufdc.ufl.edu/UF00093288/00001
 Material Information
Title: Mel Lucas (FCAT 1)
Physical Description: Book
Language: English
Creator: Genevieve Shurack (interviewer)
Publication Date: February 8, 2005
 Record Information
Bibliographic ID: UF00093288
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.

Full Text





Interviewee: Mel Lucas
Interviewer: Genevieve Shurack
Date: February 8, 2005

S: Today is February 8, 2005, and I am here with Dr. Mel Lucas, who is Head of
Research and Evaluation [for the Alachua County School Board]. This is on the
FCAT. How did you first get involved in education?

L: Well, about thirty-five years ago, I was finishing a master's degree in general
experimental psychology, and I was looking for a doctoral program to enter in
psychology. The psychology field was kind of glutted at the time. There were
good opportunities and good fellowships available in the College of Education for
a Ph.D. in educational research. It was kind of a practical decision to go where
the money and opportunity were. That's how I got involved in education, by
getting that Ph.D. in educational research. The money for that fellowship came
from the fairly newly-enacted Title I program, the first big federal money spent for
education years ago with the Great Society. So that's where the money came
from. The need for more educational research came from the requirements of
that legislation to evaluate the quality of Title I programs nationwide. The federal
government was reluctant about getting involved in education, and one of the
compromises made during the legislation was that the expenditure of federal
money towards education would be thoroughly evaluated at the local level. That
created a lot of need for researchers in education. More money for graduate
training and that's what got me into this and where I've been ever since.

S: What does your job entail?

L: The primary responsibility is running the state and district-wide testing program in
Alachua County. I also do research on topics of interest within our educational
community. I evaluate educational programs to see if the students are benefiting.
I do some other research-related activities, but it's mainly research, evaluation,
and testing.

S: I understand that the FCAT is part of an assessment program that started in
1972. What types of assessment did the state have before?

L: In the early 1970s in Florida, I've been trying to think of where the impetus came
from; I suppose it came out of the Cold War competition between the United States and
Russia. When Sputnik [Soviet satellite launched in 1957] was launched, there
was a realization in this country that we had fallen behind in the race for space.
There was a big emphasis on training more engineers and from that came more
scrutiny on the quality of public education systems. Books were being written, I
think, A Nation At Risk was one of the early ones that was written. [It] exposed, I
think for the first time, that this country's public educational system was lagging
behind. In response to those warnings, many state legislatures took steps to
"beef" up the quality of public schooling. The early attempts of that in Florida was









FCAT1: Mel Lucas, Page 2


with their first state-wide testing program, a basic skills test. Like its name, it
focused just on the basic skills. It was not a comprehensive assessment of
reading and mathematics, but an assessment of only basic skills. It was given at
several grade-levels: elementary, middle, and high school. There were some
mild stakes attached to it; later in [our] discussion, I'll bring in the impact of
that early testing and how it's affecting FCAT. Also, in the mid-1970s, for maybe the
first time in the nation, Florida instituted what they called, at that time, the
Functional Literacy Test; high school students had to pass it to earn a high school
diploma. That's the early
stages of the state's assessment program; it wasn't called the FCAT until much
later.

S: You said that this was a basic test of skills. Are the basic skills reading, writing, and
math?

L: Just mathematics, they didn't have writing at the time.

S: So, no reading at all?

L: There's reading and mathematics.

S: So, fast-forward a couple of years, and prior to the FCAT, high school students
were required to take the High School Competency Exam.

L: Yes, HSCT.

S: What was this like?

L: The first name of the high school graduation test was Functional Literacy Test,
and then that name evolved into State Student Assessment Test, Part Two.
Then they eventually brought over the High School Competency Test. The
philosophy and the test itself didn't change too much during that period of time. It
was mainly a change in name, but the test was still measuring a fairly low level of
reading and mathematics skills. I think we could characterize all that early
testing in Florida as kind of a basic skills test of reading and mathematics. It was
not a comprehensive test. Depending on your questions later, I can bring in why
that was important in setting the groundwork for the FCAT.

S: The state decided to phase out the HSCT. What were the reasons behind that?

L: I think it's better to think of it this way: the limitations of the early testing
program, of the HSCT and of the basic skills tests at the other grade levels, were
being acutely felt within the education community. What was being
learned was that, if the state had a test and they associated with it any kind of
stakes--stakes for the students or stakes for the schools and teachers in the
schools--that [the test] was very powerful in determining what the teachers
taught. In other words, the teachers would respond in their curriculum, and alter
their curriculum to cover the content of the test that was being used. That was
having, in Florida, a deleterious effect on the quality of education. The test was
narrow in scope and not very challenging with respect to the cognitive levels of
the skills that were being tested. So, it resulted in a shrinking of the curriculum at
the school level into a more narrow curriculum focused on basic skills. That kind
of narrow curriculum does not serve the community well. It was realized the
state test was very much responsible for the narrowing of the curriculum and
dumbing-down the curriculum.
So the FCAT was devised to measure a comprehensive set of skills. A
very conscious effort was made state-wide to define specifically the knowledge
and skills we want our children in Florida to learn in all the content areas:
reading, writing, mathematics, science, and social studies. Those skills were
defined precisely, grade-by-grade, so there is a very complete and
comprehensive definition of the skills we want our children in Florida to learn in
public schools. The FCAT test followed along behind that and was designed to
be a comprehensive measure of those curriculum areas. So far, there are
assessments in mathematics and reading that are comprehensive across grades
three through ten. Science has just been recently developed in grades five,
eight, and eleven, and, of course, the writing test in [grades] four, eight, and ten.
Florida has a very thorough comprehensive assessment of a well-defined family
of skills and it assesses those at all levels of cognitive complexity. The test is
broad in its content that is covered and it's deep with respect to the skill levels
that are being assessed. That philosophy carried over to the high school
graduation test and the tenth grade FCAT, as opposed to the earlier graduation
test measuring basic skills. The current FCAT at the high school level measures
a very complex set of skills.

S: In 1995, it was the Florida Commission on Education Reform and Accountability
that began to conceptualize the FCAT. What was the function of this
commission?

L: I think, and I'm speculating here, because I was involved at the technical level
during that period and not at the political level, that the commission was involved
in the development of the philosophy: the idea of defining well the full breadth and
spectrum of our academic curriculum, and then devising the means to create the
assessment of that.

S: You said you were part of the technical side for the development of the FCAT.
Can you describe your planning sessions? And was there a name for the
committee or council you were on?

L: The name of the committee was FCAT Technical Advisory Committee. Over the
years, I've been on so many of these committees, and sometimes, in my
memory, they run together. I might have been on the very first stages of the
Technical Advisory Committee, but I know I was involved in the evaluation of the
bids offered by the commercial companies for the twenty-five million dollar state
contract to start the development of the FCAT. The evaluation of those bids
might have been the beginning of the FCAT Technical Advisory Committee. In
that process, we read the bid proposals and questioned the bidders regarding
their bids for developing the FCAT test and administering the testing program
state-wide.
The FCAT Technical Advisory Committee would meet annually. In the
early stages, we were involved with the basic development of the test, its
comprehensive nature, involved with the development with the scoring rules, and
the scoring philosophy. The fact that it has five levels of competency was a
decision we made in that committee. The definition of those levels was made in
that committee. The major focus of the committee, after those initial
developmental stages, has been technical issues about scaling the test,
developing what's called the developmental scale score of the test. That's the
means by which we can assess student growth over years on the test. [We handled]
technical things about that, like the item-response theory I was alluding to earlier.
That's what the Technical Advisory Committee did. I also served on several other
committees [regarding the] test. One was the Bias Committee, which would review all
the test items for gender bias or racial bias, and I served on
those committees. I served on committees that developed the standards for
scoring the FCAT writing test, where we would select what we would call range
finders: the samples of papers, in the writing essay test, that are used to
score the test. Also [I] served on several item-writing teams where groups of
teachers in Florida would write items for the FCAT test. I've worked on those
committees too.
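
The developmental scale described here rests on item-response theory. As an illustration only, and not the actual model or parameters used for the FCAT, a one-parameter (Rasch) item-response model relates a student's ability and an item's difficulty to the probability of a correct answer; abilities estimated in different years but placed on the same scale can then be compared to measure growth. A minimal sketch in Python, with made-up numbers:

    import math

    def rasch_p_correct(ability, difficulty):
        # Probability that a student of the given ability answers an item
        # of the given difficulty correctly (one-parameter Rasch model).
        return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

    # Made-up ability estimates for one student in two successive years,
    # expressed on a common (developmental) scale, and made-up item difficulties.
    ability_year1, ability_year2 = -0.3, 0.4
    item_difficulties = [-1.0, 0.0, 1.2]

    for b in item_difficulties:
        p1 = rasch_p_correct(ability_year1, b)
        p2 = rasch_p_correct(ability_year2, b)
        print(f"difficulty {b:+.1f}: P(correct) year 1 = {p1:.2f}, year 2 = {p2:.2f}")

    # Because both years sit on the same scale, the change in ability
    # (0.4 - (-0.3) = 0.7 logits) is a direct measure of growth.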

S: I understand that some of the FCAT is based on the Sunshine State Standards.
What are they and who designed them?

L: I alluded to them earlier. It's the definition of the curriculum, the knowledge and
skills we want our students to learn. So, the Sunshine State Standards are a
pretty detailed description of the knowledge and skills in each of the content
areas, reading, math, science, social studies, [and] language arts, across all
grade levels. You can go to the Sunshine State Standards and look at grade
three mathematics and see right there what it is you want third grade students to
know in mathematics. It guides very tightly the curriculum that teachers teach. It
guides very strongly the textbooks that are selected to be used because they
have to match the Sunshine State Standards.
Now, they were developed a couple of different ways. During the early
1990s, there were, in the different content areas, several national movements
towards curriculum definition. In mathematics, there is a national group called
the National Council of Teachers of Mathematics [NCTM]. It's comprised of
public school teachers and university professors and mathematicians that meet
on a national level to discuss the mathematics curriculum nationally. They
developed what they called NCTM math standards. The same kind of effort was
undertaken by the National Reading Association. They had much more difficulty
developing their standards. In science, there is a national science group of
public school and university science teachers and scientists collaborating
at the national level on science standards. Those national standards
played a very important role in the development of the Sunshine State
Standards. The state contracted with other agencies to work with Florida
teachers to adapt those national standards to what Florida wanted. That's the
origin of the Sunshine State Standards and their function.

S: In 1998, the FCAT became the leading assessment test in Florida. What
methods does the state require the school to go through in order to prepare for
the exam?

L: Well, the preparation, the best way to answer that, I think, is that Florida has
defined precisely what it is we want our kids to know in the Sunshine State
Standards. Teachers are encouraged very strongly to adhere to the Sunshine
State Standards when they teach, organize their classes and develop classes.
The textbooks that are purchased in Florida are purchased so they correspond
highly with the Sunshine State Standards. So all the instructional efforts, what the
teachers teach, the materials they use, are focused on teaching the Sunshine
State Standards. Teaching the Sunshine State Standards well is the best
preparation for students doing well on the FCAT test, because the FCAT test
assesses those skills and standards in a comprehensive manner. So good
teaching is the best way to prepare students for that test. The state doesn't get
involved in what we would call test prep things. You know, tricks and other types
of test-taking skills that reportedly improve a child's test score. What we're
interested in assessing is the child's knowledge of the Sunshine State Standards.
The best preparation for that is teaching the child those skills effectively. So out
of this office and the state we don't encourage any kind of test-prep type
activities per se. I'm skeptical of the value of that kind of activity. The stakes are
so high for these tests that, just to deal with the anxiety of testing, a lot of our
schools and teachers feel that they have to participate in a lot of these test-prep
activities. They are not encouraged.

S: Going back, actually, to your involvement with the development of the FCAT.
You said you were part of the Technical Committee that decides the grading
scale for FCAT. How did this committee determine the standards for the grading
and what was the process?

L: The FCAT test is scored by what's called the bookmark procedure. That's where
the state did a field test, let's say, of all the mathematics items in all the grades
tested. They determined for each mathematics test-item its level of difficulty.
Once each item had its level of difficulty determined from the field test, they
organized each math item--one item per page--into what we'll call a book,
page-by-page, each of the math items on a page, let's say, for third grade
mathematics. Then, the state organized a statewide meeting of some of the best
math teachers in the state, grade-by-grade. They polled each district in the state
to find out who their best and most respected math teachers were. They pulled together
those math teachers, grade-by-grade, to participate in this scaling procedure.
Using these booklets, they asked the teachers at the third grade math level to go
through each of those test questions, starting from the easiest math question and
going page by page until they got to the point in that ordered listing of test items
where, in their judgment, there would be a transition in knowledge from a student
performing at the lowest, "F" level of performance to one who should earn a "D"
grade. In other words, they were going to put four bookmarks in those test items to
delineate "F"-level, "D"-level, "C"-level, "B"-level, and "A"-level performance of
students, placing each bookmark at the test item that, in their judgment, delineated
a child who should get one grade from a child who should get the next. Each teacher
in that group put in their own individual bookmarks. Then, electronically, there is a
way to summarize that very
quickly and show the teachers how they varied with respect to where they put the
bookmarks for each of those grade points. At that point, the teachers would sit
down together and talk about it: "Well, here's where I put my mark, between
item thirteen and fourteen, for the 'F.'" They would discuss it with other teachers,
and, after the discussion, they'd go through the whole group again and do the
procedure again. After several iterations of that bookmark procedure and
discussion of the teachers' reasons for putting it where they did, there was a
convergence of points and a consensus definition started to emerge about where
those grade points ought to come in that ordered series of items. So it was a
joint decision made by some of the best math teachers in Florida. Grade-by-
grade in math and reading, they use that bookmark procedure to set those
achievement levels.
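
The bookmark placements from such a panel can be summarized round by round until they converge. The following is a minimal sketch of that idea, with invented bookmark positions and a simple median summary; it is not the state's actual standard-setting software:

    from statistics import median

    # Each teacher places four bookmarks in the difficulty-ordered booklet:
    # the item numbers where, in that teacher's judgment, the F/D, D/C, C/B,
    # and B/A transitions fall (made-up positions for illustration).
    bookmarks = {
        "teacher_1": [13, 25, 38, 52],
        "teacher_2": [14, 27, 40, 50],
        "teacher_3": [12, 24, 37, 53],
    }
    cut_labels = ["F/D", "D/C", "C/B", "B/A"]

    def summarize(marks):
        # Median bookmark position for each cut point across the panel.
        by_cut = zip(*marks.values())  # group the k-th mark of every teacher
        return {label: median(cut) for label, cut in zip(cut_labels, by_cut)}

    # After each round of discussion the teachers re-place their marks and the
    # summary is recomputed, iterating until the positions converge.
    print(summarize(bookmarks))  # {'F/D': 13, 'D/C': 25, 'C/B': 38, 'B/A': 52}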

S: So that's how we got the levels one through five?

L: We had achievement levels one through five frankly because of me and my role
in the early stages of the development of the FCAT. There was a discussion
about how many levels of performance the test ought to have. The initial thinking
was it ought to have four levels of achievement because many others, the National
Assessment of Educational Progress particularly, have four achievement levels.
I made the argument that we in Florida should have five achievement levels, and
I shared with them some research I had done here about grading practices in our
schools, about how inflated the grades are. [I] made the argument successfully
that we needed a test whose five levels of achievement
meant the same thing as an A, B, C, D, or F in grading students. So
parents understand that grading scale, A, B, C, D, and F, they get that all the
time from their student report cards. The public widely understands the
meaning of that scale. We needed a scale from the FCAT test that would easily
be related to everyday classroom work, to try to inhibit the grade inflation that I
showed them was occurring in the schools. I think that's the reason why we
have five levels on the FCAT.
At least in the bookmark procedure they were called A, B, C, D, and F, and the
descriptions of those levels were what we'd come to think of in terms of A, B, C,
and D in a normal grading scale. When the test was finally published, the state got
a little weak about naming the levels A, B, C, D, and F, and in the end they named
them levels 1, 2, 3, 4, and 5 instead. So they changed the name a little bit, but the
meaning of those levels is still the same; it is what we think of commonly as an A,
B, C, D, and F.

S: You said that other state assessment tests use a four-level grading. How would
you say the FCAT compares to other state assessment tests?

L: I haven't done a comprehensive comparative study of other states, but I'd be real
surprised, and I've heard other people who have done these in other states
agree with us, that Florida's FCAT is one of the best, if not the best,
nationally. [This is] because of a couple of things. On the one hand, it is a
comprehensive assessment of a well-defined curriculum, the Sunshine State
Standards. It's a deep assessment; in other words, it measures all levels of
cognitive functioning. It's scored on the five level scale. So, it's technically a
very sound test; it's comprehensive, it's deep, it's well-developed across the
grade levels with the developmental scale score, and the scoring of it with the
five levels is sound. So I think it's the best.

S: Florida is the fourth-largest state in the nation, and yet, aside from the fact that
[the FCAT] was critiqued by in-state educators, it was written by an out-of-state
testing company; at least from the research I did, I found that it was written by
CTB/McGraw-Hill. Why did the state go outside the state for assessment
questions?

L: Your premise isn't really accurate; the state did contract this out. There are not
adequate resources in Florida alone to do this type of test development. This
kind of test development requires resources beyond what's available in Florida
per se. These kinds of tests are extremely complex to develop. Now, Florida
teachers did, in many cases, write the test items themselves. We contracted
CTB to organize the development of the test. They hired teachers in Florida to
write test items. Now, I won't say that held over the years, but the initial items, I think,
every one of them was written by Florida teachers. I served on those test-writing
committees, but since that period of time, test-item writers have come from
different areas. It doesn't matter; you don't need Florida teachers to write a good
test-item on the Sunshine State Standards. A good test-item could be written in
India for the Sunshine State Standards. It doesn't matter where it's written; a
good test-item is a good test-item. It doesn't need to be home-grown.

S: How often do they reassess the FCAT questions? Because I'm sure some of
them could be outdated.

L: Every year there are new test-items that are phased in, and old test-items that
are phased out. Not the whole test, but a certain portion of items every
year are new. A certain portion of test-items are
experimental, being prepared for inclusion in future tests.

S: I did a lot of my research on the Florida Department of Education website. It said
that the FCAT is graded throughout the nation. How are those graders trained? I
understand that you can't grade the entire test in-state because there aren't
enough people.

L: When I was talking about FCAT, one of the good elements of it is that it
assesses the full depth of the curriculum, the lower-order cognitive skills and the
higher-order cognitive skills; that is one of the unique characteristics of the FCAT
that not many other state tests have. In addition to multiple-choice
questions, it has, in reading and mathematics, short answer and short essay
questions, what we call constructed response, where the student taking the test
has to create the response, not select from a series of responses already given.
It is more difficult to grade those kinds of constructed responses because you
have to get a trained team of test scorers to do that. The process of training
people to score this test has been very well developed. When the tests are
scanned, an image is made of each page and that image contains both the kid's
response to the multiple-choice questions and also the image of what they wrote
in response to the short answer question. So that goes into the computer. That
computer image of what the child wrote on the short answer question, that little
essay in a sense that the child wrote, that can be directed out to a number of
different scorers and they could be remotely dispersed. They could be anywhere
in the world. They score that according to certain rules they are trained to follow.
How they score that question and other questions can be monitored by an
individual who sees how the test scoring's going. That person can intersperse,
without knowledge of the people scoring the test, certain test-calibrating items
for which there's strong agreement on the score they should get. So they can monitor
how each individual is scoring items and see if there is any score drift. They
can control the quality of scoring that way, by monitoring [how] scorers are scoring
each individual item and interspersing these control-items in that stream. So
there's a very structured training of the scorers. They are assessed after that
training on their ability to score standard student responses. [They]
screened out the successful ones from the unsuccessful ones until they go into a
pool of scorers. Then their scoring is monitored. The scoring goes on the
computer, and kind of instantaneously, as they work, they are being monitored
on how they are scoring the standard items that have been interspersed in there to
check to see if they are still scoring on target. Can you see how that
works? (These are items for which there's a high degree of certainty about what score
they should get, and they will send those out along with the other items.) You'd score
one, and then you'd see if you gave the right score or not, to see if you, as a scorer,
are drifting too much away from the standard.
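
The drift check described here amounts to comparing each scorer's marks on the interspersed calibration responses against the agreed-upon scores. A minimal sketch of that idea in Python, with hypothetical data and a hypothetical tolerance, not the scoring vendor's actual monitoring system:

    # Agreed-upon scores for calibration responses quietly mixed into each
    # scorer's queue (hypothetical item names and values).
    calibration_key = {"cal_01": 2, "cal_02": 0, "cal_03": 3, "cal_04": 1}

    def drift_report(scorer_marks, key, tolerance=0.25):
        # Flag a scorer whose average disagreement with the key on the
        # calibration items exceeds the tolerance (in score points).
        diffs = [abs(scorer_marks[item] - key[item]) for item in key]
        mean_diff = sum(diffs) / len(diffs)
        return {"mean_disagreement": mean_diff, "flagged": mean_diff > tolerance}

    # One scorer's marks on the calibration items hidden in their stream.
    scorer_a = {"cal_01": 2, "cal_02": 1, "cal_03": 3, "cal_04": 1}
    print(drift_report(scorer_a, calibration_key))
    # {'mean_disagreement': 0.25, 'flagged': False}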

S: Kind of like that benchmark thing?

L: It could be called a benchmark, though a benchmark is something else. It's that idea of
a known standard item that gets interspersed in there to see if the scorer is
starting to drift in their scoring. A very high level of training goes into doing that.
Also, a high level of monitoring is possible thanks to sophisticated computer programs
that are available.

S: In 1999, the Florida A+ Plan was developed. Can you explain the school grading
system?

L: I think in Florida, and this is also true transnationally, the great reform effort that's
occurring is called Standards Reform. That's a way we can characterize the
public school reform effort nation-wide: Standards Reform. The idea is you
establish very precisely what we want our children to know, like the
Sunshine State Standards do. You have a good solid description of what it is
you want our kids to know, then you have an assessment of that knowledge and
skills in a test. Then you have an incentive system by which we encourage the
children and the teachers to learn that requisite knowledge and skills better.
So you have to have a definition of knowledge and skills. You have to have
the assessment and you have to have an incentive system that ties those two
things together. That's the kind of paradigm for standards reform. In Florida,
the incentive system that ties back to the standards
themselves is called the A+ Plan. The A+ Plan is a system
of rewards and sanctions based on student performance on FCAT for students,
for teachers, and for schools. So how a child performs on the FCAT test has an
impact on that child, on the child's teacher, and on the child's school as a whole.
Now, on the stakes that affect the child, at grade three they must pass the
FCAT reading, as we already noted. At grade three, the promotion decision is based
on the FCAT score, which is very important. At high school graduation, they have to pass
FCAT reading and math. There [is also some] legislation in Tallahassee this year
to extend promotion based on FCAT performance to some other grade levels;
that is coming down the road. Probably this year some more grade levels will
have to pass the FCAT to be promoted. For schools, I'll tell you how the grades
work.
The schools are awarded grades through the A+ Plan. If the school
makes an A, they receive money for every child tested in FCAT in that school.
The money is substantial. A school can get enough money that it can share that
money with the teachers and other staff members of that school. The amount of
money is nothing to sneeze at; it might be worth up to maybe $50,000? Several
thousand dollars of bonus money can come to a [teacher] if the school makes an A. If
they make an F, there are sanctions of increasing pain that come
in. If a school makes an F one year, they might go into school improvement with
monitoring. If they get that second F, I think that opportunity scholarships are
made available to students. They don't have to go to that school; they can get an
opportunity scholarship to go to another school. If they keep getting more F's
after that, eventually the state can dissolve that school and get rid of those
teachers.

[End of Side 1, Tape A.]

L: The A+ plan has a system of sanctions and rewards; rewards for getting the high
grade, and sanctions for getting an F. Sanctions apply to the teachers and
schools and the students. Now, how the school is awarded a grade of A, B,
C, D, or F is pretty complex. It's based on FCAT scores, both on the percentage
of students who score level three and above in reading, mathematics, and
writing, and an additional element called value added or gains. The percentage of
children that make gains from one year to the next on FCAT is also included in
that grading. Between those two ways of looking at the percent of kids scoring at
a certain level, and the percentage of kids making a gain from one year to the
next, those values are combined in such a way that they yield a grade of either an A,
B, C, D, or F for the school.
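
The combination described above, the percentage of students at level three and above plus the percentage making gains, can be illustrated with a toy calculation. The weights, components, and cut points below are invented for illustration and are not the actual A+ Plan formula:

    def school_grade(pct_level3_reading, pct_level3_math, pct_level3_writing,
                     pct_gains_reading, pct_gains_math):
        # Toy illustration: sum the component percentages into a point total
        # and map the total to a letter grade. The weights and cut points are
        # invented here, not the actual A+ Plan rules.
        points = (pct_level3_reading + pct_level3_math + pct_level3_writing
                  + pct_gains_reading + pct_gains_math)
        for cutoff, grade in [(410, "A"), (380, "B"), (320, "C"), (280, "D")]:
            if points >= cutoff:
                return grade
        return "F"

    # Example: 62% at level 3+ in reading, 58% in math, 85% in writing,
    # 66% making reading gains, 70% making math gains -> 341 points.
    print(school_grade(62, 58, 85, 66, 70))  # "C" under these invented cut points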

S: So basically the FCAT performance really is the determining factor for schools.
Can you describe the relationship between the school and the school board and
its relationship with the FCAT? How does that all tie together?

L: The school board [has] six elected members in each county--some
school boards may have more than six in Florida, but in Alachua County there are
six board members--they're elected and they set broad policy. They hire the
superintendent. They handle budget and those kind of matters. They set the
general tones and policies of the schools. They are not specifically involved with
each school every year when it comes to issues about the school grade. Of
course, the school grades are reported publicly, the school board members are
aware of what those grades are, but the school board doesn't have any specific
responsibility on an annual basis regarding the grade of each school.
Now schools that get low grades, of course, there's a lot of pressure
internally because they want good grades, because they don't like bad grades,
and they want good grades because they'd like to do well. Everybody wants to
do good. Those schools that get low grades are motivated to improve, but there's
no specific action that's required annually by the school board with respect to the
school grades.

S: You mentioned that the state monitors the low performing schools. Do they give
them any other assistance whatsoever?

L: Yes. The schools that scored an F get assistance from the state, and I think
there's probably even some money that's available for that assistance. Of
course, that money can't be used in just any way; it's money that's used for
specific remedial purposes of the school. The state does provide technical
assistance and some resources to help the schools.

S: What would you ask principals to do differently to prepare for the FCAT?

L: I don't think there's anything I'd ask them to do differently. We've been giving the
FCAT for a number of years; I think everyone understands basically how it works.
The issue really is that the schools understand that the Sunshine State
Standards are the guide to what we want to teach in our schools. All of our
curriculum is focusing on the Sunshine State Standards. All textbooks are
focusing on Sunshine State Standards. All of our in-service training for teachers
is focusing on Sunshine State Standards. So we're teaching the Sunshine State
Standards thoroughly in our schools. That's what we want to happen. I think
everybody understands that, so there's not anything different that needs to be
done with respect to the principals and how they focus their instruction and their
efforts at the school. Now maybe I could say, we want the principals to get all
those kids that are scoring low and make them score high; yeah, that's what we
want them to do.

S: With any assessment test, there is going to be a plethora of criticism. What is
your opinion of the many criticisms the FCAT receives?

L: I haven't heard too many valid criticisms of the FCAT. Most of the criticisms
about FCAT are criticisms that are legitimate for most testing programs, but not
for the FCAT. They just don't understand how the FCAT's different from those
testing programs. The major criticism of FCAT and the incentives that go along
with it is that teachers teach the test. We know that; teachers do teach the test.
But that's only bad when the test is a narrow, shallow assessment of what we
want kids to know. If teachers teach to a narrow, shallow test, that's bad. I would
criticize an accountability system that was based on a shallow, narrow test.
Fortunately, the FCAT is a deep and broad test, and we want the teachers to
teach those skills. So for that criticism that's legitimately applied to a lot of other
testing programs, when it's applied to FCAT, it just is not a valid criticism
because the FCAT test is not a shallow test. It's fine for teachers to teach the
skills in Sunshine State Standards, that's what we want them to do. That is the
most frequent criticism, and it's a criticism that just reflects a misunderstanding
by the critic of the FCAT test and how it was designed.

S: You also said you were on the Bias Committee for the FCAT. In light of that, I've
heard that many minority students don't do as well on the FCAT as other
students, so there's a criticism that there's unfair treatment towards minority
students. Since you were on the Bias Committee, what is your opinion of that?

L: Well, there are certain differences, fairly systematic differences, in the average
score among various ethnic groups, but there are certainly
plenty of students, African-American students, who score very well on the FCAT.
There's certainly some Asian students whose average score on FCAT is very,
very high. There are some Asian students who score very poorly on FCAT. So
you have to understand that when we're talking about differences in performance
by ethnic groups, we're only talking about differences in the average
performance. There's certainly plenty of African-Americans that are very good
students and score very high on FCAT. But as a group on average, African-
Americans tend to score lower than Anglos tend to score. Anglos tend to score
lower than Asian students. Hispanic students in Alachua County score very well
on the test. The differences in the scores on the FCAT tests are due to
differences in knowledge, not differences due to a bias in testing. The tests have
been tightly scrutinized with respect to any kind of bias. There are very
sophisticated and very adequate ways to make sure that a test is not biased
against an ethnic group, or against boys or girls, so the test is essentially free of bias, and
the differences that we see in average test scores are differences in knowledge.
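
One common statistical backstop for such bias reviews is differential item functioning analysis, which asks whether students of the same overall ability, but from different groups, answer a given item correctly at different rates. The speaker does not name the specific method used for the FCAT; the following is a simplified sketch of the general idea, with made-up records:

    from collections import defaultdict

    def dif_by_matched_ability(records, item):
        # For each total-score stratum, compare the proportion of each group
        # answering `item` correctly. Large, consistent gaps between groups at
        # the same ability level suggest possible item bias; similar rates do not.
        # `records` is a list of dicts with "group", "total", and item keys.
        strata = defaultdict(lambda: defaultdict(list))
        for r in records:
            strata[r["total"]][r["group"]].append(r[item])
        return {total: {g: sum(v) / len(v) for g, v in groups.items()}
                for total, groups in sorted(strata.items())}

    # Made-up records: students with the same total score, two groups, one item.
    data = [
        {"group": "A", "total": 20, "item_7": 1},
        {"group": "A", "total": 20, "item_7": 1},
        {"group": "B", "total": 20, "item_7": 1},
        {"group": "B", "total": 20, "item_7": 0},
    ]
    print(dif_by_matched_ability(data, "item_7"))  # {20: {'A': 1.0, 'B': 0.5}}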

S: Would you say there's a difference between the more affluent communities and
the poor communities in scoring, and if so, what would some of the reasons
behind that be?

L: Yes, there are differences again. There are plenty of students who are poor who
score very high. There are plenty of students who are from rich families who
score very low, so we're just talking about differences in averages here. But there
is a correlation between the socio-economic status of the family and the child's
test score. That correlation is ubiquitous, widely seen in all kinds of tests and all
kinds of environments, not just an issue about FCAT. Every test that exists that
I've seen of either mental ability or academic achievement, that correlation exists,
and it exists around the world. Now, the reasons for it are in some ways
complex, and some of the reasons are pretty obvious. Children resemble their
parents with respect to their success in school. Parents pass on to their children
habits of study, habits of discipline, and intelligence. Families that are
effective families have adults who are effective individuals
with good jobs. They have those good jobs and make more money
because they have some self-discipline, they are reasonably well-educated, [and]
they're reasonably intelligent, and they tend to pass on those good habits of
study and discipline and intelligence to their children, one way or the other. So
those children who come to school with those good study habits, that good self-
discipline, they will thrive better in a school situation than children who come from
a dysfunctional family: a family where maybe they don't have two parents
present, maybe there might be some substance-abuse problems, or there might
be other habits that contribute to that family's low income, where they can't get
or hold a good job [due to] some dysfunction in their behavior.
Those kinds of problems that adults have unfortunately we pass on to our
children. If we're, as adults, undisciplined and careless, our children tend to be
the same way, unfortunately. So children with those habits that come to school
don't benefit as much from the educational program that we provide.

S: Overall, how would you evaluate student performance since 1998?

L: That's an important, important question. The question is, really, have the FCAT
test and A+ Plan and Standards Reform Movement benefited the kids in Florida?
We can certainly look to the FCAT test itself to see improvements in scores, but
that's not adequate to answer that question, because just by a change in what's
being taught and focusing really on the Sunshine State Standards, you could see
an improvement in skills. That in and of itself, even though the Sunshine State
Standards have a broad definition of what we want our kids to learn, it's still not
adequate to answer the question. You have to go beyond Florida's FCAT testing
program itself to look for evidence to answer that question. We've all been
anxiously waiting to see the evidence, and it's starting to emerge. The best way
to look at it is to look at what's called the National Assessment of Educational
Progress [NAEP]. That's a testing program that's gone on for twenty-plus years
across the nation where a sample of children have been tested in every state for
twenty-plus years. State-by-state comparisons are now possible with that test.
Florida has been moving up in the NAEP comparisons from what it was earlier.
All the southeast has been lower than most of the country. Florida has been
higher than most of the states in the southeast. Florida has been improving
faster than most states in the nation recently, so the NAEP data is starting to
come back and show the benefits of the A+ Plan and FCAT testing in Florida.

S: That's good to know. This is pretty much my final question for you. Is there
anything you would do to make the FCAT better since you've worked with it for
so long, or do you feel that it's sufficient as it is now?

L: I don't think there's anything really that needs to be improved on the FCAT. I
think it's as good as it can be. We need to expand it into all the curriculum areas;
we are moving into science now. We need to move into language arts, but it's
really a good test.

S: Is there anything else you'd like to say that I left out that you'd want to share?


L: No, you had good questions.

S: Thank you. Then that's it, thank you very much.

L: It is my pleasure.

[End of Interview.]



