
Analyzing the Placement of Community College Students in English as a Second Language for Academic Purposes (EAP) Courses

Permanent Link: http://ufdc.ufl.edu/UFE0015741/00001

Material Information

Title: Analyzing the Placement of Community College Students in English as a Second Language for Academic Purposes (EAP) Courses
Physical Description: 1 online resource (141 p.)
Language: english
Creator: May, James S
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2007

Subjects

Subjects / Keywords: Teaching and Learning -- Dissertations, Academic -- UF
Genre: Curriculum and Instruction (ISC) thesis, Ed.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: The intention of this research was to increase the effectiveness of student placement tools and strategies used by community colleges to place nonnative English speakers into courses designed to teach English for future academic pursuits. More specifically, this research sought to analyze and improve placement practices at Valencia Community College in Orlando, Florida by identifying placement variables that best predicted success in various English as a second language (ESL) for Academic Purposes (EAP) courses. Locus of Control scale scores, a computed indicator of Generation 1.5 status, and results from four subtests of the ACCUPLACER Levels of English Placement (LOEP) Test were tested individually and within composite models for their ability to predict success as measured by final course grades and teacher evaluations of placement. These variables were tested for their ability to predict successful placement of first semester, self-identified nonnative English speakers into ESL classes covering four different skills (reading, writing, speech, and grammar) across five different levels of possible placement (EAP levels 2-6). Results indicated that the reading subtest was the best predictor of student final course grades. The essay subtest was the best predictor of teacher evaluation of placement, and individual subtests were preferred over composite models. Furthermore, both Locus of Control and the computed indicator of Generation 1.5 status were found to be correlates of student success. Additional recommendations are suggested for how to improve placement practices.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by James S May.
Thesis: Thesis (Ed.D.)--University of Florida, 2007.
Local: Adviser: Harper, Candace.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2007
System ID: UFE0015741:00001




ANALYZING THE PLACEMENT OF COMMUNITY COLLEGE STUDENTS IN
ENGLISH AS A SECOND LANGUAGE FOR ACADEMIC PURPOSES (EAP) COURSES


By

JAMES S. MAY


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF EDUCATION

UNIVERSITY OF FLORIDA


2007


2007 by James S. May


To Sharon, Aidan, Collin, & Sabrina









ACKNOWLEDGMENTS

This dissertation would not have been possible without the support of my wife. Sharon has

been the inspiration for many of my achievements. In addition, her extraordinary support in

every area of our lives, including taking on extra responsibilities with, and providing for the needs

of, our three beautiful children, Aidan, Collin, and Sabrina, has made it possible for me to

complete this project.

For as long as I can remember, my parents, Norman and Judy, have always been there to

support my academic endeavors. Their unwavering support and constant prodding have helped

me to get this dissertation finished. I also want to thank my sister, Patrice, and my wife's parents,

Lois and Dick. Their continued support has meant a lot to me.

The guidance, support, and encouragement given by my dissertation advisor, Dr. Candace

Harper, have kept me on track and focused on my goals. Her knowledge and experience cleared

up many confusing issues surrounding the writing of my doctoral dissertation. My committee

members, Dr. Miller, Dr. Swain, and Dr. Thompson, have all been instrumental in my reaching

this point in my academic life. Without input from each of them, I would never have gotten to

where I am today. My dissertation team has always been there when I have needed them. I thank

them all.

Other individuals who must be recognized include my closest colleagues at work, Maiken,

Sarah, Andrew, and Summer, who have all in one way or another supported me or been a

sounding board for ideas throughout my dissertation process. Words of final thanks go to the rest

of my colleagues, my students, and my friends who have supported and offered words of

encouragement throughout the completion of my dissertation.









TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

    Impetus for the Study
    Purpose and Rationale
    Research Questions
    Variables
        Dependent Variables
        Independent Variables
    Limitations

2 REVIEW OF RESEARCH

    Language Competence
    Assessment of Language Competence
    Recent Research in Language Testing
    Locus of Control
    Generation 1.5
        Differences and difficulties
        Program placement for generation 1.5
    The Need for Better Placement Procedures
    Program Models
    Current Placement Practices
    Pilot Study
    The LOEP and Its Recommended Use
    Holistically Scored Essays

3 MATERIALS AND METHODS

    Introduction
    Study Setting
    Participants
    Materials and Data Collection
    Procedures

4 RESULTS

    Survey Results
    Question 1
        Student opinions
        Teacher opinions
        Analysis of open-ended teacher responses
            Advice
            Speech courses
            Misplacement
            Generation 1.5
            Teaching problems
            LOEP
            Placement in general
            Other placement comments
    Questions 2-5
        Study 1
        Study 2

5 DISCUSSION

    Valencia's EAP Population
    Student and Teacher Beliefs about Placement
    The Preferred Approach
    Reading Subtest Preferred
    Predicting Evaluation of Placement
    LOC and Generation 1.5

6 CONCLUSIONS AND RECOMMENDATIONS

    Recommendations for Testing and Placement
    Other Recommendations

APPENDIX

A LOEP Essay Rubric

B Survey Instruments Used in the Study

    Letter to Instructors
    Informed Consent: Instructor
    Instructor Survey
    Generation 1.5 Survey
    Letter to Students
    Informed Consent: Student
    Student Survey
    Survey Script

C Complete Results of Student/Teacher Surveys

LIST OF REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

2-1    Number of Florida community colleges employing common practices in the placement
       of ESL students, with many colleges using multiple practices

2-2    Explanation and an example for placing students into EAP courses at Valencia

4-1    Means, standard deviations, and correlations for final course grades and
       predictor variables

4-2    Summary performance of all competing models in Study 1

4-3    EAP 0300: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-4    EAP 0320: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-5    EAP 0340: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-6    EAP 0400: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-7    EAP 0420: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-8    EAP 0440: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-9    EAP 1500: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-10   EAP 1520: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-11   EAP 1620: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-12   EAP 1640: Summary of simultaneous multiple regression analyses for models
       predicting successful placement as measured by final course grades

4-13   Means, standard deviations, and intercorrelations for teacher evaluation of
       placement and predictor variables for first semester survey respondents

4-14   Summary of simultaneous multiple regression analyses for models predicting
       successful placement as measured by teacher evaluation of placement

5-1    Post-hoc results comparing the 3 original models with each subtest added as a
       competing model

5-2    Summary of simultaneous multiple regression analyses for LOCSCAL & GN15CMPT at
       predicting successful placement as measured by teacher evaluation of placement

5-3    Percentages of students in each course failing to answer questions on the Trice
       LOC Index

C-1    Native languages of EAP placement survey respondents listed by campus

C-2    Countries of origin for EAP placement survey respondents listed by campus

C-3    Gender of EAP placement survey respondents listed by campus

C-4    Semester of enrollment for EAP placement survey respondents listed by campus

C-5    Ages of EAP placement survey respondents listed by campus

C-6    Number of years in the U.S. for placement survey respondents listed by campus

C-7    Survey responses to, "What year did you graduate from high school/earn GED?"
       listed by campus

C-8    Survey responses to, "Are you the first person in your family to go to college?"

C-9    Survey responses to, "If you weren't the first person in your family to go to
       college, who was?"

C-10   Survey responses to, "Are you the first person in your family to go to college
       in the U.S.?"

C-11   Survey responses to, "If you weren't the first person in your family to go to
       college, who was?"

C-12   Survey responses to, "Have you gone to college outside of the United States?"

C-13   Number of years spent in college outside of the U.S. by placement survey
       respondents listed by campus

LIST OF FIGURES

2-1    Upshur's simple program model: This figure illustrates the simple program model
       discussed by Upshur (1973). Students enter, receive instruction, and leave.

2-2    Upshur's complex model: This figure illustrates the most complex program model
       discussed by Upshur (1973). It is an example of a multi-level program with
       several types of decisions being made.

2-3    Possible improved model: This model illustrates the decision stages and types of
       tests that could be avoided by enhancing placement practices at VCC.








LIST OF ABBREVIATIONS

ESL              English as a Second Language

EAP              English as a Second Language for Academic Purposes; usually refers to
                 ESL at the college level.

Generation 1.5   Generation 1.5 students usually have come to the United States in their
                 early teen or pre-teen years. They have often attended U.S. schools, and
                 many of these students have even graduated from American high schools.
                 While attending American schools, these students have had time to
                 acquire informal English. Many of them use American idiomatic
                 expressions, and some may even have American accents. Errors in their
                 language are detectable, but they do not interfere with understanding,
                 and these students are comfortable speaking English and do so with
                 relative ease. Their reading, grammar, and writing skills, on the other
                 hand, are usually behind those of their college-ready peers. They are
                 not what one might consider true ESL students, but they are not true
                 native English speaking students either.

LOEP             Levels of English Proficiency Test; used to refer to the ACCUPLACER
                 ESL battery of tests.









Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Education

ANALYZING THE PLACEMENT OF COMMUNITY COLLEGE STUDENTS IN ENGLISH
AS A SECOND LANGUAGE FOR ACADEMIC PURPOSES (EAP) COURSES

By

James S. May

August 2007

Chair: Candace Harper
Major: Curriculum and Instruction (ISC)

The intention of this research was to increase the effectiveness of student placement tools

and strategies used by community colleges to place nonnative English speakers into courses

designed to teach English for future academic pursuits. More specifically, this research sought to

analyze and improve placement practices at Valencia Community College in Orlando, Florida by

identifying placement variables that best predicted success in various English as a second

language (ESL) for Academic Purposes (EAP) courses. Locus of Control scale scores, a

computed indicator of Generation 1.5 status, and results from four subtests of the

ACCUPLACER Levels of English Placement (LOEP) Test were tested individually and within

composite models for their ability to predict success as measured by final course grades and

teacher evaluations of placement. These variables were tested for their ability to predict

successful placement of first semester, self-identified nonnative English speakers into ESL

classes covering four different skills (reading, writing, speech, and grammar) across five

different levels of possible placement (EAP levels 2-6). Results indicated that the reading subtest

was the best predictor of student final course grades. The essay subtest was the best predictor of

teacher evaluation of placement, and individual subtests were preferred over composite models.

Furthermore, both Locus of Control and the computed indicator of Generation 1.5 status were









found to be correlates of student success. Additional recommendations are suggested for how to

improve placement practices.









CHAPTER 1
INTRODUCTION

Impetus for the Study

There has been a long standing need for accurate placement of nonnative speakers of

English into academic programs. Program administrators and faculty have been concerned about

this need for more than 50 years, as evidenced by Gibian (1951) and Ives (1953). The goals of

admitting and placing students accurately into programs designed to develop their language skills

to a level commensurate with that of their college-ready peers have led to a variety of testing

instruments and English as a Second Language (ESL) programs nationwide. Unfortunately,

many of the tests used by colleges and schools to place students were not originally designed for

this purpose.

For admissions purposes, most schools use some form of language proficiency test, the

most common being the TOEFL (Test of English as a Foreign Language). Although the TOEFL,

much like the SAT (Scholastic Aptitude Test) and the ACT (American College Test), is a

moderate predictor of overall success in college, it is not designed to serve as a placement tool.

Many schools also employ other language proficiency tests for placement purposes, like the

CELSA (Combined English Language Skills Assessment), the CELT (Comprehensive English

Language Test), the Accuplacer LOEP (Levels of English Proficiency), and the Accuplacer CPT

(College Placement Test). However, these tests are designed to assess proficiency and do not

align language skills with individual course objectives and outcomes. More accurate placement

would result from the analysis of test results and survey data designed to identify student

characteristics aligned with course objectives and outcomes. This concept is supported by The

College Board and Accuplacer developers:

ACCUPLACER tests are designed to assist institutions in placing students into appropriate
courses. Given that institutions differ greatly with respect to composition of the student









body, faculty, and course content...each institution should establish their own cut scores to
facilitate placement decisions based on factors and data unique to their institution (College
Board, 2003).

However, most colleges do not have the budget required to administer a large battery of surveys

and tests or to hire individuals who can identify student goals, prior experiences, background

knowledge and abilities. Therefore, many schools simply rely on a few multiple choice tests to

make placement decisions.

Another major problem with the current use of placement tests is the way they are

administered. Students often take placement tests during the application or admissions process.

For many students, this is a long day where they are required to stand in lines for lengthy periods

of time. By the time they are given their placement tests, many students are frustrated, tired,

hungry, or simply more interested in getting back to work or play than taking tests. Therefore,

results on these tests may not be accurate representations of the true abilities of these students.

Another confounding factor is that many students feel pressured to take placement tests.

Students are often unaware of established application deadlines and arrive late expecting to

enroll in classes quickly. Rather than waiting and applying for a subsequent semester, they often

feel an urgent need to complete prerequisite testing regardless of how it might affect placement.

This pressure, in conjunction with disinterest in the test, presents a problem because many

students are unaware that their results on these tests will affect their future placement into

courses.

For nonnative English speaking populations, these problems are compounded in that

nonnative speakers often have trouble understanding instructions and the processes involved in

testing. These students may also be limited by time. Some international students are simply

traveling in the United States visiting programs for which they may later seek student visas.

They may not have the language skills to explain why they would rather come back and take a









placement test on another occasion. And, nonnative English speakers may end up taking multiple

tests in a single day because schools often require one exam for admission and another for

placement into programs.

Due to both an increase in ESL student populations and the inability of existing ESL

programs to adequately prepare these students for college courses, post-secondary institutions

around the U.S. have been developing multileveled preparatory programs for nonnative speakers

of English. These programs are often significantly more complex than the typical two-level

preparatory programs for native English-speaking high school graduates needing academic

assistance in their transition to post-secondary education. For example, each community college

in the state of Florida has been able to develop a program that best suits its unique needs. Given

different populations and different needs, schools across the state of Florida have adopted

different programs offering different courses and therefore requiring different placement criteria.

Due to budget constraints, timing issues, and lack of personnel, institutions have been

forced to simply do their best in developing programs that provide for the needs of their diverse

populations. A consortium of concerned ESL/EAP professionals has been working to address

problems inherent in the state's EAP programs: the variety of needs present, exit testing,

placement testing, and the special needs of Generation 1.5 students. Generation 1.5 is a term

coined by Rumbaut and Ima (1988). It refers to students from non-English speaking backgrounds

who typically have lived in the U.S. for some time, have aural/oral competence in English that is

near native, but read and write at a level below average in English. (For more information on

Generation 1.5 students, see Chapter 2.)

Although the consortium of ESL/EAP professionals is making strides to address many of

the issues listed above, it has not been able to address and solve all of the problems. For









example, even though there are relatively standard courses across EAP programs, there are no

common placement exams for entrance into these courses. In addition, even schools that use the

same placement exams and offer the same courses have developed different cut-scores, i.e.,

scores that are used to make decisions about the level of a program in which a student will be

placed.

At a consortium meeting in June of 2004 held at Valencia Community College (VCC), it

was discovered that none of the 12 colleges present had empirically addressed the troublesome

issue of placement, and to this date, little has changed. However, currently the consortium is

entering Phase 3 of a statewide Council on Instructional Affairs (CIA) initiative to standardize

how EAP students are placed in community college courses within the state. To facilitate this

initiative, on February 9, 2007, the consortium chair presented recommendations to the CIA for

changes to existing statutes. One recommendation was to adjust state statutes and administrative

rules so that programs could elect to offer institutional credit for EAP courses. Another

recommendation was that the administrative rules on college-level testing and placement be

amended to read, "...Prior to the completion of registration, the EAP students' language

proficiency shall be assessed by the College Board Accuplacer LOEP or the ACT Compass

ESL." Members of the consortium have called for any research that could assist them in

developing more accurate placement that would allow for a stronger match between student

abilities and goals and the courses they require to achieve those goals.

Students are best placed into courses that challenge them but allow them to earn passing

grades and achieve acceptable levels of understanding. Students are misplaced in courses that are

either too difficult or too easy to provide any meaningful challenges. At many schools, students

are often placed above or below their levels of ability, which may lead to high stress or extreme









boredom. This stress and boredom can in turn lead to low attendance, disruptive behavior in the

classroom, failure to achieve, and poor grades. Students who feel high levels of stress may also

not see success as a possibility, which can lead them to withdraw from a course or program. This

is often the case with Generation 1.5 students.

Given the anecdotal and observed evidence identifying the inadequacy of current

placement practices for EAP students at Valencia (See Chapter 2), there is a clear need to

identify a more meaningful, comprehensive, and efficient system for placing students. This

system should take into account student background, both personal and academic, student

abilities, and any other factors that can be identified to affect placement (e.g., level of education

and literacy in the native language, age, motivation).

In order to do this, a placement system needs to be established that could collect multiple

sources of information through the use of survey and test results. However, identifying optimum

placement is a challenging endeavor. The criteria that are most often used to judge effective

placement are frequently influenced by variables other than student ability and performance.

Teacher beliefs about students' true abilities are affected by attendance, participation, and

student attitudes. Final course grades are also influenced by these same teacher beliefs and other

factors like social promotion. In other words, relying solely on teacher judgment as a variable

may tell us more about student/teacher relationships and teacher perception of student attitudes

than it does about student abilities. However, using only final course grades might be too limited

because it fails to account for students who lack skills or fail to complete work. These students

are then passed on because teachers are afraid to have high numbers of failing students or simply

do not wish to see those same students again the following semester. Therefore, in this study









both final course grade and teacher evaluation of placement were used to identify variables that

best predict student success.

Purpose and Rationale

The intention of this research was to increase the effectiveness of student placement tools

and strategies used by community colleges to place nonnative English speakers into courses

designed to teach English for future academic pursuits. More specifically, this research sought to

identify variables that predicted the successful placement of second language students into

Valencia's EAP program. It was assumed that identifying effective predictors would provide the

researcher and other decision makers with the necessary information to make decisions about

existing placement mechanisms. It would also inform other institutions as to best practices.

In order to prepare students for college courses, many community colleges have

developmental, or preparatory, reading and writing programs. These programs usually offer two

levels of each skill prior to admitting students to the first level of college composition. These

developmental courses are designed for native English speakers who do not have the reading and

writing skills necessary for college courses. In contrast to programs designed for native English

speakers, the EAP program at Valencia includes fifteen (15) courses spanning five (5) levels,

covering up to four (4) different language skills per level. It should be mentioned here that EAP

programs in the State of Florida can have up to 6 ability levels. However, faculty members at

Valencia decided that admitting students with language proficiencies below the level 2 cut-off

might negatively impact Adult Education programs at the county level. Therefore, students

demonstrating language proficiencies below level 2 are sent to Orange County for adult basic

education. Given the large span of Valencia's 15 course EAP program, accurate placement of

students is a significant concern. Students are currently placed into this matrix of courses based

on an established formula for averaging scores from four parts of a placement test: one









holistically graded essay and three objectively graded subtests. (For more information, see

Chapter 2.)

In short, students currently take and receive scores for three objectively graded LOEP sub-

tests (Reading, Sentence Meaning, and Language Use). The three objective scores are then

averaged. This averaged score is then averaged with a number derived from the students'

holistically scored essay, and the average is used to place students. This, however, has not always

been the method of placement at Valencia. From 2001 through 2003, essays were not used. The

decision to not use essays was in large part due to requests by counseling and admissions

personnel who wanted the ability to register, test, and assign students to classes in one day. EAP

teachers agreed to a trial period, but as they noted more instances of misplacement of students,

they asked to have essays reinstated. Prior to reinstating the reading of essays for placement,

students were placed simply by the average of the three objectively graded subtests. Some

believe Valencia should return to this older process; others believe the college should use

individual subtest scores to place students in skills courses at different levels.

Research Questions

In addition to identifying descriptive information about Valencia's EAP, this research

sought to identify the most effective aspects among several approaches to placing students into

EAP courses at Valencia by finding answers to the following questions:

1. What are the student and teacher beliefs about placement at Valencia?

2. Which of the following three approaches best predicts student success as measured by final
course grades and teacher evaluation of placement?

Model 1: Averaging the three objectively scored LOEP subtests, without the essay

Model 2: Using an equally weighted average of both the objectively scored LOEP
subtests and the Essay test

Model 3: Using the four LOEP subtests as individual predictors









3. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
final course grades?

4. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of
placement in EAP courses as measured by final course grades and teacher evaluation of
placement?

Variables

In addition to describing Valencia's EAP student population and eliciting both quantitative

and descriptive feedback about placement at Valencia, this research employed multiple

regression analyses to test the predictive abilities of a variety of independent variables on two

different dependent variables.
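
To make this comparison concrete, the following minimal sketch (illustrative only, and not the
analysis code used in this study) shows how the three competing models from Research Question 2
could be fit and compared using ordinary least squares in Python; the column names follow the
variable abbreviations defined below, and the data frame itself is hypothetical.

    # Illustrative sketch only: compares the three competing placement models
    # with ordinary least squares. Column names (LORC, LOLU, LOSM, LOES, GRADE)
    # follow the variable abbreviations defined below; the DataFrame is assumed
    # to hold one row per first-semester student in a single EAP course.
    import pandas as pd
    import statsmodels.formula.api as smf

    def compare_models(df: pd.DataFrame) -> pd.Series:
        """Fit the three models and return each one's adjusted R-squared."""
        df = df.copy()
        # Model 1: average of the three objectively scored subtests
        df["LOEPAVG"] = df[["LORC", "LOLU", "LOSM"]].mean(axis=1)
        # Model 2: equally weighted average of that composite and the essay
        df["LPAVGWE"] = (df["LOEPAVG"] + df["LOES"]) / 2
        formulas = {
            "Model 1 (LOEPAVG)": "GRADE ~ LOEPAVG",
            "Model 2 (LPAVGWE)": "GRADE ~ LPAVGWE",
            "Model 3 (subtests)": "GRADE ~ LORC + LOLU + LOSM + LOES",
        }
        return pd.Series({name: smf.ols(f, data=df).fit().rsquared_adj
                          for name, f in formulas.items()})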

Dependent Variables

Final Course Grades: Each course name (i.e., EAP 1640) represents the final course grades

that students received in each course. Final grades have been assigned the following point

values: A=4, B=3, C=2, D=1, and F=0. Each course number also gives information about the

course. The first number, either a "0" or a "1," represents whether or not a course counts for

credit ("1" indicates college credit). The second number, "2 through 6" indicates the level of a

course, 6 indicating the highest skill level. The final two numbers indicate the type of course;

00 = Speech, 20 = Reading, 40 = Writing, 60 = Grammar, and 81 = Combined Skills.
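
As a concrete illustration of this numbering scheme, the short helper below (hypothetical, and
not part of the study itself) decodes a course number such as EAP 1640:

    # Hypothetical helper illustrating the course-numbering scheme described above.
    SKILLS = {"00": "Speech", "20": "Reading", "40": "Writing",
              "60": "Grammar", "81": "Combined Skills"}

    def decode_course(course: str) -> dict:
        """Decode an EAP course number such as 'EAP 1640'."""
        digits = course.split()[-1]              # e.g., "1640"
        return {
            "college_credit": digits[0] == "1",  # leading "1" = college credit
            "level": int(digits[1]),             # 2 (lowest) through 6 (highest)
            "skill": SKILLS[digits[2:]],         # last two digits = skill type
        }

    # decode_course("EAP 1640") -> college credit, level 6, Writing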

Teacher Evaluation of Placement: During the summer of 2006, teachers were asked to rate

the placement of their current EAP students. They were asked to identify students as either

well placed or to recommend an alternate placement level. Using results from instructor

surveys, students were placed into 7 different levels. Levels 2-6 correspond with the levels









offered at Valencia, and levels 1 and 7 signify the teacher belief that a student is either under

or over prepared for Valencia's program.

Independent Variables

Locus of Control (LOCSCAL): This term stands for Locus of Control scale score. During

the summer of 2006, 470 students took the Trice Locus of Control inventory as part of their

student surveys. Scores ranged between 0 and 28.

Generation 1.5 (GN15CMPT): This term is a computed indicator of Generation 1.5 status.

An attempt was made to validate a survey measure of the variable through correlation with the

teacher judgment of the construct. During the summer of 2006, students and teachers took

surveys. Students were asked a variety of questions thought to relate to the construct as

discussed in the literature on Generation 1.5 students (See Chapter 2). Teachers were asked to

identify students in their courses who they thought were members of Generation 1.5.

Intercorrelations between student survey responses and teacher ratings of Generation 1.5

status as measured in the instructor surveys revealed that three survey questions demonstrated

small to medium correlations (Cohen, 1988) with teacher evaluation of Generation 1.5 status:

the grade in which students started school in the U.S. (r = -.35, p < .001), student age (r = -.26,

p < .001), and whether a student had gone to college outside the U.S. (r = -.26, p < .001). The

formula for this indicator identified students as Generation 1.5 if they met two of the following

three criteria: 1) the student started school in the U.S. before 10th grade, 2) the student was

younger than 20, and 3) the student had not gone to college in another country. This variable is

moderately correlated with teacher identification of Generation 1.5 status (r = .40, p < .001).
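
The two-of-three rule can be summarized in a brief sketch (the field names are hypothetical;
the criteria are the three stated above):

    # Hypothetical sketch of the GN15CMPT indicator described above.
    def gen15_indicator(us_start_grade: int, age: int, college_abroad: bool) -> bool:
        """Return True when a student meets at least two of the three criteria."""
        criteria = [
            us_start_grade < 10,   # 1) started U.S. school before 10th grade
            age < 20,              # 2) younger than 20
            not college_abroad,    # 3) no college outside the U.S.
        ]
        return sum(criteria) >= 2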

LOEP Reading (LORC): This term stands for the Levels of English Proficiency reading test

score, a score between 1 and 120. On this test, students read passages of 50 to 90 words and

then answer questions about their reading. They may read about the arts, science, or history.









Half of the test questions ask about information that is stated in the passage. The other half

asks students to identify the main ideas, fact vs. opinion, or the author's point of view.

LOEP Language Use (LOLU): This term stands for the Levels of English Proficiency

language use test score, a score between 1 and 120. This test is designed to measure the

students' understanding of English vocabulary. The sentences come from a variety of

different subject areas. Students are asked questions about basic and important idioms

(particularly terms of beauty, age, greatness, and size), adverbs such as before, after, and

during, and prepositions of direction and place.

LOEP Sentence Meaning (LOSM): This term stands for the Levels of English Proficiency

Sentence Meaning test score, a score between 1 and 120. Students are asked to fill in a blank

with a word or phrase or to combine two sentences. The skills covered are writing skills,

including the proper use of nouns and verbs.

LOEP Essay (LOES): This term stands for the Levels of English Proficiency Essay test

score, a score between 1 and 120. Students have 60 minutes to write an essay on a topic

provided by the administrator. Students are asked to organize their ideas carefully and to

present them in more than one paragraph. Trained readers at Valencia read each student's

essay and rate it from 1 to 7. This rating is then multiplied by 10 and added to 50, yielding a

final score between 60 and 120.

LOEP Average (LOEPAVG): This term stands for the computed average of the three

objectively scored LOEP subtests: LORC, LOLU, and LOSM. This represents the method

used to place students during the two-year period that essays were not read.









LOEP Average with Essay (LPAVGWE): This is the current methodology used at

Valencia Community College to place students. It is a composite score derived from the

average of LOEPAVG and LOES.
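
The arithmetic behind these composites is straightforward; the sketch below (the function names
are mine, but the formulas are those described above) shows the essay conversion and both
composite scores:

    # Illustrative sketch of the placement-score arithmetic described above.
    def essay_score(rating: int) -> int:
        """Convert a holistic essay rating (1-7) to the LOES scale."""
        return rating * 10 + 50              # a rating of 5 becomes 100

    def loep_average(lorc: float, lolu: float, losm: float) -> float:
        """LOEPAVG: mean of the three objectively scored subtests."""
        return (lorc + lolu + losm) / 3

    def loep_average_with_essay(loepavg: float, loes: float) -> float:
        """LPAVGWE: Valencia's current placement composite."""
        return (loepavg + loes) / 2

    # Example: subtest scores of 85, 78, and 92 give LOEPAVG = 85.0; an essay
    # rated 5 gives LOES = 100; the placement composite LPAVGWE is then 92.5.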

Limitations

A primary limitation of all studies identifying the predictive characteristics of tests has

been summarized by the College Board (2003), "There is no perfect measure of appropriate

course placement." There are a variety of reasons why students may not do well in courses

including prior experience, motivation, and background knowledge.

Another limitation of studies such as the current study is that the generalizations are

somewhat restricted due to institution-specific data. Unless other institutions have matching

populations and offer similar courses, the transferability of findings is limited. In this study, the

demographics of the population and course descriptions are reported in detail.

Another limitation deals with regression analyses in general. In regression analyses,

Schmidt (1971) suggested minimum case-to-predictor ratios ranging in value from 15-to-1 to 25-

to-1. Using these ratios, this research would require anywhere from 15 to 100 students per

regression. Nunnally (1978) stated that if there are only two or three independent variables and

no pre-selection is made among them, 100 or more subjects would provide a multiple correlation

with little bias. Harris (1975) recommended that the number of subjects be greater than 50 + the

number of predictor variables.
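
For concreteness, the sketch below (illustrative only) computes the minimum sample sizes these
three rules of thumb imply for a given number of predictors:

    # Illustrative sketch of the sample-size rules of thumb cited above.
    def min_cases(n_predictors: int) -> dict:
        return {
            "Schmidt, 15-to-1": 15 * n_predictors,
            "Schmidt, 25-to-1": 25 * n_predictors,
            "Nunnally (1978)": 100,              # flat minimum for 2-3 predictors
            "Harris (1975)": 50 + n_predictors,
        }

    # min_cases(1)["Schmidt, 15-to-1"] -> 15; min_cases(4)["Schmidt, 25-to-1"] -> 100,
    # matching the range of 15 to 100 students per regression mentioned above.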

Because it was unknown how many first semester EAP students would be taking courses

during the time period the surveys were being administered, it was not possible to identify

whether or not this research would have enough students to carry out reliable regression

analyses. Therefore, two studies were proposed. The first would use LOEP subtest and final

course grade data gathered on first semester EAP students attending Valencia for the three









academic years prior to the study. The second study would use data gathered from first time EAP

students responding to the surveys during the summer of 2006. As it turned out, only 131 first

semester students were in the survey group, which would have led to critically low numbers in

each course.









CHAPTER 2
REVIEW OF RESEARCH

Anecdotal reports from EAP instructors at Valencia have indicated that current placement

practices are not effective. If evidence can be established that teachers' reports are valid and

placement is ineffective, adjustments to placement procedures could be made now before

Valencia begins to experience problems related to the forecasted growth of ESL populations. The

Bureau of Economic and Business Research at the University of Florida projects a 10% increase

in Spanish speaking populations in Orange and Osceola counties (Valencia's feeder counties)

between 2005 and 2015. This population is expected to increase 17% by 2030 (BEBR, 2005).

However, identifying valid and reliable placement mechanisms for placing students into

language programs depends on properly operationalizing the construct or constructs being tested.

Language Competence

What exactly does it mean to be competent or fluent in the use of a second language, and

how is competence determined? Structural linguists from the 1940's and 1950's often viewed

language positivistically as a formal code that could be analyzed, taught, and tested. From their

perspective, discrete point tests could distinguish competent use of language. Views began to

change, however, in the late 1950's with Chomsky's (1957) distinction between competence and

performance. The concept was further analyzed by Troike (1969) as receptive and productive

competence and performance. Beliefs about language have continued to evolve over the past

quarter of a century through the works of individuals such as Dell Hymes (1972). Hymes

introduced the concept of communicative competence and argued that speakers of a language

have to have more than grammatical competence in order to be able to communicate effectively

in a language. Speakers also need to know how language is used by members of a speech

community to accomplish their purposes. These beliefs have been expanded and have evolved









further through the works of Oller (1979), Canale and Swain (1980), Bachman (1990), and

others.

Although Hymes introduced the concept of communicative competence, the work

conducted by Canale and Swain (1980) became canon in applied linguistics. According to

Canale and Swain (1980), communicative competence consists of four components: grammatical

competence (sentence structure/syntax), sociolinguistic competence (appropriateness of language

use), discourse competence (cohesion and coherence), and strategic competence (use of

communication strategies). A more recent study of communicative competence by Bachman

(1990) further expands the concept. In this view, communicative competence can be divided

into two aspects: linguistic competencies which include phonology and orthography, grammar,

vocabulary, and discourse; and pragmatic competencies which include functional,

sociolinguistic, interactional, and cultural competence.

In Bachman's view, the linguistic aspects of communicative competence are those

involved in achieving an internalized functional knowledge of the elements of a language.

Individuals who have phonological competence have the ability to produce the distinctive and

meaningful sounds of a language: consonants, vowels, tone and intonation patterns, rhythm

patterns, stress patterns, and all other suprasegmental features that carry meaning. Orthographic

competence is closely related to phonological competence; however, orthographic competence

describes an individual's ability to decipher and encode the writing system of a language. An

individual with grammatical competence has the ability to recognize and produce the distinctive

grammatical structures of a language and to use them effectively in communication. Individuals

with lexical competence have the ability to recognize and use words in a language in ways

similar to those of native speakers. This includes understanding the different









relationships among families of words, idiomatic (non-literal) expressions, and the common

collocations of words. Individuals with discourse competence have the ability to understand and

construct oral and written messages from various genres: narrative, expository, persuasive, and

descriptive. These individuals understand that different genres have different characteristics that

help maintain coherence and perform various functions.

In Bachman's view, the pragmatic aspects of communicative competence are those that

have to do with how language is used in communication situations to achieve the speaker's

purposes. Individuals with functional competence have the ability to accomplish communicative

purposes in a language, like greeting people or requesting assistance or information. Individuals

with sociolinguistic competence have the ability to interpret the social context of linguistic

utterances and to use language in socially appropriate ways for any communication situation. An

individual with interactional competence knows how to interpret and apply the unwritten rules

for interaction in various communication situations within a given speech community and

culture. These individuals can initiate and manage conversations and negotiate meaning with

other people while paying specific attention to body language, eye contact, and proximity.

Individuals with cultural competence have the ability to understand behavior from the standpoint of

the members of a culture and to behave in a way that would be understood by the members of the

culture in the intended way. In other words, these individuals use language appropriate to the

social structure of a culture and the values, assumptions, and beliefs of the people.

Although Accuplacer LOEP developers do not describe or conceptualize the language of

their tests in terms of Bachman's views of competence, the LOEP subtests currently offered at

VCC do tend to focus more on the linguistic rather than pragmatic aspects of communicative

competence. For example, because the test is written, test takers need some level of orthographic









competence to decipher and encode the writing system of English. Knowledge of grammar and

vocabulary is tested in that students are required to recognize and produce the distinctive

grammatical structures of English and to use them effectively in communication. In addition,

students need to recognize and use words in English in ways that are similar to the ways native

English speakers use them. One could argue that knowledge of discourse is another underlying

competency assessed in both the reading and writing tests because understanding of rhetorical

patterns would enhance students' abilities to interpret reading passages and organize essays.

Phonology and the pragmatic competencies, on the other hand, are not as readily

identifiable in the LOEP tests currently used at Valencia. Because Valencia does not give the

LOEP Listening test, students are not assessed on their ability to interpret and produce the

distinctive and meaningful sounds of English. This could be a major weakness considering the

fact that these LOEP tests are used to place EAP students into EAP Speech courses. In addition

to phonology, the LOEP Listening subtest could also add information about students' pragmatic

competencies; the test purports to be a measure that

measures the ability to listen to and understand one or more people speaking in English.
The conversations take place in academic environments such as lecture halls, study
sessions, a computer lab, the library, the gymnasium, and the like; and in everyday
environments such as at home, shopping, at a restaurant, at a dentist's office, listening to
the radio, reading the newspaper, and performing tasks at work. (College Board, 2004)

Assessment of Language Competence

Bachman's expanded view of communicative competence may give a glimpse at why

assessment of competence is so difficult. It is difficult enough to develop tests of discrete skills

of language, and performance of discrete skills is not necessarily an accurate indicator of

competence in language. Assessment of more integrative aspects of language is even more

complex.









According to the American Educational Research Association, American Psychological

Association, and the National Council on Measurement in Education (1999):

For all test takers, a test that employs language is, in part, a measure of their language
skills. This is of particular concern for test takers whose first language is not the language
of the test. Test use with individuals who have not sufficiently acquired the language of the
test may introduce construct-irrelevant components to the testing process. In such
instances, test results may not reflect accurately the qualities and competencies intended to
be measured. ... Therefore it is important to consider language background in developing,
selecting, and administering tests and in interpreting test performance. (p. 91)

Unfortunately, not all institutions have completely understood the importance of language

background and its effects on testing. Ortiz and Yates (1983) showed that Hispanic students were

over-represented by 300% in classes for the mentally retarded. Oller (1992) was not surprised by

this and added that this type of misdiagnosis may continue to go unnoticed due to what he calls

"monolingual myopia" which he contends has been prevalent for more than a century and still

pervades the American educational scene. In a study of within-group diversity of

disproportionate representation of ELL students in Special Education (Artiles, Rueda, Salazar, &

Higareda, 2005), it was found that ELLs identified by districts as having limited proficiency in

both their native language (L1) and English (L2) showed the highest rates of identification in the

Special Education categories investigated. These students were consistently overrepresented in

learning disabilities and language and speech disabilities classes. Furthermore, these students had

greater chances of being placed in Special Education programs. Other research has demonstrated

how ELLs are negatively affected by content based assessment measures (Abedi, 2006; Abedi,

Lord, Hofstetter & Baker, 2000; Abedi, Lord & Hofstetter, 1998; Abedi, Lord, Kim-Boscardin,

& Miyoshi, 2000; Abedi, Lord, & Plummer, 1997).

Because tests have been shown to misdiagnose second language students, one begins to

wonder if they can be used to accurately place these same students or predict their success in

certain courses. Some would say no. In 1990, Goodman, Freed, and McManus found that even

the Modern Language Aptitude Test (MLAT) was not an accurate predictor of success in foreign

language courses. They speculated that perhaps the failure of this test was the result of the fact

that the test measured discrete points of language ability but that language teaching was moving

toward integrative, holistic, approaches to language development.

In 1961, John Carroll suggested a distinction in language testing between discrete point

and integrative approaches. With his Unitary Trait Hypothesis, Oller (1979) posited that

language proficiency consisted of a single unitary ability. Oller himself later disconfirmed

aspects of his hypothesis, recognizing that "the strongest form of the unitary trait hypothesis

was wrong" (Oller, 1983). Some have contended that discrete point methods are either better than or at

least equivalent to integrative methods (Rand, 1972); however, tests of discrete points such as

syntactic rules have been shown to generate reliabilities in the range of .6 to .7 (Evola, Mamer &

Lentz, 1980) while tests that are more integrative in nature generate reliabilities of .8 to .9 (Oller,

1972). In this light, how can the support for unitary language ability found in Oller and Perkins

(1978, 1980); Oller (1992); and Carroll (1983) be explained? Does language testing, discrete or

integrated, reveal information about a single trait or divisible competence? Oller (1992) posited

that it is illogical to argue that tests that focus on particular rules of grammar will yield

equivalent results to tests that require integrated whole grammars.

There is support for each side of the argument as to which hypothesis is correct, unitary

language ability or divisible competence. Support for divisible competence includes Bachman

and Palmer (1983); Farhady (1983); Fouly, Bachman, and Cziko (1990); and Upshur and

Homburg (1983). Support for unitary language includes Oller and Perkins (1978, 1980); Oller

(1992); and Carroll (1983). However, Carroll might have summarized things best:

With respect to a unitary language ability hypothesis or a divisible competence hypothesis
I have always assumed that the answer is somewhere in between. That is, I have assumed
there is general language ability but, at the same time, that language skills have some
tendency to be developed and specialized to different degrees, or at different rates so that
different language skills can be separately recognized and measured. (p. 82)

Summarizing the research of Lynch, Davidson, and Henning (1988) and Oltman, Stricker,

and Barrows (1990), Oller (1992) suggests that in the early stages of second language learning

distinct dimensions of listening, writing, and reading ability may be observed and may even

resolve into further sub-component traits, but as learners progress to more mature, native-like

abilities in the target language, all factors tend to converge on one unitary trait.

Recent Research in Language Testing

Recent research into the predictive abilities of language testing has not yielded the most

informative results. Lee (2005) asked the question, "To what extent does the CEEPT

(Computerized Enhanced ESL Placement Test) predict international graduate students' academic

performance?" In his study CEEPT scores were given ratings of 1-4 and these levels were

correlated with GPA using Pearson product moment correlation coefficients. Lee found a

correlation coefficient of .052 for the overall sample between CEEPT scores and first semester

GPA. However, the direction and magnitude of correlation varied depending on the discipline.

For language-oriented disciplines, there were positive relationships, such as Business (r = .275) and

Humanities (r = .35). In contrast, there were negative relationships for non-language-oriented

disciplines, such as the Life Sciences (r = -.548) and Technology (r = -.213). Unfortunately,

although Lee discussed how his qualitative data complemented his results, he failed to offer an

explanation for the positive and negative correlations.
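
For illustration, the following minimal sketch shows the kind of Pearson product-moment computation Lee describes, here in Python using scipy; the ratings and GPAs are hypothetical stand-ins, not Lee's data.

    # Correlating placement ratings with first-semester GPA, as in Lee (2005).
    # All values below are hypothetical illustrations.
    from scipy.stats import pearsonr

    ceept_ratings = [1, 2, 2, 3, 3, 4, 4, 1, 2, 3]   # CEEPT ratings (1-4)
    first_sem_gpa = [2.1, 2.8, 3.0, 3.2, 2.9, 3.6, 3.4, 2.5, 2.6, 3.1]

    r, p = pearsonr(ceept_ratings, first_sem_gpa)    # Pearson r and its p-value
    print(f"r = {r:.3f}, p = {p:.3f}")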

Research on the strength of the relationships between self-assessment and language test

scores or abilities is mixed. Some studies have found moderate to strong relationships (Bachman

& Palmer, 1989; Clark, 1981; Coombe, 1992; Leblanc & Painchaud, 1985). However, others

have found weak relationships (Blanche & Merino, 1989; Moritz, 1995; Peirce, Swain & Hart,

1993). A meta-analysis by Ross (1998) of the validity of self-assessment as a means to predict future

language performance found weak to moderate links at best. In another study of second language

placement testing, Phakiti (2005) examined test takers' ability to predict success

based on answers to questions on an English placement test designed by the English Language

Institute at the University of Michigan. Phakiti's study found that, in general, participants tended

to be overconfident in their test performance. He believes that overconfident test takers possibly

stop engaging in cognitive tasks prematurely and under-confident test takers spend too much

time on a task that has already been successfully completed. Phakiti found that beginners

exhibited the poorest predictive ability, which supports Blanche and Merino's (1989) findings. This

could be further supporting evidence of distinct differences between beginners and more

advanced students on placement tests. Research investigating these differences could support or

refute Oller's (1992) suggestion that in the early stages of second language learning, distinct

dimensions of listening, writing, and reading ability may be observed and may even resolve into

further sub-component traits, but as learners progress to more mature, native-like abilities in the

target language, all factors tend to converge on one unitary trait.

Assessment for placement is a challenging endeavor. Developing and standardizing tests

that can accurately place students into a course, or a matrix of courses, is difficult at best. This

can be seen by very low correlations between placement test scores and student achievement in

courses. For example, the state of California requires that placement tests maintain at least a 0.35

correlation with course grades (College of the Canyons, 1994). The College of the Canyons also

noted that it was not reasonable to expect placement tests to be very strongly related to course

grades. Spann-Kirk (1991) concluded that students placed by advisors rather than by placement tests

achieved at comparable levels. Smith (1983) found that student scores on

placement tests were less significant in placing students than high school grade point average,

credit hours completed during a term, and age. Simply put, placement test scores alone may not

be the most effective way of placing students. There is therefore a clear need for additional

information to help in the placement decision-making process. In 1997, Culbertson found that

multivariate prediction models could be used to increase the predictability of placement. As a

result, this study also seeks to identify characteristics that can be used in this decision-making

process.
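
As a rough illustration of what such a multivariate model might look like, the sketch below combines several hypothetical predictors (two subtest scores and an LOC score) in a single linear regression on final course grade; the variable names and values are invented for illustration and are not drawn from Culbertson's data or from the present study.

    # A composite prediction model in the spirit of Culbertson (1997).
    # All scores and grades below are hypothetical.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Columns: reading subtest, essay score, LOC score (hypothetical)
    X = np.array([[106, 100,  8],
                  [ 92,  80, 14],
                  [ 84,  90, 11],
                  [110, 110,  5],
                  [ 75,  60, 20],
                  [ 95,  90, 10]])
    y = np.array([3.5, 2.7, 3.0, 3.8, 2.0, 3.1])    # final course grades

    model = LinearRegression().fit(X, y)
    print("coefficients:", model.coef_)              # weight of each predictor
    print("R^2:", model.score(X, y))                 # variance explained by the composite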

It has been suggested that learning a second language differs in many ways from

learning other subjects because, in addition to its learnable aspects, it is also socially and

culturally bound, which makes it a social event (Dörnyei, 2003). The social dimension may

explain why the study of L2 motivation was originally initiated by social psychologists. In the

first comprehensive summary of L2 Motivation, Gardner and Lambert (1972) considered the

motivation to learn a second language the primary force behind enhancing or hindering

intercultural communication and affiliation. Gardner's theory of "integrative" motivation laid the

groundwork for other theories: self-determination theory (Deci & Ryan, 1985), goal theory

(Tremblay & Gardner, 1995), and attribution theory (Weiner, 1992). According to Weiner, the

attributions of motivation are generally described in three dimensions: (a) Locus, (b) Stability,

and (c) Controllability.

Locus of Control

One variable that may offer important information about students is Locus of Control

(LOC). LOC is a psychological construct developed from the social learning theory of Julian

Rotter (1966) which refers to a generalized expectancy that people have regarding the degree to

which they can control their own fate. LOC is concerned with whether individuals attribute the

cause of something internally or externally. Individuals with an internal LOC believe that their

behavior is directly responsible for specific outcomes; internals believe they are the cause of the

outcomes. By contrast, individuals with an external LOC believe that their behavior and the

consequences are independent; they believe that events are controlled by luck, fate, and chance

or powerful others. Research findings have been quite consistent over the years suggesting that

students with an internal LOC were more likely to be successful in education than students with

an external LOC (Ford, 1994; Lao 1980; Ogden & Trice, 1986; Park, 1998; Shepard, 2006;

Trice, 1985; Trice, Ogden, Stevens, & Booth, 1987).

Recent research on Locus of Control has identified relationships between LOC and student

use of technology (Rita, 1997; Wang, 2005). However, with the exception of one recent study

(Estrada, Dupoux, & Wolman, 2005), there is a paucity of research investigating LOC scales

among language minority students. Estrada et al.'s study addressed the effects of LOC and other

predictors on the personal-emotional and social adjustment of community college ELLs. The

study found that LOC was significantly associated with social and personal-emotional

adjustment. Other research indicates that LOC may be sensitive to cultural differences; Reimanis

(1980) found that personal control was similar among individuals from comparable cultures.

In 1975, Rotter posited that more precise predictions could be made from LOC instruments

that were developed for specific behavioral areas than from generalized ones. The most widely

researched specific LOC scale was developed by Crandall, Katkovsky, and Crandall (1965). This

scale measures school children's perceptions of their control in achievement/academic situations.

Unfortunately, not many scales have been designed to be used specifically with college

populations; the two that have been used with these populations (Clifford, 1976; Lefcourt,

VonBaeyer, Ware, & Cox, 1979) have been described as giving short shrift to scheduling, class

attendance, and competing activities (Trice, 1985). Trice indicated that these scales focused

exclusively on effort and studying. His scale was purported to predict a wider range of relevant

college behaviors.

The LOC scale used in this study was developed by Trice to predict a wide range of

behaviors related to college students. It was designed to have high reliability and construct

validity with respect to Rotter's LOC scale (1966) and Smith's achievement motivation (1973),

while simultaneously having high predictive validity with respect to academic performance. It is

a 28-item, self-report inventory using a true/false response format. Low scores are associated

with higher GPAs and high scores are associated with lower GPAs. The inventory is designed to

measure beliefs in personal control over academic outcomes. The Kuder Richardson-20

reliability coefficient is reported at .70, indicating an adequate level of internal consistency. Also,

test-retest reliability over an interval of five weeks was .92, and discriminant and convergent

validity data seem to be adequate for research purposes (Trice, 1985). Furthermore, this LOC

scale has been shown to be predictive of such academic outcomes as class grades, class

attendance, extra credit points earned (Trice, 1985), freshman GPA (Ogden & Trice, 1986), class

participation, homework completion, and study time (Trice, Ogden, Stevens, & Booth, 1987).
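
To make the scoring and reliability machinery concrete, the sketch below scores a hypothetical 28-item true/false inventory and computes the Kuder-Richardson 20 coefficient. The response matrix is randomly generated for illustration only; random answers naturally yield a KR-20 near zero, whereas a usable scale such as Trice's reaches about .70.

    # Scoring a 28-item true/false LOC inventory and estimating KR-20.
    # The responses are random stand-ins; 1 marks an answer keyed in the
    # external direction, so higher totals indicate a more external LOC.
    import numpy as np

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(50, 28))   # 50 examinees x 28 items
    totals = responses.sum(axis=1)                  # each examinee's score (0-28)

    def kr20(items):
        """Kuder-Richardson 20 reliability for dichotomous items."""
        k = items.shape[1]
        p = items.mean(axis=0)                      # proportion keyed per item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

    print("mean score:", totals.mean(), "KR-20:", round(kr20(responses), 2))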

Generation 1.5

The term Generation 1.5 was first used in the late 1980s to describe students who fit the

description of neither first generation nor second generation Americans (Rumbaut & Ima, 1988).

Generation 1.5 students usually have come to the U.S. in their early teen or pre-teen years.

Often, they have attended U.S. schools, and many of these students have even graduated from

American high schools. While attending American schools, these students have had time to

acquire informal English. Many of them use American idiomatic expressions, and some may

even have American accents. Errors in their language are detectable, but they do not interfere

with understanding, and these students are comfortable speaking English and do so with relative

ease. Although these students develop oral fluency in English rather quickly, this oral fluency

often hides difficulties with academic English (Ruiz-de-Velasco & Fix, 2000). Their reading,

grammar, and writing skills are usually significantly below those of their college-ready peers

(Goen, Porter, Swanson, & VanDommelen, 2002; Harklau, 2003; Myles, 2002; Roberge, 2002).

Other academic skills, including critical thinking and general knowledge, are typically weak.

Due to their lack of familiarity with formal registers of language, Generation 1.5 students may

also lack the ability to discuss abstract concepts using appropriate grammatical or rhetorical

structure. Generation 1.5 students' limited development of academic literacy might be due to

prior lack of attention to problems and barriers that interfere with the students' abilities to

demonstrate what they know in writing (Grant & Wong, 2003; Harklau, 2003). Myles (2002)

suggested that a lack of prior instruction in writing for academic purposes could undermine

students' motivation for learning and create a negative attitude toward English.

Differences and difficulties

In many ways Generation 1.5 students are similar to native English-speaking college

preparatory students. Interruptions in many immigrants' schooling upon arrival in the U.S. often

produce gaps in the cultural and academic knowledge expected of college students that can take

several years to remedy (Spack, 2004). Unfortunately, two semesters in English-speaking

preparatory programs are insufficient to address the unique problems presented by Generation

1.5 students. Other research has shown that Generation 1.5 students' patterns of errors differ

from those of international students and, as a result, call for different placement testing

and instruction (Reid, 1997). According to Thonus (2003), many Generation 1.5 students have

lost or are in the process of losing their home languages without having learned how to write

well in these languages or use them academically. Therefore, teachers need to use different

teaching techniques for these students given that there are fewer first language skills on which to

scaffold new learning. Unfortunately, little has been done for this population given that the

teachers who most often work with Generation 1.5 students (community college faculty, graduate

students, and part-time instructors) are unlikely to have the background knowledge and the material

resources needed to carry out research and advocacy efforts for these students (Matsuda,

Canagarajah, Harklau, Hyland, & Warschauer, 2003).

Literacy can be expressed in many different forms: functional, academic, workplace,

informational, constructive, emergent, cultural, and critical (Wink, 1999). Swanson (2004)

suggests that lack of a specific type of literacy is another reason for the lack of college readiness

for Generation 1.5 students. Many high school programs have stand-alone ESL classes that teach

language in discrete lessons that emphasize functional literacy (i.e., reading and writing) rather

than critical literacy (i.e., understanding the social and political implications of written

knowledge) which is what is commonly needed for success in college (Swanson, 2004).

Academic preparation aside, the problem remains: Generation 1.5 students do not fit

perfectly into any of the traditional student categories, nor have they been a significant focus of

research on students' learning to write in ESL (Harklau, 2003).

Community colleges are seeing increasing numbers of Generation 1.5 students. As of

1999, some schools even began to report that Generation 1.5 students were forming the majority

of their second language students (Lay, Carro, Tien, Niemann, & Leong, 1999), and in 2002,

other schools reported these same findings (Blumenthal, 2002).

Generation 1.5 students complicate the issue of initial placement, given that these students

do not fit the mold of traditional EAP students or traditional college prep students. The question of where to place them

has been debated in the areas of ESL curriculum, program design, and placement, and is

reflected in the different methods and materials used from institution to institution (Harklau,

2000; Smoke, 2001). Harklau, Losey, and Siegal (1999) have noted that EAP pedagogy and

materials are geared toward students who have recently arrived in the U.S. as adults, often with

sophisticated educational backgrounds. If curriculum and instruction have been developed with

this type of student in mind, Generation 1.5 students placed into these EAP courses are clearly

mismatched. Many teachers feel that the curriculum designed for traditional EAP students is

often too slow for Generation 1.5 students, while the curriculum designed for preparatory

students is often over their heads. They would likely benefit from working with EAP

professionals, but if EAP classes are not a perfect fit for them, where should they be placed?

Valdes (1992) believes that it is necessary for secondary and post-secondary programs to

develop criteria to distinguish between students who need ESL instruction and students (like

Generation 1.5 students) who have problems with academic English but do not need ESL classes.

Valdes labels these two groups incipient bilinguals and functional bilinguals, respectively, and

suggests that functional bilinguals should be placed into mainstream courses but still be provided

specific instruction that allows them to work on the fossilized aspects of their second language.

Harklau (2003) makes a variety of suggestions for working effectively with Generation 1.5

students. The most germane to this research is that it is important to be aware of students' prior

academic literacy experiences because research has shown that high school students in low track

classes receive different kinds of instruction from those in higher tracks (Harklau, Losey, &

Siegal, 1999). For example, low track students focus more on substitution drills, dictation, short

answer, and writing from models while high track students are taught argumentative and

analytical writing and have experience writing research papers.

Program placement for Generation 1.5

Bergen Community College in New Jersey has an American Language Program (ALP)

that serves the same population as many EAP programs in Florida: college-bound, nonnative

English speakers. In the mid to late 1990s, Bergen began to notice a dramatic increase in the

number of English language minority students who were graduates of American high schools

and who did not fit the traditional student molds (Miele, 2003). Realizing that these students did

not fit neatly into the common three categories of college-ready students, preparatory students

needing remedial English, and ESL students, faculty at Bergen developed special courses to deal

with the Generation 1.5 students they labeled as crossover students. Students who had resided in

the U.S. for at least eight consecutive years were given the standard placement test and either

placed into preparatory English courses or college level courses based on their results. Students

with fewer than three years in U.S. high schools who were nonnative speakers of English and/or

students who had resided in the U.S. for fewer than eight consecutive years were assessed using

the Comprehensive English Language Tests (CELT). If these students demonstrated significant

ESL characteristics in their writing samples, had reading and sentence skill scores comparable

with 8th or 9th grade students, spent three or fewer years in American high schools, and were

nonnative English speakers predominantly exposed to another language at home, they were

designated as crossover students. These students were then advised to take specifically designed

crossover courses.
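
Stated as code, Bergen's screening rules might look like the minimal sketch below. The field names, boolean encoding, and threshold comparisons are assumptions made for illustration; the actual process relied on human judgment of writing samples and CELT results.

    # An illustrative encoding of Bergen's crossover criteria (Miele, 2003).
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        years_in_us: int                  # consecutive years of U.S. residence
        years_in_us_high_school: int      # (residence governs which test is given)
        nonnative_speaker: bool
        esl_features_in_writing: bool     # judged from the writing sample
        reading_at_8th_9th_grade: bool    # reading/sentence scores in that band
        other_language_at_home: bool

    def is_crossover(a):
        """Apply the crossover criteria described above."""
        return (a.esl_features_in_writing
                and a.reading_at_8th_9th_grade
                and a.years_in_us_high_school <= 3
                and a.nonnative_speaker
                and a.other_language_at_home)

    # A student with five years of U.S. residence, two in U.S. high schools
    print(is_crossover(Applicant(5, 2, True, True, True, True)))   # True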

Another program hosted in the General College of the University of Minnesota uses

principles of developmental education and content-based ESOL pedagogy to help Generation 1.5

students learn academic language (Moore & Christiansen, 2005). Their approach is believed to

be more effective than stand-alone ESL because language is often learned best in the context of

content area learning (Krashen, 1982; Zamel, 2004). Students are required to enter the program if

they have been in the U.S. educational system eight or fewer years, have a home language other

than English, and have test scores documenting a need for English support. Most of the students

in the program have been in the U.S. between two and eight years. This program has proven

successful in both retention and progression of students into degree-granting programs

(Christensen, Fitzpatrick, Murie, & Zhang, 2005).

The Need for Better Placement Procedures

There are currently 1,202 community colleges nationwide (AACC, 2007), and two-year

schools can expect nearly 20% growth by 2010 (Gerald, 2000). Cohen (2002)

estimated that more than half of the community colleges nationwide offered ESL/EAP programs.

In the 21st century, even more community colleges will be required to offer ESL instruction. One

of the major issues that will need to be addressed is proper student placement. For students to get

the most out of their post-secondary education, they will need to be accurately placed into

programs.

Unfortunately, as community colleges are scrambling to develop ESL/EAP programs,

there appears to be a tendency for each college to reinvent the wheel in terms of assessment.

Defining, measuring, and documenting the success of ESL/EAP students is a complex and

difficult task that has rarely been attempted outside individual institutions (Ignash, 1995), and no

two colleges seem to be using the same processes for placement and assessment. According to

Blumenthal (2002), procedures and policies for assessment and placement of EAP students differ

widely from college to college. In 2005, Florida's Division of Community Colleges and

Workforce Education conducted a survey seeking responses to questions about developmental

education in the Florida Community College System. This survey noted, "Clearly, there is not a

standardized assessment and placement process for ESL students in institutions across the

community college system." (Florida Department of Education, 2005). Table 2-1 below details

the findings regarding placement practices at community colleges across the state.

Table 2-1. Number of Florida community colleges employing common practices in the
placement of ESL students, with many colleges using multiple practices
Test or Practice Utilized Number of Colleges
CPT/Accuplacer College Placement Test 18
ACT/SAT 7
LOEP/TOEFL/English Placement Test/CELT 15
Use of writing sample 5
Consultation with an advisor 2
CASAS 4
TABE 2
No ESL course offerings/program 4
Note: Adapted from "Developmental Education in the FFCS"

While some colleges use holistic writing assessments, others use discrete point grammar

tests. While some colleges require students to enroll in specific classes based on assessment

results, others leave decisions up to students. This diversity is understandable given the varying

nature of colleges and the variable demographics and needs of the student populations.

It is not only in the realm of ESL where schools have felt testing pressures. In Florida in

the 1980s, minimum competency testing was established in the form of the College Level

Academic Skills Test (CLAST). Students had to demonstrate ability on this exam before they

could be awarded an Associate in Arts degree. Cut scores, or benchmark scores that indicate

whether or not students have successfully demonstrated performance of some skill on the

CLAST, were a contested topic, and scores were raised incrementally until 1992, when they

reached their current cut levels. Since January of 1996, further adjustments have led to the

recognition of waivers for all sections of the CLAST. Students can receive waivers if they earn a

2.5 or higher on related course work or if they meet state mandated criteria on the placement

tests as they enter the community college system. This adds to the importance of reliable and

accurate placement testing.

Another increase of testing pressures in the State of Florida occurred in May of 1997 when

legislation required each institution to set its own course requirements and to implement a


standardized, institutionally developed test using state-developed items to serve as an exam

needed to exit from preparatory programs. The state then developed various versions of a

blueprinted exam to be used for these exiting purposes. However, the state failed to mandate a

statewide cut-off score to be used with the exit tests. The Southern Regional Education Board,

which is a consortium of 16 southeastern states, has recommended the establishment of statewide

standards for student performance and for placement of students into college courses (Abraham

& Creech, 2002)

In addition, many schools are feeling increased pressure to place all students accurately. In

August of 1997, full implementation of a major change in placement testing was mandated by

the Florida legislature. In conjunction with Educational Testing Service, the state of Florida

adopted a placement test and one acceptable score statewide for students entering community

colleges. Students scoring below the cut-off are required to take and pass remedial courses in

each area of deficiency: English, reading, and mathematics (Smittle, 1996). Students taking these

remedial courses are required to pay identical fees, but do not earn credit toward graduation.

Accurate placement into these remedial programs has become very important because these

courses cannot be applied to graduation credits. In addition, these remedial programs have a

serious effect upon students, colleges, and the state because all three share the cost of these

programs. Florida's House Bill 1545, which went into effect in July of 1997, requires students to

pay "full instructional costs" after failing a first attempt in college preparatory courses. Another

funding problem introduced around the same time was that as of the 1999-2000 academic year,

community colleges would receive state funds based on program completions (Abstien, 1998),

whereas funding had previously been dependent on program enrollment. In spite of the

obvious challenges resulting from this change, one significant benefit was the shift in focus to

program outcomes (Banta, Rudolph, Van Dyke, & Fisher, 1996; Grunder & Hellmich, 1996).

Section 1008.30(1) (formerly 240.117) of the Florida Statutes K-20 Education Code

explains placement further:

The State Board of Education shall develop and implement a common placement test for
the purpose of assessing the basic computation and communication skills of students who
intend to enter a degree program at any public postsecondary educational institute (Florida
Statutes Title XLVIII, 2003).

The following subsection 1008.30(2) detailed that the mandated test would assess basic

competencies that are "essential to perform college-level work." Students not meeting the criteria

fall under a directive as outlined in 1008.30(4) (a):

Public postsecondary educational institution students who have been identified as requiring
additional preparation pursuant to subsection 1 shall enroll in college-preparatory or other
adult education pursuant to s. 1004.93 in community colleges to develop needed college-
entry skills... A passing score on a standardized, institutionally developed test must be
achieved before a student is considered to have met basic computation and communication
skill requirements.

This focus on placement and testing led to additional challenges. As of the 1997-98 academic

school year, the problems with placement criteria for entering freshmen at Miami Dade

Community College, one of the largest post secondary institutions in the state of Florida, reached

such a high level that a memorandum was issued by the school administration asking for

assistance in developing and enhancing initiatives that would guarantee student success. In this

memorandum, it was noted that approximately 72% of incoming students would require

placement in preparatory reading classes, and 57% would require placement in preparatory

writing courses (Padron, 1997).

When nearly 72% of an institution's student body requires remedial training of some sort,

there is clearly a problem. Increasing the number of preparatory courses seemed like a good

solution to meeting the needs of these under-prepared students. For example, the State of Florida

increased the number of levels for ESL students from three levels of preparatory English to six

levels of preparatory English. So as not to negatively affect county programs, Valencia adopted

only five of the six levels. Moreover, for an expanded preparatory program to function

properly, there is again a reliance on proper placement.

Program Models

In 1973, John Upshur discussed the educational context of language testing with examples

of different types of instructional programs. To illustrate the many problems faced by multilevel,

multi-skill, preparatory programs, an abbreviated version of his discussion follows. Upshur

started by discussing a simple program in which students enter the program, undergo instruction,

and then leave. A visual of this can be found in Figure 2-1. The two major problems with this

program model are that there is no indication of whether or not the program is appropriate for all

who enter and no way of knowing if the instruction offered is effective.

[Figure: Enter -> Instruction -> Exit]

Figure 2-1 Upshur's simple program model: This figure illustrates the simple program model
discussed by Upshur (1973). Students enter, receive instruction, and leave.

Extensions to the previous program model included adding tests at key decision stages.

First, an exit test was added to solve the question of whether or not instruction was effective.

However, this failed to identify whether the program was appropriate for all who entered. Next,

tests were given prior to entrance. This solved the program appropriateness question, but

introduced the problem of needing multiple equivalent versions of tests so that students would

not simply be retaking the same tests at entrance and exit from the program. Upshur's most

complex model (Figure 2-2) shows an example of a multi-level program with several different

types of placement decisions being made. In this model the initial decisions of admission and

placement in this program are made on the basis of a single test. If students are too low, they are

not admitted into the program. If they are able to demonstrate mastery of the objectives of the

courses, they are exempt. Those students who are at an appropriate level for the program can

then be placed into one of three levels of instruction based on their test scores. At the completion

of each level of instruction, students take an achievement test to determine if they can progress to

the next higher level. If they pass, they move forward. If they do not pass, they are given remedial

work based on their areas of weakness determined by scores on the achievement test. In this

program, rather than being tested again after remedial work, students are simply moved forward

to the next step of the program. Upshur noted, however, that it would be possible to test these

students again.

Upshur's program models were designed to show the range of testing choices/placement

decisions that need to be made in a program and to illustrate the complex assessment issues

involved from entrance through exit in a program. They have been used here as a basis for

explaining the importance of proper placement at Valencia.

Figure 2-2 Upshur's complex model: This figure illustrates the most complex program model
discussed by Upshur (1973). It is an example of a multi-level program with several
types of decisions being made.

As in Upshur's complex model, students at Valencia are placed into the program on the basis of a

test [Valencia's test actually includes four subtests]. However, while Upshur's model has three levels,

Valencia's has five. Furthermore, at Valencia each of the five levels is subdivided again by

language skills. Currently at Valencia, students are tested and placed into all skills at one level.

Some personnel at Valencia feel this is appropriate, while others believe students should be

placed into skills across levels based on individual subtest scores. This cross-level skill

placement based on individual subtest scores in each skill is also supported by The College

Board (2003).

A complete model of Valencia's five-level program is beyond the scope of this discussion.

The model in Figure 2-3, however, depicts what an improved model of Valencia's third level

would look like with these additional assessment points. If the current process is

efficiently placing students into appropriate skills and levels, some of the decision stages could

be avoided. Currently Valencia does not employ diagnostic and achievement tests at the

beginning and end of each skill course within each level. By enhancing the effectiveness of

initial placement practices, Valencia may never need to develop these other testing measures. It

should be stated here that it is not the intent of this study to analyze diagnostic and achievement

measures. This study seeks to analyze and enhance current placement practices. Valencia's

model has been added here to illustrate the number of decision stages and testing measures that

could be avoided if students are placed properly at the time of admission.

[Figure: Level 3 model with four skill strands (Reading, Writing, Speech, and Grammar), each
preceded by a confirming diagnostic/placement test. A score below 50% routes the student to
Level 2; a score between 50% and 90% keeps the student in the Level 3 skill course; a score
above 90% routes the student to Level 4 if prerequisites are completed.]

Figure 2-3 Possible improved model: This model illustrates the decision stages and types of tests
that could be avoided by enhancing placement practices at VCC.
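
A minimal sketch of the per-skill decision logic depicted in Figure 2-3 follows, assuming the 50% and 90% thresholds shown in the model; it illustrates the proposed flow only, not a test Valencia currently administers.

    # Routing logic for one skill at Level 3, following Figure 2-3.
    def level3_decision(score_pct, prereqs_completed):
        """Route a student based on a confirming diagnostic/placement test."""
        if score_pct < 50:
            return "go to Level 2"
        if score_pct > 90 and prereqs_completed:
            return "go to Level 4"
        return "stay in Level 3"   # 50-90%, or above 90% without prerequisites

    for skill, score in [("Reading", 42), ("Writing", 95), ("Grammar", 70)]:
        print(skill, "->", level3_decision(score, prereqs_completed=True))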

Current Placement Practices

Valencia Community College, like most other community colleges, has two tracks for

students who are not ready to take college-level reading and writing courses. Prep one and Prep

two courses have been developed to equip native English-speaking students with the academic

reading and writing skills they will need to survive in college. These skills include identifying

implied and stated main ideas; recognizing tone, bias, and fact or opinion; and understanding the

rhetorical structure of a five-paragraph essay. Students are often placed into or exempted from

these preparatory courses based on entrance exams, placement exams, or a combination of the two. Students

scoring above established cut scores on placement exams are admitted directly to English

composition; students scoring slightly under established cut scores are placed into Prep two; and

students scoring significantly below established cut scores are placed into Prep one.

At Valencia, most students are admitted to the school using one of the following

instruments: the SAT (Scholastic Aptitude Test), the ACT (American College Test), or the

Accuplacer CPT (College Placement Test). Prospective students of Valencia who do not have

satisfactory English and reading scores on the ACT or the SAT are required to take the state-

approved CPT, a computer adaptive placement test. Moreover, students who do not have recent

(within two years) ACT, SAT, or CPT scores are also required to take the CPT for proper

placement (VCC College Catalog, 2007).

All entering freshmen who do not score sufficiently well on the Accuplacer CPT (College

Board, 2003) for admission and who self-identify as nonnative English-speaking students are

given the Accuplacer LOEP exam (College Board, 2003). Prior to 2001, Valencia's cutoff scores

for LOEP placement were based on placement information contained in the LOEP Coordinator's

Guide, which outlines high, intermediate, and low levels. However, after a curricular switch from

three distinct levels of ESL to five distinct levels of EAP, a result of the state's initiative to

standardize community college level ESL programs in 2001, Valencia was forced to reevaluate

placement cutoff scores. It was mutually agreed upon by faculty and staff that the new cutoff

scores would be created by decreasing the intervals in the existing placement cutoff scores rather

than performing expensive and time-consuming statistical procedures to develop new cutoff

scores. Other institutions across the state simply looked at Valencia as a model and adopted

Valencia's new cut-scores. It was understood that this would increase error in placement;

however, program coordinators were still able to make placement decisions about individual

students based on a combination of results from both the objectively scored multiple choice

questions and subjectively scored essay elements of the LOEP test. In addition, diagnostic testing

was used at the beginning of each semester for each class. It was thought that if students were

misplaced at entry into the program, it could be addressed at the time of diagnostic testing.

Subsequent decisions served to increase the error in placement of students into the EAP

program at Valencia, thereby elevating the need to identify more accurate placement procedures.

In 2001, in the interest of one-stop registration, an administrative decision was made that the

writing sample of the LOEP would not be included as part of the entry assessment process for

nonnative English-speaking applicants, as it delayed their placement and registration. The five

ESL faculty members on the committee felt it pedagogically unsound to discontinue evaluation

of the writing sample in placement because it was the only direct, productive measure of English.

However, at the behest of personnel in the assessment office, supported by members of student

services, the committee agreed to a trial period of one year for this procedure, secure in the belief

that the departmental "diagnostic" exams, which at the time were given at the beginning of each

course, would allow students who had been improperly placed in EAP courses to move into more

appropriate courses.

Unfortunately, in 2002 course-based diagnostic testing and early movement of misplaced

students were eliminated as options for EAP coordinators. In addition, in a review of the

literature conducted by the author, it was found that Valencia was using the LOEP for placement

in a manner inconsistent with that recommended by LOEP developers (see The LOEP and Its

Recommended Use below).

Pilot Study

In 2003, in response to complaints by EAP instructors at Valencia about misplacement of

students, discussions began again about revamping the placement process. It was suggested that

it might be more appropriate to place students into the different language skill classes (reading,

writing, speech, and grammar) based on scores from the three objectively scored LOEP tests

(Reading, Language Use, and Sentence Meaning) and a holistically scored essay. To examine the

possible impact of these changes, a limited pilot study was conducted by the researcher. This

study was simply a post hoc analysis of how differently students would have been placed into levels if

individual subsections of the LOEP were used rather than the aggregate scores. In the analysis,

actual placement levels derived from the use of aggregate scores were compared to individual

scores on each aspect of the LOEP test for each student in the sample population. The researcher

(with the help of the Office of Institutional Research) obtained access to the placement test

scores of 1,052 students in Valencia's database from June 2002 through August 2003. The

analysis of differences between students' actual placement based on the aggregate score and their

possible placement based on each individual skill test score (i.e., their LOEP Reading test score,

LOEP Language Use test score, etc.) found that if students were placed into levels solely on their

LOEP Reading test scores instead of the aggregate of the three LOEP subtest scores, students

would place one level above or below their aggregate placement level 51% of the time. It was

also found that if students were placed into levels solely on their Sentence Meaning placement

test scores instead of the aggregate scores, students would place one level above or below their

aggregate placement level 49% of the time. And, it was found that if students were placed into

levels solely on the basis of their Language Use placement test scores instead of the aggregate

scores, students would place one level above or below their aggregate placement level 50% of

the time. Unfortunately, essays were not being evaluated during this time period, so Essay scores

were not included in this pilot study.

Although this pilot study revealed differences between placement levels derived from

individual LOEP scores versus aggregate scores, it did not identify which subtests best predicted

success. However, the study did provide decision makers with enough information to make a

change. It was decided that essay readers would be trained, and evaluation of essays would once

again be used in the placement process.
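
A minimal sketch of the pilot study's post hoc comparison follows: each student is placed by the aggregate of the three objective subtests and again by a single subtest, and the two levels are compared. The student scores are hypothetical, and the cut-score bands are the five-level ones detailed below in Table 2-2.

    # Comparing aggregate placement with single-subtest placement.
    def place(score):
        """Map a score to an EAP level using the five cut-score bands."""
        bands = [(66, 75, 2), (76, 85, 3), (86, 95, 4), (96, 105, 5), (106, 115, 6)]
        for low, high, level in bands:
            if low <= score <= high:
                return level
        return None   # below 66 (not admitted) or above 115 (exempt)

    students = [(106, 92, 84), (80, 88, 78), (99, 101, 97)]  # hypothetical (LORC, LOSM, LOLU)
    differs = sum(place(lorc) != place((lorc + losm + lolu) / 3)
                  for lorc, losm, lolu in students)
    print(f"{differs} of {len(students)} would place differently on Reading alone")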

Currently, students are placed according to a formula that was agreed upon by a committee

made up of faculty and decision makers from various departments at Valencia. The formula was

selected because it was easy to add the necessary fields to the database and was simple to

calculate. Furthermore, it gave counselors a number that they could compare to existing cut-

scores, and it satisfied teacher requests that a written sample of student language performance be

included in the placement of EAP students. However, prior to the current study, this method was

not empirically tested for its ability to place students accurately. Table 2-2 lists the instructions

and gives steps for placing a hypothetical student based on LOEP test scores.

The cut-scores Valencia employs only place students within levels. Students placed into

level 5 are required to take reading, writing, grammar, and speech at that level. This presents

problems when students score at different levels across the subtests. For example, looking at

the sample student's scores in Table 2-2, the student scored a 5 (which converts to 100) on the

essay, a 106 on the reading, a 92 on the Sentence Meaning, and an 84 on the Language Use.

Based on this formula, this student would be placed into level 5 for all language skills.

Table 2-2. Explanation and an example for placing students into EAP courses at Valencia.

Step 1: Student scores on the three objectively scored LOEP tests (Reading, Sentence
Meaning, and Language Use) are averaged.
    Example: LORC = 106, LOSM = 92, LOLU = 84; average = 94.

Step 2: A number is derived for the holistically scored essay. Trained readers read each
student's essay and rate it from 1 to 7. This number is then multiplied by 10 and added to
the number 50.
    Example: essay rating = 5; 5 x 10 = 50; 50 + 50 = 100; derived essay score = 100.

Step 3: The average of the numbers derived from steps 1 and 2 is used to place the student
into levels based on Valencia's cut-scores.
    Example: (94 + 100) / 2 = 97; student placed in Level 5.

Valencia Cut-Scores
Students scoring 65 or below are not admitted.
Students scoring 66-75 are admitted to level 2.
Students scoring 76-85 are admitted to level 3.
Students scoring 86-95 are admitted to level 4.
Students scoring 96-105 are admitted to level 5.
Students scoring 106-115 are admitted to level 6.
Students scoring 116 or higher are exempt.


However, if placed by individual scores, this student would be placed in level 6 for reading, level

5 for writing, and level 4 for grammar. Many of the faculty members at Valencia believe that

placement across levels is more appropriate than placing a student into all skills at one level.

Furthermore, none of the currently used subtests addresses productive/receptive speech/listening

skills. Therefore, some instructors feel a listening/speaking test should be added unless one of the

other LOEP subtests is found to be a reliable predictor of success in speech classes.
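
The following minimal sketch implements the Table 2-2 formula and, for contrast, the per-subtest placement many faculty members prefer. It uses the worked example's scores; the pairing of the essay with writing and the Language Use subtest with grammar follows the example above and is illustrative rather than an official mapping.

    # Valencia's current composite placement formula (Table 2-2)
    # versus placement by individual subtest scores.
    def to_level(score):
        if score <= 65:  return "not admitted"
        if score <= 75:  return "level 2"
        if score <= 85:  return "level 3"
        if score <= 95:  return "level 4"
        if score <= 105: return "level 5"
        if score <= 115: return "level 6"
        return "exempt"

    lorc, losm, lolu, essay_rating = 106, 92, 84, 5

    objective_avg = (lorc + losm + lolu) / 3        # step 1 -> 94
    essay_score = essay_rating * 10 + 50            # step 2 -> 100
    composite = (objective_avg + essay_score) / 2   # step 3 -> 97

    print("current formula:", to_level(composite))  # level 5 for all skills
    for name, s in [("reading", lorc), ("writing", essay_score), ("grammar", lolu)]:
        print("by subtest,", name + ":", to_level(s))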

The LOEP and Its Recommended Use


Currently at Valencia all students are required to take the Accuplacer CPT (College Board,

2003), a computer-based placement test. However, for EAP students, this exam is not used for

actual placement purposes. CPT scores are simply gathered and kept on record because it is a

requirement in Florida for community college students to take the CPT. Students who self-

identify as nonnative English speakers are required to take the Accuplacer LOEP (Levels of

English Proficiency Test). According to the LOEP Coordinator's Guide (2001), Valencia is

currently using the LOEP in a manner that is inconsistent with what is recommended by LOEP

developers. Skills tested by the LOEP are described below.

* LOEP Reading Comprehension (LORC): Students read passages of 50 to 90 words and
then answer questions about their reading. The reading passages are about a variety of
different topics. They may read about the arts, science, or history. Half of the test questions
ask about specific information that is stated in the passage. The other half asks students to
identify the main ideas, fact vs. opinion, or the author's point of view.

* LOEP Language Use (LOLU): This test is designed to measure students' understanding
of English vocabulary. The sentences come from a variety of different subject areas.
Students are asked questions about basic and important idioms, particularly terms of
beauty, age, greatness, and size; adverbs such as before, after, and during; and prepositions
of direction and place.

* LOEP Sentence Meaning (LOSM): Students are asked to fill in a blank with a word or
phrase, or to combine two sentences. The skills covered are writing skills, including the
proper use of nouns and verbs.

* LOEP Essay (LOES): Although Accuplacer offers a computer-graded writing assessment,
the LOEP Essay at Valencia is an essay exam that is graded locally using trained readers and a
holistic rubric. The holistic rubric used for grading the essay can be found in Appendix A.
Students have 60 minutes to write an essay on a topic provided by test administrators.
Students are asked to organize their ideas carefully and to present them in more than one
paragraph.

Additional LOEP tests are available but are currently not used by Valencia. They include:

* LOEP Listening: For this test, a committee of college faculty and other educators defined
the listening skills considered important for entry-level college students. Both literal
comprehension and implied meaning were included, and seven listening skills were
identified. Multiple-choice items were developed to measure the listening skills.

* WritePlacer ESL: This is a direct measure of student writing using prompts and rubrics
designed by ESL experts. Student essays are scored using the IntelliMetric artificial
intelligence system, a computer-graded system.

As noted above, students at Valencia are currently placed into one of five levels of courses by

averaging their essay scores with the average of the three subtests of the LOEP test. This score

is then compared with cut scores to place students into levels 2, 3, 4, 5, and 6 (again, the cut-

scores being 66-75, 76-85, 86-95, 96-105, and 106-115 respectively). Unfortunately, the cut

scores being used were never normed to Valencia's program or student population. The cut

scores were instead taken from the LOEP Coordinator's Guide and then manipulated to fit five

levels instead of the three levels they were originally designed for by decreasing the spread in

each cut score range. However, the LOEP Coordinator's Guide actually did make

recommendations [which led to the current research] as to how the LOEP should be used for

placement of students.

The three components of LOEP may be administered singly or as a group. We recommend
that institutions investigate which score combinations provide the greatest accuracy for
their curricula, and establish cut scores and placement mechanisms accordingly.
Particularly in the case of ESL, using individual test scores for placement in the various
topical areas would be a necessary part of establishing a placement system using LOEP.
Our purpose here, however, is to provide evidence that LOEP is valid... (College Board,
2001)

One can infer from these instructions that scores on the LOEP subtests should be used to place

students into different skills at different levels. For example, the reading test should be used to

place students into reading classes at different levels. If investigations at a particular institution

found that reading subtest scores also predicted placement into other courses, like speech, then

reading subtest scores could also be used to place students into those courses as well. One could

assume that the Language Use and Sentence Meaning subtests might predict success in writing

or grammar courses, but the College Board left it up to individual institutions to identify which

subtests or subtest combinations provided the greatest accuracy in placement. Valencia currently

places students into all skills at a single level regardless of differences in individual LOEP subtest scores.

Holistically Scored Essays

As mentioned above, holistically scored essays are currently being used as part of the

placement practices at Valencia. Valencia's essay scoring rubric can be found in Appendix A.

However, some administrators, in the interest of quicker placement testing, would like to return

to using only the objectively scored LOEP subtests for placement decisions, and this issue

remains a controversial one at VCC. An important benefit to using only the objectively scored

tests is cost; no expense would be incurred by paying readers to score the essays.

When it comes to placement into composition courses, it has been suggested that a timed

essay exam is the preferable placement measure if the only alternative is a multiple-choice test

(Garrow, 1989; Wolcott & Legg, 1998; Zinn, 1988). In addition, some studies have found that

placing language and ethnic minority students using only multiple-choice tests can be

problematic (Belcher, 1993; College of the Canyons Office of Institutional Development, 1996;

Garrow, 1989; Jones & Jackson, 1991; White, 1990). Moreover, timed essays have been found to be more

predictive of final grades in writing courses when combined with multiple-choice tests

(Cummings, 1991; Cunningham, 1983; Galbraith, 1986; Garrow, 1989; Isonio, 1994; Wolcott,

1996; Wolcott & Legg, 1998). Therefore, based on the research, one could conclude that

Valencia is doing the right thing by including both the essay and the objectively graded subtests

in its placement practices. Whether these practices are actually making a difference and

justifying the additional time and cost has yet to be determined.

CHAPTER 3
MATERIALS AND METHODS

Introduction

This research used survey data and data collected from Valencia Community College's

Office of Institutional Research to examine characteristics that would lead to more efficient

placement of students into EAP courses. It also sought to more accurately identify Valencia's

EAP student population on Valencia's three major campuses and elicit student and teacher

feedback about placement. It compared the predictive values of individual LOEP subtest scores

with two composite models of LOEP subtest scores. In addition, it analyzed whether or not the

variables of LOC or a computed indicator of Generation 1.5 status functioned to assist the

prediction of successful placement of students into EAP classes.

Study Setting

Valencia is a fairly large community college. According to Valencia Community College

Facts (2006), it is the third largest community college in the state of Florida with an FTE (Full-

Time Equivalent) of 21,227 students. Fifty-eight percent (58%) of the student body is female; the

national average is 59% female (American Association of Community Colleges, 2007). The

student body of Valencia's EAP program, which includes approximately 80 sections during the

summer and 120 during the fall and spring, is quite diverse, with students from different ethnic

and socioeconomic backgrounds as well as rural, urban, and suburban settings. Annual

enrollment cost at Valencia is slightly lower than the national average, at $2,100 per year as

opposed to $2,272. The average student age at Valencia is also lower than the

national average, at 24 and 29 respectively. On average, though, Valencia is similar to the other

1,202 community colleges nationwide. Because only limited demographic data could be gathered

through student records, more detailed demographic data were gathered through surveys to


inform the level of generalizability of the population. This information is reported in the Results

section.

Participants

Participants in the survey part of this study were all consenting students and instructors

taking part in EAP courses (levels 3, 4, 5, and 6) at Valencia's East, West, and Osceola campuses.

During the 2006 Summer A & C semesters, 470 students and 19 instructors participated in the

survey part of this research. With the assistance of Valencia's Office of Institutional Research,

EAP student placement and final course grade data were gathered for all of the survey

respondents. In addition, data were gathered for first time students in Valencia's EAP program

over the previous three years (2003-2006), yielding complete LOEP placement test scores and

final course grade information for an additional 1,030 students.

Materials and Data Collection

During the summer of 2006, the investigator visited and administered surveys in all regular

EAP courses offered during the summer A & C terms at Valencia Community College. These

surveys took place during the middle of the semester; teachers were asked to take an instructor

survey, and students were asked to take a student survey.

The questions on the teacher survey were intended to yield information about teacher

perceptions of placement effectiveness at Valencia. Questions 1-3 were Likert-type questions

asking teachers to: 1) rate how well Valencia does at placing students, 2) explain how often they

have students who they feel are misplaced, and 3) determine how many students were misplaced

during the semester of the survey. The fourth question was an open-ended question allowing

teachers to qualify any of their answers or make comments. Teachers were given class rosters

and asked to rate each current student as well placed or not well placed. If students were not well

placed, instructors were asked to provide a placement level for them. Teachers received only one









survey, but the survey contained class rosters for all of the classes they were teaching during the

summer semesters. The teacher survey can be found in Appendix B.

Teachers were also sent a description of Generation 1.5 students and another set of class

rosters for all of the courses they were teaching. They were then asked to read the definition and

indicate which students they felt were members of Generation 1.5. A copy of this survey has

been included in Appendix B. Questions on student surveys were intended to provide

information about demographics (questions 7-10, 12, & 24), academic history (questions 11 &

13-19), language use (questions 20-23), technical knowledge (questions 25-26), Locus of

Control (questions 27-54), and general feelings on placement (questions 55-58). A copy of the

student survey can be found in Appendix B.

To maintain consistency in the administration of the surveys, all surveys were administered

by the investigator, and the same introduction script was used in each class (See Appendix B).

Survey responses for both students and instructors were then entered into SPSS for analysis.

Procedures

The following represents the procedures for answering the research questions. Although

not officially addressed as a research question, the researcher was interested in presenting a

detailed description of the composition of Valencia's EAP population so that findings from this

study might be generalizable to other community colleges and universities in Florida and the

U.S. Student survey responses were entered into SPSS and the data were analyzed using

descriptive statistics to find frequency distributions and measures of central tendency. Results are

reported in Chapter 4.

One goal of this research sought to identify the predictive abilities of the LOEP subtests on

final course grades; therefore, it was decided that only first semester students would be included

in this part of the research because first semester students would have taken the LOEP subtests









immediately prior to attending their first courses at Valencia. However, because it was

impossible to know in advance the number of first semester students taking courses during the time frame of

the study, two studies were proposed to guarantee enough students for unbiased multiple

regression procedures. The first study used test data from 1,030 first time students in Valencia's

EAP program over the three-year period prior to the summer of 2006 (2003-2006). The second

study used all willing EAP students taking courses at Valencia during the Summer A and C

terms of 2006. However, only first-time students were used in the analyses. These studies were

conducted in an effort to seek answers to the research questions:

1. What are the student and teacher beliefs about placement at Valencia?

2. Which of the following three approaches best predicts student success as measured by final
course grades and teacher evaluation of placement: 1) Averaging the three objective LOEP
subtests, 2) Using an equally weighted average of both the objectively and subjectively
scored LOEP subtests, or 3) Using the four LOEP subtests as individual predictors?

3. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
final course grades?

4. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of
placement in EAP courses as measured by final course grades and teacher evaluation of
placement?

Teacher comments qualifying their answers about placement were also analyzed

descriptively following a simple method for coding qualitative data (Lofland & Lofland, 1995).

After teacher responses were gathered, they were transcribed yielding 132 distinct comments.

Each of the 132 comments was then run through an initial coding and focused coding process.

Each comment was coded with a classifying label that assigned meaning to individual pieces of

information within the token. The first pass through the data yielded 46 different codes. For









example, after reading the following sentence, "My opinion is that we should either offer a level

6 grammar class or reevaluate the standards by which students test out of EAP 1560," it was

initially coded "Advice." However, teachers made a variety of comments giving advice.

Therefore, on recursive passes through the data, this was given a focused code of "Advice on

courses." After initial coding, the other 46 original codes were reviewed in recursive passes

through the data in an attempt to eliminate less useful codes, combine smaller categories into

larger ones, and subdivide larger categories into more meaningful parts. Results are reported in

Chapter 4.

The first study used data from Valencia's Office of Institutional Research; complete LOEP

placement test scores and final course grade information for 1,030 students who attended

Valencia over the previous three years were entered into SPSS. Final course grades in each of the

skill courses (e.g., Reading) at each of the proficiency levels (EAP Levels 2-6) were used as the

dependent variable. Final course grades were weighted as follows (A = 4, B = 3, C = 2, D = 1,

and F = 0). For this research, withdrawals (W, WF, and WP) were not used. Separate regression

analyses were conducted for each course using each of the three competing models as a predictor

variable. When two models were found to significantly predict final course grades, an F-test was

conducted to compare the regression models.
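
To make this procedure concrete, a minimal sketch of the per-course analysis is given below in

Python (pandas and statsmodels); the original analyses were conducted in SPSS, so this is only an

illustration, and the data frame and column names used here (grade, LOLU, LORC, LOSM, LOES) are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Grade weighting for the dependent variable; withdrawals (W, WF, WP) are excluded
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def fit_competing_models(course_df: pd.DataFrame):
    """Fit the three competing placement models for one course's first-time students."""
    df = course_df[course_df["grade"].isin(GRADE_POINTS)].copy()
    df["points"] = df["grade"].map(GRADE_POINTS)
    # Model 1: average of the three objectively scored subtests
    df["LOEPAVG"] = df[["LOLU", "LORC", "LOSM"]].mean(axis=1)
    # Model 2: the Model 1 composite averaged equally with the essay score
    df["LPAVGWE"] = (df["LOEPAVG"] + df["LOES"]) / 2
    model1 = smf.ols("points ~ LOEPAVG", data=df).fit()
    model2 = smf.ols("points ~ LPAVGWE", data=df).fit()
    # Model 3: the four subtests entered as individual predictors
    model3 = smf.ols("points ~ LOLU + LORC + LOSM + LOES", data=df).fit()
    return model1, model2, model3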

The second study used current student/teacher data to run similar analyses and check the

predictive abilities of the three competing models. The current data also allowed for these

analyses to be run using both final course grades as a dependent variable and teacher evaluation

of placement as a second dependent variable. In addition, two new variables were analyzed for

their predictive abilities: Locus of Control and Computed Generation 1.5 status.









As discussed earlier, an attempt was made to validate a survey measure of the computed

Generation 1.5 variable through correlation with the teacher judgment of the construct. The

computed variable of Generation 1.5 status was found to be moderately correlated with teacher

identification of Generation 1.5 status, r = .40, p < .001.

In addition, to investigate whether students who were computed as Generation 1.5 differed

from students not computed as Generation 1.5 in their ratings by professors as being members of

Generation 1.5, a Chi Square statistic was used. Results indicated that students computed to be

members of Generation 1.5 were significantly different from non-members when rated as

Generation 1.5 by instructors, χ2(1, N = 470) = 75.12, p < .001. Students rated as Generation 1.5

in the computed model were more likely than expected under the null hypothesis to be rated as

Generation 1.5 by professors, compared with students who were not computed as Generation 1.5. Phi, which

indicates the strength of the association between the two variables, is .40 and, thus, the effect size

is considered to be medium to large according to Cohen (1988).
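
For readers wishing to reproduce this check, a minimal sketch in Python (numpy and scipy)

follows; the full 2 x 2 table of computed status by teacher identification is not reproduced in this

section, so the function accepts an arbitrary table, no continuity correction is assumed, and the

final line simply verifies the reported phi from the chi-square statistic and sample size given above.

import numpy as np
from scipy.stats import chi2_contingency

def chi_square_phi(table_2x2):
    """Chi-square test of independence for a 2x2 table, plus the phi coefficient."""
    table = np.asarray(table_2x2)
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    phi = np.sqrt(chi2 / table.sum())  # phi = sqrt(chi-square / N)
    return chi2, p, dof, phi

# Verifying the reported effect size from the statistics above:
print(np.sqrt(75.12 / 470))  # about 0.40, matching the reported phi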

The methodology for the second study was similar to the first study: separate regression

analyses were conducted for each course using each of the three competing models as a predictor

variable and final course grades as the outcome variable. When two models were found to

significantly predict final course grades, an F-test was conducted to compare the regression

models. The same procedures were used with teacher evaluation of placement as the outcome

variable. Finally, the two new variables (Locus of Control & Generation 1.5) were added to the

prediction models to test their predictive values. Results are reported in Chapter 4.









CHAPTER 4
RESULTS

Survey Results

Because only limited demographic data could be gathered from student records, more

specific demographic data were gathered through surveys to inform readers about the level of

generalizability of results based on Valencia's population. Other questions on the student surveys

sought to find answers to questions about academic history, language use, technical knowledge,

and Locus of Control. This section reports the summarized results of survey responses to these

types of questions. A complete list of results for the student and teacher surveys can be found in

Appendix C.

Survey respondents spoke 37 different languages and came from 67 countries. The top five

languages spoken at Valencia were Spanish, Creole, Arabic, Portuguese, and French. The top

five countries of origin were Colombia, Haiti, Puerto Rico, Morocco, and Peru.

On all three campuses, nearly half of the students surveyed were from Colombia, Puerto

Rico, or Haiti. In terms of major differences, however, most of the Haitian students attended the

West Campus; in fact, 28.85% of the West Campus population was Haitian as opposed to 4.63%

on the East Campus and 4.92% on the Osceola Campus.

In terms of gender, there were slight differences between campuses, but school wide 59.5%

of the respondents were female. The national average is 59% female (AACC, 2007). Of the

students surveyed, 89% were in their first, second, or third semester at Valencia. The ages of

survey respondents ranged from 17 to 59 with a mean of 26.86, a median of 23, and two modes,

19 and 21. More than 60% of respondents were below the age of 26.

The majority of survey respondents had been in the U. S. for five or fewer years, with the

mean number of years in the U.S. being 6.1 and the median and mode being five and three









respectively. Fifty-nine percent of survey respondents entered the U.S. K-20 system at the

college level; only 41% reported having attended U.S. K-12 schooling. Appendix C also contains

information on year of graduation or GED completion.

In addition to demographic questions, students were also asked questions about academic

history, language use, technical knowledge, and Locus of Control. When asked, "Are you the

first person in your family to go to college?", 72.26% reported that they were not, with 50.51%

of those reporting that their siblings/cousins had gone to college and 41.02% reporting that their

parents or their parents' siblings had gone to college. When asked, "Are you the first person in

your family to go to college in the U.S.?", 62.5% reported that they were. Of those who were not

the first in their families to go to college in the U.S., 82.31% reported that someone from within

the same generation, i.e., a sibling or cousin, had been the first to go to college in the U.S. Of

survey respondents, 39.22% reported having gone to college outside the U. S.; many of these

respondents also reported having spent more than two years in colleges outside the U.S.

In addition to prior educational experience, students were asked questions about how often

and how well they used English. Most students rated their abilities to write papers and do

research in English as average. The majority reported that, among the friends and peers with

whom they did things every week, only a few were native English speakers, but most

reported using English most of the time to speak with their friends. The majority also reported

that their families did not often use English in the home. In response to the two questions about

computers, 96% of respondents reported having a computer at home, and the majority of

respondents rated their abilities to use the computer as above average or expert.

Finally, the complete descriptive statistics for student results on the Trice Locus of Control

Index can be found in Appendix C. However, for the sample of 380 EAP students, scores ranged









from 1 to 20 with a mean score of 8.83 (SD = 3.44). Trice's original study (1985) looked at two

sample populations: 107 sophomore and junior teacher education majors, with a mean score of

12.46 (SD = 4.32) and 82 freshman general psychology students, with a mean score of 13.22 (SD

= 4.92).

Question 1

1. What are the student and teacher beliefs about placement at Valencia?

Student opinions

This section reports the results of analyses of student survey data eliciting students'

opinions on how well Valencia is doing at placing them into the courses they need. When

students were asked about their beliefs on placement, 21% felt that they had been misplaced.

Teacher opinions

In the teacher surveys, teachers were first asked to select a word that best describes

Valencia's accuracy at placing students into the EAP courses they require. The choices were: (1)

Poor (2) Below Average (3) Average (4) Above Average (5) Excellent. The general consensus

was that Valencia does an average to above average job at placing students. Fourteen of the 19

teachers surveyed described Valencia's accuracy as "Average"; the remaining five described

Valencia's accuracy as "Above Average." Survey responses had a mean of 3.26 and a median of

3.

Teachers were then asked to comment on how often they have students in their EAP

classes that they feel might be better placed in a different level. The response options were: (1)

Never (2) Rarely (3) Sometimes (4) Often (5) Every Semester. Fourteen of the 19 respondents

noted that students were "Sometimes" misplaced. One instructor responded that students were

"Often" misplaced, and the remaining four responded that students are misplaced "Every

Semester" Survey responses showed a mean of 3.47 and a median of 3.









Teachers were also asked to comment on how many of their students they felt should have

been placed in a different level during the semester in which the surveys were being conducted.

Their choices were: (1) None (2) A few (3) Several (4) Most (5) All. Two respondents indicated

that "None" of their students should have been placed differently. Thirteen respondents indicated

that "A few" should have been placed differently. Three respondents indicated "Several", and

one respondent indicated "Most." Survey responses showed a mean of 2.16 and a median of 2.

Analysis of open-ended teacher responses

Finally, teachers were asked to provide any comments that qualified or explained their

responses. Of the 19 participating teachers, 17 chose to add comments to their surveys. The

coding of those comments produced 112 statements classified into eight major categories, each

containing between one and five subcategories. The eight major categories were: 1) comments

giving advice, which comprised 19.64% of the useful tokens; 2) comments about speech courses,

15.18%; 3) general comments on misplacement, 14.29%; 4) comments about Generation 1.5

students, 13.39%; 5) comments on teaching problems, 11.61%; 6) comments on the LOEP,

8.93%; 7) comments about general placement procedures, 8.93%; and 8) other placement

comments, 8.04%.

Advice

The advice category consisted of teachers giving advice on courses, teaching,

placement, and pre/co-requisites. In giving advice on courses, teachers commented on the

possibility of creating new courses, "My opinion is that we should either offer a level 6 grammar

class or reevaluate the standards by which students test out of EAP 1560." Others wanted to

"combine EAP 1560 with EAP 1540." Still others noted that courses should be made optional,

"Make EAP 1500 optional and keep only EAP 300 and 400." Advice on teaching yielded a few

comments on using the labs to help students catch up and using existing Prep English resources.









Advice on placement included ideas such as placing by skills rather than by levels and using

specific parts of the LOEP to help make decisions about placement into specific courses, "If

students score into level 6 but have a weak U [LOLU] score on the LOEP, 1560 could be part of

their mandate." Some teachers commented on the need for trained counselors to make decisions

about placement, while one teacher suggested adding a different type of test altogether: "Where I

taught in Oxford they used a 1-page CLOZE test as the only placement tool." There were only

two contributors to comments about pre/co-requisites, but one teacher felt quite strongly about

creating a gate at level 5, "Students should stay in level 5 until language basics are mastered."

Another teacher made comments about limiting the movement from campus to campus while

taking pre-requisites for classes, "It is OK to take 0340 and 0360 on West, pass the classes, and

then take 0440 and 0460 on East, but it is not okay to take 0360 on East and 0340 on West."

Furthermore, "They [the students] should not be allowed to skip a couple of semesters and not

take the prerequisite."

Speech courses

The speech category revealed a variety of problems with placement into Speech courses

within the program. The instructor quote that best summarized this issue was, "Speech is where

all battles begin!" Another respondent said, "I believe misplacement often happens in speech

classes." Comments were also made revealing that Speech may not be the only area with

problems, "I have had A students in Speech that have not passed Reading and vice versa." One

instructor posited that the reason for these troubles is that "There is no Speech component to the

LOEP, and although we have started to read LOEP essays, I don't think the process mirrors the

intensity of the curricula." If this is the case, it could explain why another instructor said, "There

are at least 3 people in my classes now that would have done OK in the 1500 level (a higher

level)." However, students are not always misplaced into classes that are too low for them.









Another teacher commented, "For my speaking class there is one girl that could use a lower level

in speaking, but I had this girl in grammar 360 last semester, and she was one of the best."

Another instructor said, "In the past, I have had students in EAP 1500 who I could barely

understand. Then on the opposite side, I have had students in the EAP 300 level that could have

done OK in a higher level." Some teachers believed that speech may not be necessary for all

students, "It seems that if academic speech is all a student needs, they would be better placed in a

regular speech class or a prep speech class." "I have quite a few students in 1500 who have no

accent and who could make it in a regular college speech class." Others placed blame on the

difficulty of giving diagnostics in speech courses, "It's difficult to evaluate speech on the first

day of class like other subjects," while another found that even existing diagnostics currently are

not working, "I gave a diagnostic exam in 1500, and the results showed that no students should

have been placed higher. However, regarding the oral production of some students, I think

they've been misplaced." Some instructors had issues with the course curriculum, "A few of my

1500 (and 400) students don't need pronunciation work, and a very few of them (not this term)

don't even need listening work." And another commented, "The linguistics section is not

relevant to them." Finally, not all comments about speech courses were negative, "...they ALL

need to learn how to produce an academic speech," and "They do benefit by learning to take

notes on the lectures and they learn the components of a good speech."

Misplacement

Teachers had a variety of things to say about the misplacement of students into their

courses. "I have a number of students in my current classes whom I feel should have been placed

in a different level. About 25% to 30% of my students this semester would have benefited more

from another level." Another teacher went on to say, "Last semester I had a handful of students

who probably should have been in a lower level course than they were in. As a result, at least









partially, they struggled through the courses (and did not pass them)." Some teachers had clear

beliefs about the misplacement phenomenon. "I believe that students who are misplaced are

more frequently under-prepared than over-prepared. That is, most misplaced students belong in

a lower level, not higher." This idea was supported by others. "Sometimes we get students whom

we feel should have been placed in a lower level." "I had some... in level 3 who really belonged

to level 2, but we didn't have level 2, so they were placed in level 3." However, this was not

always the case. "Sometimes we get students that just seem way beyond the level of EAP," and

"The other one, I didn't know why he was in third level. His abilities seemed higher." One

teacher noted that it's not placement that is the problem, "9 times out of 10 they have come to us

by being promoted through the levels." Another teacher suggested why the misplacement leads

to problems. "There are some students who substantially lack a high enough proficiency level to

even understand instructions or a particular task. Language comprehension gets in the way." And

while one teacher mentioned what could be considered obvious issues with misplacement, "What

I have noticed is that some are placed in this level because of poor oral skills and others because

of poor writing skills," another made comments one wouldn't expect, "(We shouldn't be)

allowing students to exit level 5 without passing the final exams." Another felt that, "The

students who believe that they themselves are misplaced are often the students who aren't" while

another pointed out why we may not hear about student perspectives regarding misplacement, "I

have found that students who are misplaced are often gracious and don't complain about the

placement."

Generation 1.5

Generation 1.5 also yielded a healthy percentage of comments from teachers: "Once again,

the problem arises with 1.5s; all other students are placed right." Some commented on how

Generation 1.5 students felt, "They (1.5s) were confused why they were in EAP" while others









commented on the reasons these students needed to be in the courses, "Writing, it's tough. They

(1.5s) think they don't belong to their assigned levels because they were good at it in high school

but then they can't pass or barely pass the class." "I bet half of the class thinks they should be

moved to the next level and again the problem is with 1.5s-fluent, American accent, good

vocabulary, but no structure: Can't make complete sentences, most verbs are missing, etc."

Others made comments about why Generation 1.5 students were in EAP courses, "As not having

a listening component to the LOEP, Generation 1.5 students are placed into 1500 especially often

[, usually] not necessarily needing the course." Some teachers asked questions while others made

recommendations for how to deal with this population, "Should there be separate classes for

1.5ers?" "Some teachers think they are bored or misplaced and should be moved to a higher

level." "Maybe we should have 1.5s in level 5, 6 and send them straight to prep classes."

"Combine 1560 with 1540. It might be more meaningful for 1.5s." There was, however,

consensus that teaching Generation 1.5 students had its difficulties, "They are shocked the way

we teach structure directly and sometimes they struggle with the method more than grammar

itself." "Most 1.5s are bored in grammar classes but rarely do they improve their grammar skills.

"Level 5 is the hardest of all to teach for us and to take for them: They are bored and we can't

(or it's hard to) improve their speech skills." "I don't see much progress with 1.5s in EAP 1500."

Teaching problems

When it came to problems with teaching, one instructor noted, "...I think my greatest

difficulty as a teacher is to teach the necessary skills in a way that reaches all of the students."

Another expressed her belief in what happens when students fail to connect with her or the

content, "There are 2 people in particular I can see getting bored." Others didn't see the content

as the issue. One teacher responded that "...it is easy to cover material, but not so easy to

diagnose why different groups of student don't understand it, and how to reach the different









groups." Another faculty member added to the difficulties of teaching to students with a wide

range of ability levels, "It is uncomfortable-and sometimes embarrassing-to have completely

native [English] sounding students in classes. I don't feel I am necessarily meeting their needs."

A different teacher felt that the problems may stem from the diversity in students'

preparedness/needs, "The problem I have with writing is high school grads know essay

organization but have problems with grammar and mechanics. On the other hand, other students

aren't familiar with any organization or mechanics or sentence structure. It's hard to balance

between two groups!" While one teacher commented on how EAP students are simply a difficult

population to teach to, "Fossilization of mistakes is a major problem in adult ed," another

suggested that the students simply didn't care, "When I point out grammar errors to them, I could

be speaking Dutch as far as they are concerned." Some teachers believed that what could be

considered problems of placement were actually problems with the inability to move people, "(at

my other school) it was easier to move people around during the first couple of weeks of term.

The problem here is more that [sic], once misplacements are identified it's hard to change it,

especially if that student passed the lower level and got promoted." This instructor went on to

say, "Last year I had a 1540 who wrote not only like a NS [Native Speaker] but like a very good

writer who was a NS. However, I couldn't get her exempted from 1640 because she bombed the

state exit multiple choice test."

LOEP

In terms of LOEP placement, one instructor revealed a lack of knowledge about the test

and how students are placed into the program, "I have never seen the instrument that was used-

or is used- to place them." Another instructor seemed to have quite a bit to say about the

problems using the LOEP for placement:









First of all, I think the LOEP and the curricula are out of line [alignment]. In addition, the
writing given at placement is too short to be of consistent value. Holistic training is
haphazard college-wide. Inter-rater reliability is not consistently conducted. The absence of
all of these controls weakens the use of the LOEP essays. In terms of reading, I don't know
that the LOEP accurately reflects the type of skills being taught at the upper levels of EAP
courses. In this case, I think the placement test is more rigorous than the exit tests.

Other instructors gave different reasons for the problems with the LOEP and placement. "Some

reasons are averaging the LOEP scores, which can cause students to place too high in one area or

too low in another." Another appreciated that LOEP essays were once again being read,

"Anecdotally anyway, students seem to be better placed since we began reading LOEP essays.

However, because we don't have placement by skill, some students are still in classes they may

not actually need."

Placement in general

Not all of the comments about Valencia's placement of students were negative; some

comments revealed that there are students being properly placed. "I believe that most students

are well placed within the EAP program." In this instructor's opinion, the reason for this is, "The

testing instruments do an excellent job, and the readers help to confirm the placement." Another

instructor also commented about the importance of reading a sample of student work, "For the

one gentleman, as soon as I saw his writing, I new [sic] he was placed right." In one teacher's

eyes, Valencia is comparable to other schools: "I don't think we're doing a better or worse job of

placement than anywhere else." However, this instructor did go on to reveal that "I don't feel

that any of the students are placed too low even if they are more advanced than the other students

in the class. They still have to make adjustments in their knowledge." Another teacher qualified

his positive comments about placement, "If this had been done last semester, it would have been

much easier to answer question 3 (the number of students that should have been placed in a

different level); at least at the present my students seem to be in the right place." He went on to









say, "In reference to question #3, I was, at first, concerned of [sic] a few of my students.

However, after careful consideration, I realized that their achieving 75's hardly constitutes

struggling through the courses. I believe my students are in the right classes this time around."

Other placement comments

The last category consisted of comments about students and placement across

language skills. For example, students might write like a NS but do badly in reading, or

vice versa, or they might speak like a NS but be unable to write. When it came to writing, one teacher

commented, "Overall, most of my EAP 1640 students seem to be in the right place for their

current skill level. However, there always seems to be one or two students who are much below

the required skill level." Others suggested exactly what those missing skills might be, "For

example, several students in 1640 have never attended a grammar class." Another teacher

showed agreement with this by stating, "I have, however, noticed that some of my level 6

students who have placed directly into level 6 lack the grammar and sentence structure skills

necessary to be successful in Advanced Comp. for nonnative speakers." In terms of placement

into grammar courses, however, there was an altogether different take, "Grammar: I never had

grammar students misplaced." In reading, teachers made the following comments. "In EAP

1520, even though many speak well, none of them are reading totally on grade level. I consider

they all need 1520." "For reading, most of the people can benefit from that class." Finally, one

instructor mentioned how her belief that no students were misplaced was later disproved by

students exempting the next level. "Reading: I haven't had any students misplaced; again, I had

some who passed the exemption test at the end of level 5."









Questions 2-5

This section reports the results within the two studies designed to answer the remaining

research questions.

2. Which of the following three approaches best predicts student success as measured by final
course grades and teacher evaluation of placement: 1) Averaging the three objective LOEP
subtests, 2) Using an equally weighted average of both the objectively and subjectively
scored LOEP subtests, or 3) Using the four LOEP subtests as individual predictors?

3. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
final course grades?

4. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of
placement in EAP courses as measured by final course grades and teacher evaluation of
placement?



Study 1

The first study used an existing database of 1,030 first-time EAP students over the past

three years. Multiple regressions were conducted to compare the abilities of three competing

models at predicting success in EAP courses as measured by final course grades. The first model

considered used only the average of the three objectively scored LOEP subtests: Reading

(LORC), Sentence Meaning (LOSM), & Language Use (LOLU). The second model used a

composite score computed by averaging the first model with the LOEP Essay Score. The third

model considered used the four individual LOEP subtest scores as independent variables. The

means, standard deviations, and intercorrelations can be found in Table 4-1.













Table 4-1. Means, standard deviations, and correlations for final course grades and predictor
variables

Level/Skill  Variable   N     M      1      2      3      4      5      6
2 Combined   EAP0281     66   2.91   .23    .21    .24    .04    .16   -.02
3 Speech     EAP0300     96   3.42   .24*   .12    .10    .31**  .09   -.16
3 Reading    EAP0320     95   2.91   .16    .03   -.03    .36** -.03   -.14
3 Writing    EAP0340     74   2.82  -.14   -.09   -.03    .05   -.34**  .08
3 Grammar    EAP0360     81   2.79   .03   -.18    .14    .04   -.13   -.19
4 Speech     EAP0400    179   3.07   .32**  .11    .17*   .35**  .08   -.21**
4 Reading    EAP0420    184   2.56   .19*  -.03    .06    .22**  .09   -.21**
4 Writing    EAP0440    157   2.44   .18*   .08    .12    .17*   .07   -.12
4 Grammar    EAP0460    157   2.54   .13    .08    .10    .11    .05   -.07
5 Speech     EAP1500    226   3.08   .16*   .07    .05    .16**  .12   -.08
5 Reading    EAP1520    237   2.56   .06   -.10   -.13    .25** -.02   -.15*
5 Writing    EAP1540    194   2.66   .02   -.07    .04    .05   -.07   -.09
5 Grammar    EAP1560    202   2.61   .11   -.10    .06    .14*   .01   -.19**
6 Reading    EAP1620    213   2.60   .20**  .04    .06    .24**  .07   -.09
6 Writing    EAP1640    179   2.66   .07    .21**  .17*   .07   -.09    .16*

Predictor Variables     N      M      SD     2      3      4      5      6
1. LOEPAVG             1030   95.91  14.32  .89**  .91**  .85**  .90**  .52**
2. LPAVGWE             1030   96.15  11.78         .83**  .72**  .81**  .86**
3. LOLU                1030   92.41  17.47                .63**  .77**  .52**
4. LORC                1030   95.52  15.94                       .65**  .36**
5. LOSM                1030   99.80  14.95                              .49**
6. LOES                1030   96.39  12.66

**Correlation is significant at the 0.01 level (2-tailed). *Correlation is significant at the 0.05
level (2-tailed).


Because individual regressions needed to be conducted for each model in each of the 15

courses in the EAP program, the process of presenting the regression results is somewhat

lengthy. Table 4-2 presents a summary of the performance of the three competing models.

For EAP 0281 (Combined skills at level 2), none of the competing models were

significantly able to predict success as measured by final course grade: Model 1 (Average w/out

essay) R2 = .05; F(1,64) = 3.58, p = .06; Model 2 (Average w/ essay) R2 = .05; F(1,64) = 3.07, p

= .08; Model 3 (individual subtests) R2 = .09; F(4,61) = 1.49, p = .22.









Table 4-2. Summary performance of all competing models in study 1
Subtests of Model 3 Significantly
Contributing to the Model
Lvl Skill Course Model 1 Model 2 Model 3 LOLU LORC LOSM LOES
2 Combined EAP0281
3 Speech EAP0300 X* X X
3 Reading EAP0320 X X
3 Writing EAP0340 X X
3 Grammar EAP0360
4 Speech EAP0400 X X* X
4 Reading EAP0420 X* X
4 Writing EAP0440 X
4 Grammar EAP0460
5 Speech EAP1500 X
5 Reading EAP1520 X X X
5 Writing EAP1540
5 Grammar EAP1560
6 Reading EAP1620 X* X X
6 Writing EAP1640 X X* X X X
* Indicates preferred model when two or more models both significantly predicted success.


For EAP 0300 (Speech at level 3), Models 1 and 3 significantly predicted success in the

course as measured by final course grade: F(1,94) = 5.65, p = .02 for Model 1 (Average w/out

essay) and F(4,91) = 2.55, p = .045 for Model 3 (Individual subtests). However, in the third

model, LOEP Reading was the only variable significantly contributing to the prediction. Model

performance and beta weights for the models are presented in Table 4-3. The R-squared values

for the significant competing models are .057 and .101 respectively for Models 1 and 3. This

indicates that 5.7% and 10.1%, respectively, of the variance in final course grades in EAP 0300

was explained by the models. According to Cohen (1988), this is a small effect for both models.

The adjusted R-squared values are .047 and .061 respectively for Models 1 and 3. An F-test was

used to test if the reduced model, Model 1, performed as well as the full model, Model 3.

Because the R2-change was not significant, F(3,91) = 1.48, p = .224, it is assumed that the

reduced model performed as well as the full model.
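
The R2-change F-test used throughout this chapter can be verified directly from the reported

values; below is a minimal sketch in Python, with the EAP 0300 figures as a worked check.

def r2_change_f(r2_full, r2_reduced, n_added, df_full):
    """F statistic for the change in R-squared between nested regression models:
    F = ((R2_full - R2_reduced) / n_added) / ((1 - R2_full) / df_full)
    """
    return ((r2_full - r2_reduced) / n_added) / ((1 - r2_full) / df_full)

# EAP 0300: Model 3 (full, R2 = .101) vs. Model 1 (reduced, R2 = .057),
# with three added predictors and 91 denominator degrees of freedom (N = 96)
print(round(r2_change_f(0.101, 0.057, 3, 91), 2))  # 1.48, matching F(3,91) = 1.48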









Table 4-3. EAP 0300: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP0300
1) Composite LOEP Test Scores LOEPAVG .016 .007 .238*
2) Composite LOEP Test Scores LPAVGWE .018 .017 .107
Averaged with Essay
3) Individual LOEP Test Scores LOLU .003 .005 .053
LORC .014 .005 .309**
LOSM -.002 .006 -.035
LOES -.000 .010 -.012
Model 1 R2 = .057; F(1,94) = 5.65, p = .020*
Model 2 R2 = .011; F(1,94) = 1.09, p = .299
Model 3 R2 = .101; F(4,91) = 2.55, p = .045*
*p <.05; **p<.01

For EAP 0320 (Reading at level 3), only Model 3 significantly predicted success in the

course as measured by final course grade, F(4,90) = 3.90, p = .006. However, in Model 3, LOEP

Reading was the only variable significantly contributing to the prediction. Model performance

and beta weights for the models are presented in Table 4-4. The R-squared value for Model 3 is

.148. This indicates that 14.8% of the variance in final course grades in EAP 0320 was explained

by Model 3. According to Cohen (1988), this is a medium effect. The adjusted R-squared value

for Model 3 was .110.

Table 4-4. EAP 0320: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP0320
1) Composite LOEP Test Scores LOEPAVG .015 .010 .155
2) Composite LOEP Test Scores LPAVGWE .007 .025 .030
Averaged with Essay
3) Individual LOEP Test Scores LOLU -.004 .008 -.052
LORC .025 .007 .381**
LOSM -.009 .009 -.126
LOES -.004 .013 -.039
Model 1 R2 = .024; F(1,93) = 2.30, p = .133
Model 2 R2 = .001; F(1,93) =.085, p = .771
Model 3 R2 = .148; F(4,90) = 3.90, p = .006**
*p <.05; **p<.01









For EAP 0340 (Writing at level 3), only Model 3 significantly predicted success in the

course as measured by final course grade, F(4,69) = 2.90, p = .028. However, in Model 3 LOEP

Sentence Meaning was the only variable significantly contributing to the prediction. Model

performance and beta weights for the models are presented in Table 4-5. The R-squared value for

Model 3 is .144. This indicates that 14.4% of the variance in final course grades in EAP 0340

was explained by Model 3. According to Cohen (1988), this is a small effect.

The adjusted R-squared value for Model 3 was .094.

Table 4-5. EAP 0340: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP0340
1) Composite LOEP Test Scores LOEPAVG -.013 .011 -.142
2) Composite LOEP Test Scores LPAVGWE -.021 .029 -.088
Averaged with Essay
3) Individual LOEP Test Scores LOLU .002 .009 .028
LORC .008 .007 .132
LOSM -.028 .009 -.424**
LOES -.007 .015 -.076
Model 1 R2 =.020; F(1,72) = 1.48, p =.228
Model 2 R2 = .008; F(1,72)= .558, p = .457
Model 3 R2 = .144; F(4,69) = 2.89, p = .028*
*p <.05; **p<.01

For EAP 0360 (Grammar at level 3), none of the competing models were significantly able

to predict success as measured by final course grade: Model 1 R2 = .00; F(1,79) = .05, p = .82;

Model 2 R2 = .03; F(1,79) = 2.50, p = .12; Model 3 R2 = .11; F(4,76) = 2.34, p = .063.

For EAP 0400 (Speech at level 4), Models 1 and 3 significantly predicted success in the

course as measured by final course grade: F(1,177) = 19.91, p < .001 for Model 1 and F(4,174) =

7.37, p <.001 for Model 3. However, in Model 3, LOEP Reading was the only variable

significantly contributing to the prediction. The beta weights for the models are presented in

Table 4-6. The R-squared values for the competing models are .101 and .145 respectively for









Models 1 and 3. This indicates that 10.1% and 14.5%, respectively, of the variance in final

course grades in EAP 0400 was explained by the models. According to Cohen (1988), this is a

small effect for Model 1 and a medium effect for Model 3. The adjusted R-squared values for the

two models were .096 and .125 respectively for Models 1 and 3. An F-test was used to test if the

reduced model, Model 1, performed as well as the full model, Model 3. Model 1 did not perform

as well as Model 3. The reduced model had a significantly lower R2, F(3,174) = 2.98, p = .03.

Table 4-6. EAP 0400: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP0400
1) Composite LOEP Test Scores LOEPAVG .039 .009 .318**
2) Composite LOEP Test Scores LPAVGWE .026 .019 .107
Averaged with Essay
3) Individual LOEP Test Scores LOLU .012 .007 .146
LORC .023 .005 .326**
LOSM -.002 .008 -.025
LOES -.004 .010 -.035
Model 1 R2 = .101; F(1,177) = 19.91, p < .001**
Model 2 R2 = .011; F(1,177) = 2.03, p =.156
Model 3 R2 = .145; F(4,174) = 7.37, p <.001**
*p <.05; **p<.01

For EAP 0420 (Reading at level 4), Models 1 and 3 significantly predicted success in the

course as measured by final course grade: F(1,182) = 6.63, p = .01 for Model 1 and F(4,179) =

3.08, p = .02 for Model 3. However, in Model 3, none of the variables were shown as

significantly contributing to the prediction. The beta weights for the models are presented in

Table 4-7. The R-squared values for the competing models are .035 and .064 respectively for

Models 1 and 3. This indicates that 3.5% and 6.4%, respectively, of the variance in final course

grades in EAP 0420 was explained by the models. According to Cohen (1988), this is a small

effect for both models. The adjusted R-squared values were .030 and .043 respectively for

Models 1 and 3. An F-test was used to test if the reduced model, Model 1, performed as well as









the full model, Model 3. Because the R2-change was not significant, F(3,179) = 1.85, p = .14, it is

assumed that the reduced model performed as well as the full model.

Table 4-7. EAP 0420: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP0420
1) Composite LOEP Test Scores LOEPAVG .023 .009 .188*
2) Composite LOEP Test Scores LPAVGWE -.007 .019 -.027
Averaged with Essay
3) Individual LOEP Test Scores LOLU -.001 .007 -.015
LORC .011 .006 .154
LOSM .002 .008 .022
LOES -.017 .011 -.141
Model 1 R2 = .035; F(1,182) = 6.63, p = .011*
Model 2 R2 = .001; F(1,182) = .132, p = .716
Model 3 R2 = .064; F(4,179) = 3.08, p = .018*
*p <.05; **p<.01

For EAP 0440 (Writing at level 4), only Model 1 significantly predicted success in the

course as measured by final course grade, F(1,155) = 5.14, p = .03. Model performance and beta

weights for the models are presented in Table 4-8. The R-squared value for Model 1 is .032. This

indicates that 3.2% of the variance in final course grades in EAP 0440 was explained by Model

1. According to Cohen (1988), this is a small effect. The adjusted R-squared value for Model 1

was .026.

Table 4-8. EAP 0440: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP0440
1) Composite LOEP Test Scores LOEPAVG .022 .009 .179*
2) Composite LOEP Test Scores LPAVGWE .022 .023 .077
Averaged with Essay
3) Individual LOEP Test Scores LOLU .009 .007 .116
LORC .013 .007 .174
LOSM -.000 .008 -.005
LOES .002 .013 .013
Model 1 R2 = .032; F(1,155) = 5.14, p = .025*
Model 2 R2 = .006; F(1,155) = .931, p = .336
Model 3 R2 = .042; F(4,152) = 1.66, p = .163









*p <.05; **p<.01

For EAP 0460 (Grammar at level 4), none of the competing models were significantly able

to predict success as measured by final course grade: Model 1 R2 = .02; F(1,155) =2.67, p = .10;

Model 2 R2 = .01; F(1,155) =1.00, p = .32; Model 3 R2 = .02; F(4,152)= .86, p = .49.

For EAP 1500 (Speech at level 5), only Model 1 significantly predicted success in the

course as measured by final course grade, F(1,224) = 5.76, p = .02. Model performance and beta

weights for the models are presented in Table 4-9. The R-squared value for Model 1 is .025. This

indicates that 2.5% of the variance in final course grades in EAP 1500 was explained by Model

1. According to Cohen (1988), this is a small effect. The adjusted R-squared value for Model 1

was .021.

Table 4-9. EAP 1500: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP1500
1) Composite LOEP Test Scores LOEPAVG .021 .009 .158*
2) Composite LOEP Test Scores LPAVGWE .017 .017 .067
Averaged with Essay
3) Individual LOEP Test Scores LOLU .001 .007 .012
LORC .012 .006 .139
LOSM .007 .008 .070
LOES .000 .009 .002
Model 1 R2 = .025; F(1,224) = 5.76, p = .017*
Model 2 R2 = .004; F(1,224)= .997, p = .319
Model 3 R2 = .031; F(4,221) = 1.75, p = .139
*p <.05; **p<.01

For EAP 1520 (Reading at level 5), only Model 3 significantly predicted success in the

course as measured by final course grade, F(4,232) = 6.40, p < .001. However, in Model 3,

LOEP Language Use and Reading Comprehension were the only variables significantly

contributing to the prediction. Model performance and beta weights for the models are presented

in Table 4-10. The R-squared value for Model 3 is .099. This indicates that 9.9% of the variance









in final course grades in EAP 1520 was explained by Model 3. According to Cohen (1988), this

is a small effect. The adjusted R-squared value for Model 3 was .084.

Table 4-10. EAP 1520: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP1520
1) Composite LOEP Test Scores LOEPAVG .005 .010 .060
2) Composite LOEP Test Scores LPAVGWE -.028 .018 -.102
Averaged with Essay
3) Individual LOEP Test Scores LOLU -.018 .007 -.171*
LORC .024 .007 .235**
LOSM -.007 .008 -.052
LOES -.016 .009 -.122
Model 1 R2 =.004; F(1,235)= .835, p =.362
Model 2 R2 = .010; F(1,235) = 2.45, p = .119
Model 3 R2 = .099; F(4,232) = 6.40, p < .001**
*p <.05; **p<.01

For EAP 1540 (Writing at level 5), none of the competing models were significantly able

to predict success as measured by final course grade: Model 1 R2 = .00; F(1,192) = .06, p = .81;

Model 2 R2 = .01; F(1,192)= .94, p = .33; Model 3 R2 = .02; F(4,189)= .94, p =.44.

For EAP 1560 (Grammar at level 5), none of the competing models were significantly able

to predict success as measured by final course grade: Model 1 R2 = .01; F(1,200) = 2.34, p = .13;

Model 2 R2 = .01; F(1,200) = 2.13, p = .15; Model 3 R2 = .05; F(4,197) = 2.33, p = .06.

For EAP 1620 (Reading at level 6), Models 1 and 3 significantly predicted success in the

course as measured by final course grade: F(1,211) = 8.34, p = .004 for Model 1 and F(4,208) =

3.28, p = .01 for Model 3. However, in Model 3, only LOEP Reading Comprehension

significantly contributed to the prediction. Model performance and beta weights for the models

are presented in Table 4-11. The R-squared values for the competing models are .038 and .059

respectively for Models 1 and 3. This indicates that 3.8% and 5.9%, respectively, of the variance

in final course grades in EAP 1620 was explained by the models. According to Cohen (1988),









this is a small effect for both models. The adjusted R-squared values were .033 and .041

respectively for Models 1 and 3. An F-test was used to test if the reduced model, Model 1,

performed as well as the full model, Model 3. Because the R2-change was not significant,

F(3,208) = 1.55, p = .20, it is assumed that the reduced model performed as well as the full

model.

Table 4-11. EAP 1620: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP1620
1) Composite LOEP Test Scores LOEPAVG .045 .016 .195**
2) Composite LOEP Test Scores LPAVGWE .013 .022 .043
Averaged with Essay
3) Individual LOEP Test Scores LOLU .005 .012 .030
LORC .031 .010 .226**
LOSM .004 .012 .023
LOES -.004 .011 -.022
Model 1 R2 =.038; F(1,211)= 8.34, p = .004**
Model 2 R2 = .002; F(1,211)= .398, p = .529
Model 3 R2 = .059; F(4,208) = 3.28, p =.012*
*p <.05; **p<.01

For EAP 1640 (Writing at level 6), Models 2 and 3 significantly predicted success in the

course as measured by final course grade: F(1,177) = 7.99, p = .005 for Model 2 and F(4,174) =

4.84, p = .001 for Model 3. However, in Model 3, only LOEP Language Use, Sentence Meaning,

and Essay significantly contributed to the prediction. Model performance and beta weights for

the models are presented in Table 4-12. The beta weights for Model 3 suggest that LOEP

Language Use contributes the most to predicting success followed by Essay and Sentence

Meaning. The R-squared values for the competing models are .043 and .100 respectively for

Models 2 and 3. This indicates that 4.3% and 10%, respectively, of the variance in final course

grades in EAP 1640 was explained by the models. According to Cohen (1988), this is a small

effect for both models. The adjusted R-squared values were .043 and .079 respectively for









Models 2 and 3. An F-test was used to test if the reduced model, Model 2, performed as well as

the full model, Model 3. Model 2 did not perform as well as Model 3. The reduced model had a

significantly lower R2, F(3,174) = 3.67, p = .013.

Table 4-12. EAP 1640: Summary of simultaneous multiple regression analyses for models
predicting successful placement as measured by final course grades
Course Model Variable B SEB β
EAP1640
1) Composite LOEP Test Scores LOEPAVG .015 .016 .069
2) Composite LOEP Test Scores LPAVGWE .059 .021 .208**
Averaged with Essay
3) Individual LOEP Test Scores LOLU .042 .013 .259**
LORC .015 .010 .115
LOSM -.024 .012 -.157*
LOES .030 .011 .212**
Model 1 R2 = .005; F(1,177) = .847, p = .359
Model 2 R2 = .043; F(1,177) = 7.99, p = .005**
Model 3 R2 = .100; F(4,174) = 4.84, p = .001**
*p <.05; **p<.01

In the first study, none of the models was consistently able to predict success in all of the

EAP courses. Model 3 was able to predict success in the greatest number of courses followed by

Model 1. None of the models were able to predict successful placement in EAP grammar

courses. Model 3 was consistently able to predict success in EAP reading courses. Model 1 was

consistently able to predict success in EAP speech courses. Finally, Models 1 and 3 were

sometimes able to predict success in EAP writing courses, Model 3 being the better of the two at

predicting writing. It should be mentioned that none of the models was able to account for more

than 15% of the variance in final course grades, with the majority of them accounting for less

than 10% of the variance across courses.

Study 2

The second study used all willing EAP students taking courses at Valencia during the

Summer A and C terms of 2006. Similar to the first study, multiple regressions were conducted









to compare the abilities of the three competing models at predicting success in EAP courses as

measured by final course grades. In the second study, however, the additional outcome variable

of teacher evaluation of placement was added. Furthermore, two new variables were analyzed for

their predictive abilities: Locus of Control and Computed Generation 1.5 status.

It was hoped that this second study could replicate findings in the first and add to those

findings. Unfortunately, none of the models in the second study were found to be significant

predictors of success as measured by final course grades. Even if the models had been found to

significantly predict success, the small number of students in their first semester led to critically

low numbers in each course. Although there were originally 470 students surveyed, only 131 of

those students were in their first semester. Furthermore, because some of these students failed to

take all subtests or because information was missing from the database, only 121 students had

complete LOEP subtest scores and final course grades.

In the second study, multiple regressions were also conducted to compare the abilities of

the same three models at predicting success in EAP courses as measured by teacher evaluation

of placement. The means, standard deviations, and correlations can be found in Table 4-13. All

three models significantly predicted successful placement as measured by teacher evaluation of

placement: F(1,118) = 184.2, p <.001 for Model 1, F(1,118) = 312.4, p <.001 for Model 2, and

F(4,115) = 79.5, p <.001for Model 3. In the third model, all LOEP subtests contributed

significantly to prediction. Model performance and beta weights are presented in Table 4-14. The

R-squared values for the competing models are .610, .726, and .734 respectively for Models 1, 2,

and 3. This indicates that 61.0%, 72.6%, and 73.4%, respectively, of the variance in teacher

evaluation of placement was explained by the models. According to Cohen (1988), this is a large

effect for all models. Because the first two models were not nested, an F-test could not be









conducted; however, the R-squared values indicated that Model 2 accounted for 11.6% more of

the variance and therefore is the better model. Two F-tests were conducted to test if the reduced

models, Model 1 and 2, performed as well as the full model, Model 3. Model 1 did not perform

as well as Model 3, with a significantly lower R2, F(3,115) = 17.87, p < .001. However, it can be

assumed that Model 2 did perform as well as Model 3 because the R2-change was not significant,

F(3,115) = 1.15, p = .33. Given that there is no significant difference between Models 2 and 3,

the simpler model was selected as the preferred model.

Table 4-13. Means, standard deviations, and intercorrelations for teacher evaluation of placement
and predictor variables for first semester survey respondents
Variable              N      M      SD     Correlation with TCHRPLC
TCHRPLC              121    4.55    .965   1.0
Predictor Variables
1. LOEPAVG           121   94.25  11.38    .781**
2. LPAVGWE           121   95.29   9.06    .852**
3. LOLU              121   90.88  14.89    .693**
4. LORC              121   95.02  13.31    .581**
5. LOSM              121   97.16  12.94    .667**
6. LOES              121   95.54  13.35    .648**
7. LOCSCAL           106    8.67   3.63   -.074
8. GN15CMPT          131    .275   .448    .271**
**Correlation is significant at the 0.01 level (2-tailed). *Correlation is significant at the 0.05 level (2-tailed).

Using the LPAVGWE as the preferred model, the additional variables (LOCSCAL,

GN15CMPT) were each tested individually to see if they improved prediction. Neither variable

significantly improved the prediction.
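
A minimal sketch of this hierarchical step in Python (statsmodels) follows; the column names

mirror the variable labels in Table 4-13, the data frame itself is hypothetical, and compare_f_test

performs the same nested-model F-test used elsewhere in this chapter.

import statsmodels.formula.api as smf

def test_added_variable(df, extra):
    """Test whether one added predictor (e.g., 'LOCSCAL' or 'GN15CMPT') improves
    the preferred model for predicting teacher evaluation of placement."""
    base = smf.ols("TCHRPLC ~ LPAVGWE", data=df).fit()
    augmented = smf.ols(f"TCHRPLC ~ LPAVGWE + {extra}", data=df).fit()
    f_stat, p_value, df_diff = augmented.compare_f_test(base)  # nested-model F-test
    return augmented.rsquared - base.rsquared, f_stat, p_value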


Table 4-14. Summary of simultaneous multiple regression analyses for models predicting
successful placement as measured by teacher evaluation of placement
Model                               Variable   B      SEB    β
1) Composite LOEP Test Scores       LOEPAVG    .066   .005   .781**
2) Composite LOEP Test Scores       LPAVGWE    .091   .005   .852**
   Averaged with Essay
3) Individual LOEP Test Scores      LOLU       .016   .004   .251**
                                    LORC       .018   .004   .247**
                                    LOSM       .018   .005   .244**
                                    LOES       .039   .005   .392**
Model 1 R2 = .610; F(1,118) = 184.20, p <.001**
Model 2 R2 = .726; F(1,118) = 312.43, p <.001**
Model 3 R2 = .734; F(4,115) = 79.49, p <.001**
*p <.05; **p<.01










CHAPTER 5
DISCUSSION

In addition to identifying descriptive information about Valencia's EAP population, this

research sought to identify the most effective practices for placing students into EAP courses at

Valencia by finding answers to the following questions:

1. What are the student and teacher beliefs about placement at Valencia?

2. Which of the following three approaches best predicts student success as measured by final
course grades and teacher evaluation of placement: 1) Averaging the three objective LOEP
subtests, 2) Using an equally weighted average of both the objectively and subjectively
scored LOEP subtests, or 3) Using the four LOEP subtests as individual predictors?

3. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
final course grades?

4. Which of all approaches best predicts success across different language skill courses
(reading, writing, grammar, and speech) and language proficiency levels as measured by
teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of
placement in EAP courses as measured by final course grades and teacher evaluation of
placement?

Valencia's EAP Population

With the exceptions of country of origin and native language, survey responses revealed

that Valencia's EAP population, for the most part, is similar across the three campuses.

However, survey results did reveal that the West Campus had considerably more Haitian

students than the two other campuses: 28.85% of the West Campus EAP student population was

Haitian as opposed to 4.63% on the East Campus and 4.92% on the Osceola Campus. These

demographic differences, however, did not appear to affect placement. In the surveys, teachers

across campuses rated misplacement in a similar manner, and none of the teacher comments led

the researcher to believe the incidence of misplacement of students was greater on one campus

than another.









Valencia's prototypical EAP student is a 19-year-old Spanish speaker from Colombia. She

is in her second semester at Valencia and started school in the U.S. in or around 10th grade, but

is most likely not a Generation 1.5 student. She has been in the U.S. about three years and is not

the first person in her family to go to college; however, she is more than likely the first person in

her family to go to college in the U.S. and has probably not attended college outside the U.S. She

rates her abilities to write papers and do research in English as average. Only a few of the people

she does things with every week are native English speakers. However, she does use English

most of the time to speak with her friends. On the other hand, her family does not often use

English in the home, and in that home there are fewer than 25 books. She does, however, own a

computer and rates her ability to use it as above average.

Student and Teacher Beliefs about Placement

This section of the chapter discusses findings relevant to the first research question, "What

are the student and teacher beliefs about placement at Valencia?" When students were asked

their beliefs on placement, 21% of the 470 students surveyed felt that they had been misplaced.

One might expect that a larger percentage of students who had been placed into a developmental

program rather than regular college courses would feel that they had been misplaced. In the surveys,

teachers indicated that 17% of their students were misplaced. It is interesting to note that student

beliefs about the incidence of misplacement are similar to those of instructors, 21% and 17%

respectively.

Prior to the study, anecdotal evidence (emails, teacher complaints at meetings, and

discussions about placement with colleagues) indicated teacher dissatisfaction with the way

students were being placed. In response to survey questions on placement, faculty members felt

that Valencia did an average job at placing students; 14 of 19 survey respondents (74%) selected

the term "Average" to describe Valencia's accuracy at placing students. 74% indicated that they

"Sometimes" feel that students in their EAP classes might be better placed in a different level.

The responses to this particular question about how often misplacement occurs were interesting

in that not one of the instructors felt that students were "Rarely" or "Never" misplaced,

indicating that instructors do feel that misplacement is an ongoing phenomenon. Of those

surveyed, 68% felt that a few of the students in their courses during the semester in which

surveys were conducted should have been placed in a different level. And while 10% indicated

that "None" of their students would have benefited from different placement, 21% indicated

more than a few students should have been placed differently.

In terms of teacher responses to the open-ended questions, three of the eight major

categories were related to placement: one specifically dealing with misplacement, one on

placement in general, and one on other placement comments. To summarize teacher comments

about misplacement, many of the comments indicated differing opinions about students being

placed above and below their ability levels: "I believe that students who are misplaced are more

frequently under-prepared than over-prepared," and "Sometimes we get students that just seem

way beyond the level of EAP." In the category of Teacher Comments on Placement, responses

were generally positive regarding placement at Valencia. For example, one instructor wrote "I

believe that most students are well placed within the EAP program." In the category Other

Placement Comments there was some indication that teachers felt students placed in writing

courses lacked sufficient grammar skills. A typical response was that "some of my level 6

students who have placed directly into level 6 lack the grammar and sentence structure skills

necessary to be successful in Advanced Comp." This was supported by instructors who

commented on both weak skills and a total lack of skills. Also, in the same category of Other

Placement Comments, another instructor wrote "I never had grammar students misplaced."

Further research would need to be conducted to verify this, but perhaps there is a mismatch

between the specific discrete point grammar topics taught in EAP grammar courses and what is

measured by the LOLU and LOSM subtests.

Other teacher comments that should be mentioned here dealt with teaching-related

problems but indirectly reflect issues with placement. For example, comments like "It is

uncomfortable, and sometimes embarrassing, to have completely native sounding students in

classes. I don't feel I am necessarily meeting their needs," reveal a wide variety of language

proficiencies in courses. A comment such as,

The problem I have with writing is high school grads know essay organization but have
problems with grammar and mechanics. On the other hand, other students aren't familiar
with any organization, mechanics, or sentence structure

further demonstrates the variety of skills and background knowledge that students have when

they enter courses at Valencia. The variety of language proficiencies and differences in skills and

background knowledge demonstrated by students could simply be the result of placing students

into skills classes at a single level rather than into skills classes across levels. It could also

indicate a mismatch between the LOEP subtests and the school curriculum.

In summary, the majority of faculty members surveyed felt that only a few of their

students were misplaced during the study period. Although there were indications that placement

problems existed, the findings failed to reflect the anecdotal reports in terms of the incidence of

misplacement. It is possible that with the lighter course load over the summer (with fewer

courses, fewer students in courses, shorter work weeks) teachers did not feel or vent frustrations

about misplaced students in the same manner as they did during the fall or

spring semesters.









The Preferred Approach

This section discusses findings relevant to the second research question, "Which of the

following three approaches best predicts student success as measured by final course grades and

teacher evaluation of placement: 1) Averaging the three objective LOEP subtests, 2) Using an

equally weighted average of both the objectively and subjectively scored LOEP subtests, or 3)

Using the four LOEP subtests as individual predictors?"
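
To make the three competing specifications concrete, the sketch below fits them with ordinary

least squares on synthetic data. The lowercase column names and the fit_placement_models helper

are hypothetical illustrations, not the study's actual code or variable names.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def fit_placement_models(df: pd.DataFrame, outcome: str = "outcome"):
        """Fit the three competing placement specifications with OLS."""
        y = df[outcome]

        # Model 1: simple average of the three objectively scored subtests.
        m1_x = sm.add_constant(df[["lolu", "lorc", "losm"]].mean(axis=1).rename("avg3"))
        model1 = sm.OLS(y, m1_x).fit()

        # Model 2: equally weighted average that also includes the Essay score.
        m2_x = sm.add_constant(df[["lolu", "lorc", "losm", "loes"]].mean(axis=1).rename("avg4"))
        model2 = sm.OLS(y, m2_x).fit()

        # Model 3: the four subtests entered as individual predictors.
        m3_x = sm.add_constant(df[["lolu", "lorc", "losm", "loes"]])
        model3 = sm.OLS(y, m3_x).fit()
        return model1, model2, model3

    # Demo on synthetic data (the actual study used Valencia's records).
    rng = np.random.default_rng(0)
    demo = pd.DataFrame(rng.normal(95, 12, size=(200, 4)),
                        columns=["lolu", "lorc", "losm", "loes"])
    demo["outcome"] = 0.05 * demo.mean(axis=1) + rng.normal(0, 0.5, 200)

    m1, m2, m3 = fit_placement_models(demo)
    print([round(m.rsquared, 3) for m in (m1, m2, m3)])
    # Models 1 and 2 are linear restrictions of Model 3, so an R-squared-change
    # F-test can compare each reduced model with the full one.
    print(m3.compare_f_test(m2))  # returns (F statistic, p value, df difference)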

To help improve the successful placement of EAP students into the program, research

Question 2 sought to find which of three approaches best predicted student success as measured

by final course grades and by teacher evaluation of placement. Model 1 studied the variable of

the simple average of the three objectively scored LOEP subtests. In the first study using the data

including 1,030 students, Model 1 was able to significantly predict success as measured by final

course grades in six of the 15 courses analyzed. The first model significantly predicted success in

all EAP Speech courses, levels 3, 4, and 5. Speech courses are not offered at levels 2 or 6. The

first model also predicted success in Reading at levels 4 and 6, and Writing at level 4. In three of

those courses (level 3 Speech, level 4 Reading, and level 6 Reading) Model 1 was selected for its

simplicity, but this model could never account for more than 10% of the variance in final course

grades. Model 1 failed to predict success in any of the grammar courses.

As mentioned earlier, perhaps there is a mismatch between content assessed on the

LOLU/LOSM subtests and the content taught in EAP grammar courses. Another explanation

could be that grammar is not a skill unto itself; perhaps it is a subcomponent of other skills.

A final reason for the inability of these subtests to predict success in Grammar courses may have

something to do with the different populations who do well on grammar tests. Some students'

study of English prior to arrival in the U.S. has focused extensively on memorization of

vocabulary and grammatical rules. These students tend to do well on grammar tests, but do not

write or speak well in English. Other students may have had greater access to English-speaking

models with a greater focus on communicative tasks and less focus on grammar and vocabulary.

An example would be Generation 1.5 students who speak with near-native proficiency but lack

grammar knowledge. The differences in these populations could be affecting the ability of the

LOEP to place students appropriately.

In the second study using the data from 470 surveyed students, Model 1 was found to

significantly predict successful placement as measured by teacher evaluation, but it was

outperformed by Models 2 and 3. This is reasonable because this model did not include the

LOEP Essay subtest, the subtest that was found to be most predictive of teacher evaluation of

placement.

Model 2, which used an equally weighted average of both the objectively and subjectively

scored LOEP subtests, was found to be a poor predictor in the first study. This model was able to

significantly predict success in only one of the 15 EAP courses (level 6 writing), and it was able to

account for only 4% of the variance in that regression. However, in the second study Model 2

performed as well as Model 3, which considered the four subtest variables individually.

Model 3, which entered the four LOEP subtests as individual predictors, performed

moderately well in both studies. Although it accounted for only 15% of the variance in the first

study using the population of 1,030 students, it significantly predicted success in eight of the 15

EAP courses as measured by final course grade. The eight courses included Speech at levels 3

and 4, Reading at levels 3, 4, 5, and 6, and Writing at levels 4 and 6. In the second study, Model

3 accounted for the greatest amount of variance using teacher evaluation of placement. All of

the LOEP subtests in Model 3 were found to be significant predictors of success as measured by

teacher evaluation of placement. The Essay performed the best, followed by Reading, Sentence

Meaning, and Language Use, respectively. The performance of the Essay variable was

interesting but not surprising; in survey responses teachers welcomed the reinstatement of the

Essay variable. One instructor even noted, "...students seem to be better placed since we began

reading LOEP essays." Because the Essay is the only direct measure of language proficiency, it

is not surprising that it is able to predict teacher evaluations of placement.

A partial explanation for the ineffectiveness of the Essay variable in the first study could

be that single-prompt essays are highly unstable as variables. This instability could have led to

low reliability and thus negatively affected the correlation. Future research could address the

comparative effectiveness of holistically scored single prompt essays and multi-prompt short

essay measures. If holistically scored multi-prompt short essay measures are found to increase

the reliability of written assessment, their use could enhance placement practices.

In the first study, none of the variables in Model 3 showed high correlations with final

course grades, the highest correlation being .36 for LOEP Reading with EAP 0320 (Level 3

Reading). These low correlations may be explained by low reliability of course grades,

restriction of range of test scores, or both. Because of the different ways that teachers evaluate

students and the variety of personal, social, economic, and academic factors involved in

assigning student grades, final course grades may not be a reliable indicator of successful

placement. It has been suggested that low reliability of course grades can depress correlations

(Sawyer, 1989). For example, if the reliability of a test score is high, e.g., .90, and the reliability

of course grades is low, e.g., .40, the maximum correlation between the measures is .60. Another

possible explanation is restriction of range (American College Testing Program, 1990;

Armstrong, 1994). In other words, because EAP courses include only students above

the test cut-off scores and below other benchmark scores, predictor scores do not include the

students at the lower or upper ends of the range of scores. A variety of studies use one or both of

these explanations to explain low correlations (American College Testing Program, 1990;

Armstrong, 1994; College Board, 1990; College of the Canyons, 1994; Feldt, 1989; Kesler,

1987). Another likely explanation for the failure of the models to show a large effect is that

because the LOEP test scores had actually been used to place students into classes, most of the

variance attributed to the placement measures had already been explained.
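
The bound in the example above follows from the classical correction for attenuation, under

which the observed correlation between two measures cannot exceed the square root of the

product of their reliabilities:

\[
r_{\max} = \sqrt{r_{xx} \cdot r_{yy}} = \sqrt{(.90)(.40)} = .60
\]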

Reading Subtest Preferred

This section of the chapter discusses findings relevant to the third research question,

"Which of all approaches best predicts success across different language skill courses (reading,

writing, grammar, and speech) and language proficiency levels as measured by final course

grades?"

In response to Question 3, the LOEP Reading subtest was the best at predicting success

across different skills and levels as measured by final course grades. In the first study, LOEP

Reading contributed significantly to Model 3 in Reading courses at levels 3, 5 and 6 and in

Speech courses at levels 3 and 4.

In a post hoc analysis, the LOEP Reading subtest was predictive of all skills: reading,

writing, grammar, and speech. The test was also able to predict success in at least one course at

each level. The post hoc analysis tested each of the LOEP subtests in isolation for ability to

predict success as measured by final course grades using the 1,030 students in the first study.

Table 5-1 details the results comparing the seven variables: the three original models and the four

LOEP subtests as individual, single-variable models. In the event that one of the LOEP subtests

and Model 3 both predicted success, an F-test was conducted to compare the models. In the event

that two or more subtest variables predicted success, whichever model accounted for the greatest

amount of variance was selected as the preferred model. As with the results of study 1, none of

the new variables was able to perform well across all skills and levels. However, LOEP Reading

(LORC) was the best predictor of success. It was predictive in all Speech courses (levels 3, 4,

and 5) and all Reading courses (levels 3, 4, 5, and 6), and also in level 4 Writing and

level 6 Grammar.

Table 5-1. Post-hoc results comparing the 3 original models with each subtest added as a
competing model.
Level Skill Course Model 1 Model 2 Model 3 LOLU LORC LOSM LOES
2 Combined EAP0281
3 Speech EAP0300 X X X*
3 Reading EAP0320 X X*
3 Writing EAP0340 X X*
3 Grammar EAP0360
4 Speech EAP0400 X X X X* X
4 Reading EAP0420 X X X* X
4 Writing EAP0440 X* X
4 Grammar EAP0460
5 Speech EAP1500 X X*
5 Reading EAP1520 X X* X
5 Writing EAP1540
5 Grammar EAP1560 X X*
6 Reading EAP1620 X X X*
6 Writing EAP1640 X X X X*
Note: X indicates significant results on F-test; * indicates best correlation.


In terms of a theory of assessing language competence, this study does not appear to lend

any evidence to support Oller's (1992) suggestion that in the early stages of second language

learning, distinct dimensions of listening, writing, and reading ability may resolve into further

sub-component traits. The reading, writing, and grammar subtests in this study were not able to

predict success more efficiently for students at the lower proficiency levels than at the higher

proficiency levels. In fact, all of these tests failed to consistently predict success in courses

across levels and skills. However, it could be that the range of EAP student language proficiency

levels is narrower than what Oller was considering in his description of the differences among

students in the early and later stages of second language acquisition. It could also be that the

LOEP subtests are not accurate enough to detect small variations in these proficiency levels.

A number of recent studies have examined the relationship between reading and placement.

Although reading placement tests have shown negligible or modest correlations with grades in

credit level college courses (American College Testing Program, 1990; Armstrong, 1994;

College of the Canyons, 1994; Feldt, 1989; Kesler, 1987), it has been suggested that the reason

for this weak relationship may be a result of the fact that these tests are grounded in a domain-

generic model of comprehension that assumes "a good reader is a good reader," no matter the

content (Behrman, 2005). Results of other studies have found evidence that domain-specific

factors are important (Alexander & Judy, 1988; Byrnes, 1995), and placement tests using

domain-specific readings have demonstrated greater efficiency than domain-generic reading tests

at predicting student success (Behrman, 2005). The current research reveals that the domain-

generic reading comprehension subtest is the most effective of all the LOEP subtests analyzed at

predicting success as measured by final course grades, but how might domain-specific testing

measures fare with EAP students? Future research may address whether or not EAP students are

equitably assessed and placed by these measures given that language proficiency or lack of

background knowledge could lead to lack of test reliability. For example, a domain-specific test

using passages selected from literature courses may perform better than a domain-generic test at

predicting success in composition and literature courses for native English speakers, but how

would different background knowledge and different cultural perspectives bias an EAP student's

results on the domain-specific test?

From a theoretical perspective of background knowledge in reading, it appears that first

language literacy and grammatical knowledge account for approximately 50% of the variance in

second language performance (Bernhardt, 2005). Future research analyzing affect, interest in

second language text, and alternative conceptions of literacy may add to the amount of variance

already accounted for in second language performance. Would EAP students employ strategies

of cognate knowledge in a domain-specific test of science passages, or would they be negatively

affected by a large amount of unknown vocabulary?

In terms of practical placement, future investigations of reading as a predictor may want to

include different reading measures. For example, it may be interesting to analyze other measures

of reading that are gathered and stored in community college databases. In Florida, all students

are required to take the CPT. How well do EAP student scores on the CPT Reading and Sentence

Skills subtests correlate with success in EAP courses? This question was asked by James (2006)

in her predictive validity study of the Accuplacer subtests, but to date no related research has

been reported.

Predicting Evaluation of Placement

This section of the chapter discusses findings relevant to the fourth research question,

"Which of all approaches best predicts success across different language skill courses (reading,

writing, grammar, and speech) and language proficiency levels as measured by teacher

evaluation of placement?" Unfortunately, in the second study, in which teacher evaluation of

placement data were gathered as a variable, the low numbers of students in their first semester

led to critically low numbers in each course. Although there were originally 470 students

surveyed, only 131 of those students were in their first semester. Furthermore, because some of

these students failed to take all of the subtests or because information was missing from the

database, only 121 students had complete LOEP subtest scores and final course grade data. Even

the least conservative recommendation for case-to-predictor ratios in regression analyses

suggested a minimum case-to-predictor ratio of 15-to-1 (Schmidt, 1971). Therefore, regression

analyses across skills and levels using teacher evaluation of placement as an outcome variable

were not suggested. Similar to the post hoc analyses conducted in response to Question 3, post

hoc analyses were conducted using the individual LOEP subtests in isolation as predictors and

teacher evaluation of placement as the outcome variable. However, none of the individual LOEP

subtests in isolation was able to perform as well as LOEP Average with essay (Model 2).
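
The arithmetic behind this decision is direct. With the k = 4 LOEP subtests as predictors, the

15-to-1 guideline requires

\[
n_{\min} = 15k = 15 \times 4 = 60 \text{ cases per regression},
\]

whereas the 121 usable first-semester cases, spread across the 15 courses analyzed, average

roughly eight students per course, far below that threshold.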

LOC and Generation 1.5

This section of the chapter discusses findings relevant to the fifth research question, "Do

the student variables of Locus of Control and Generation 1.5 add to the prediction of placement

in EAP courses as measured by final course grades and teacher evaluation of placement?"

In study 2, these two variables were not found to add to the prediction. However, to

determine whether these variables would have had better predictive power with the full set of

survey respondents rather than the limited subset having LOEP scores, both were tested in

isolation. When tested for their abilities to predict success as measured by teacher evaluation of

placement, both were found to be significant predictors: LOCSCAL R2 = .020; F(1,338) = 6.98,

p = .009 and GN15CMPT R2 = .056; F(1,412) = 24.58, p <.001. Generation 1.5 accounted for

the greatest amount of variance in teacher evaluation of placement. Table 5-2 lists model

performance and beta weights for the models.

Table 5-2. Summary of simultaneous multiple regression analyses for LOCSCAL &
GN15CMPT at predicting successful placement as measured by teacher evaluation of
placement
Model                                Variable   B      SEB    β
1) Scores on the Trice LOC Scale     LOCSCAL    -.047  .018   -.142**
2) Computed Generation 1.5 Status    GN15CMPT   .588   .119   .237**
Model 1 R2 = .020; F(1,338) = 6.98, p = .009**
Model 2 R2 = .056; F(1,412) = 24.58, p <.001**
*p <.05; **p<.01
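
Because each model in Table 5-2 contains a single predictor, the standardized beta equals the

zero-order correlation with the outcome, and R2 is simply its square:

\[
R^2_{\text{GN15CMPT}} = (.237)^2 \approx .056, \qquad |\beta_{\text{LOCSCAL}}| = \sqrt{.020} \approx .142,
\]

consistent with the values reported in the table.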









The Trice Locus of Control scale (Trice, 1985) used in this study was developed and

validated on native English speaking college students with similar gender and age characteristics

as the EAP students in this study. In Trice's original study (1985), the mean scores for education

and psychology students were 12.46 and 13.22, respectively; the mean score for EAP students

surveyed in this study was 8.83. However, of the 470 students surveyed, 140 students did not

complete the Trice LOC index and were therefore not included in the data analysis for the study.

To examine the possible role of English proficiency in the students' ability to complete the

Trice LOC index, a post hoc analysis was conducted. The researcher calculated percentages of

students within each course who failed to answer questions on the LOC inventory. Table 5-3

displays these percentages.

Table 5-3. Percentages of students in each course failing to answer questions on the Trice LOC
Index.
Level Skill Course N Percentages
3 Speech EAP0300 34 8.82
3 Reading EAP0320 35 8.57
3 Writing EAP0340 27 7.41
3 Grammar EAP0360 29 6.90
4 Speech EAP0400 62 8.06
4 Reading EAP0420 61 4.92
4 Writing EAP0440 54 12.96
4 Grammar EAP0460 62 8.06
5 Speech EAP1500 105 14.29
5 Reading EAP1520 109 15.60
5 Writing EAP1540 104 18.27
5 Grammar EAP1560 109 15.60
6 Reading EAP1620 141 12.06
6 Writing EAP1640 124 16.94
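
The percentages in Table 5-3 are simple non-response rates, and a computation of this kind is

straightforward to reproduce. The minimal sketch below assumes a hypothetical survey table

with one row per student, a course column, and the LOC item columns; none of these are the

study's actual field names.

    import pandas as pd

    def loc_nonresponse_by_course(survey: pd.DataFrame, loc_items: list) -> pd.Series:
        # Flag each student who left at least one LOC item unanswered.
        skipped_any = survey[loc_items].isna().any(axis=1)
        # Percentage of flagged students within each course.
        return skipped_any.groupby(survey["course"]).mean().mul(100).round(2)

    # Example: two hypothetical LOC items across two courses.
    survey = pd.DataFrame({
        "course": ["EAP0300", "EAP0300", "EAP1520", "EAP1520"],
        "loc_27": [1, None, 1, 0],
        "loc_28": [0, 1, None, 1],
    })
    print(loc_nonresponse_by_course(survey, ["loc_27", "loc_28"]))
    # EAP0300    50.0
    # EAP1520    50.0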


Contrary to what might be expected, lower proficiency students were not the only students

failing to answer questions on the LOC inventory. In fact, as proficiency level increased, so did

the percentages of students not answering questions on the LOC measure. Whether or not

students at the lower levels understood the questions is unknown. Perhaps higher proficiency

students perceived elements of ambiguity in the items and were unable to respond. Future

research could investigate the role of language and culture in EAP student responses to LOC

questions. For example, perhaps some of the questions were perceived as too personal in nature,

or perhaps students from different cultural groups that tend to prefer consensus building rather

than individual decision making felt uncomfortable with some of the questions and opted not to

answer them. If linguistic and cultural variables are found to influence results, perhaps a new

LOC scale could be developed and validated for ESL/EAP students at the college level.

Valencia's Generation 1.5 population was found to be relatively small in comparison to

Generation 1.5 populations discussed in other research (Blumenthal, 2002; Lay, Carro, Tien,

Niemann, & Leong, 1999). However, because the relative size of Generation 1.5 populations at

other institutions within the state is unknown, it is difficult to say if Valencia is representative of

other schools. Another finding that deserves comment is that teachers rated 71 out of 470

students (approximately 15%) as Generation 1.5. The computed indicator rated 146 out of 470

students (approximately 31%) as Generation 1.5. What accounts for this discrepancy? Does the

computed indicator include more error, or did the objective nature of the indicator identify students

the teachers missed? For example, it is possible the computed indicator identified students as

Generation 1.5, but due to affective, cultural, or personal reasons, these students have not

acquired high levels of spoken proficiency or adapted to American culture, which could have

caused teachers not to rate them as Generation 1.5. On the other hand, perhaps because of

affective, cultural, or personal reasons, some students failed to participate in class, thereby

concealing their language proficiency levels and their Generation 1.5 status.

In the open-ended responses to teacher surveys, Generation 1.5 was one of the major

categories. This indicates that even though the actual size of the population is small, teachers are

concerned about this population. Teachers expressed thoughts about proper placement of these

students and how these students are affecting their classes. Teachers made comments such as,

"Once again, the problem arises with 1.5s; all other students are placed right." A few indicated

how Generation 1.5 students were bored in their courses. Two instructors suggested the

possibility of developing specific courses for these students. Teacher comments in the Speech

subsection could also be considered indirect comments about Generation 1.5 students. One

teacher summed things up nicely by saying "Speech is where the battles begin!" Many of the

comments made about students in Speech courses indicated that many Generation 1.5 students

didn't need the course at all or had spoken proficiency higher than other students in the course.









CHAPTER 6
CONCLUSIONS AND RECOMMENDATIONS

This study assumes that effective placement for developmental education programs can

increase both student success and retention, but only if placement measures are valid and can

therefore accurately predict student success in courses. While there are a variety of studies

analyzing the validity of Accuplacer tests at predicting success for native English speakers

(Hirschy & Mack, 2001; James, 2006; Saunders, 2000; and Smittle, 1993), there is a relative

paucity of research looking at how well these tests function for ESL students. This study

contributes to the growing body of research on placement and computer adaptive testing by

investigating the predictive characteristics of the Accuplacer LOEP subtests on student

performance in EAP classes at the community college level. It further informs research on

Generation 1.5 students by developing and applying a survey measure for identification of this

population. This research also raises questions for future research that have theoretical

underpinnings in language competence, learner motivation, and the importance of background

knowledge. For example, how do level of language proficiency and background knowledge

interact and affect the validity of placement tests? What percentage of the variance in final

course grades is accounted for by student motivation? And to what extent does culture influence

assessment of proficiency and placement?

On a more practical note, this study has provided information for the researcher and

decision makers at Valencia Community College on the effectiveness of a variety of current and

potential placement practices. Based on the results of this study, Valencia and other similar

institutions may wish to consider a number of recommendations. For example, if students could

be identified as Generation 1.5 early in the placement process, counselors could intervene and

help to place them more effectively into Speech courses. A discussion of other placement

recommendations follows.

Recommendations for Testing and Placement

This study indicated that students are best placed into courses using individual subtests

rather than composite scores or aggregates of subtests. This research compared three models, two

using averaged subtest scores and one using individual subtest scores. In both studies, the model

using individual subtests was the best predictor of success as measured by both final course

grades and teacher evaluation of placement. Therefore, when institutions identify a subtest as

predictive of a particular skill, schools should not weaken its predictive capabilities by averaging

it with other tests that are not predictive of the skill. Schools should simply use the individual

skill variable for placement into same-skill courses. For example, the Reading subtest should be

used to place students into reading courses.

The Reading subtest was found to be the most efficient predictor of success as measured

by final course grades; this finding was true across language skills and levels. If Valencia re-

evaluates its placement practices and decides not to use the Reading subtest in isolation to place

students into Reading courses, any new composite models that are developed could benefit from

giving the Reading subtest added weight.
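
One illustrative form such a weighted composite could take is a weighted mean in which the

Reading score receives a weight w greater than the other subtests; w here is a free parameter to

be set in a future validation study, not a value estimated in this research:

\[
\text{Composite} = \frac{w \cdot \text{LORC} + \text{LOLU} + \text{LOSM} + \text{LOES}}{w + 3}, \qquad w > 1.
\]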

The Essay was found to be the most efficient predictor of success as measured by teacher

evaluation of placement. Therefore, the writing sample should continue to be used in placement

practices. Other experts in the field agree that including an essay provides important information

in placing students into EAP courses. In fact, the CCCC Committee on Second Language

Writing stated:

Decisions regarding the placement of second language writers into writing courses should

be based on students' writing proficiency rather than their race, native language background,
nationality, or immigration status. Nor should the decisions be based solely on the scores from

standardized tests of general language proficiency or of spoken language proficiency. Instead,

scores from the direct assessment of students' writing proficiency should be used, and multiple

writing samples should be consulted whenever possible. (2001)

If the use of a writing sample is found to be too costly or time-consuming, perhaps the

Essay could be administered but assessed only when accuracy of placement is of the utmost

importance. For example, at Valencia the decision to place students into level 5 is of greater

importance than the decision to place students into levels 3 or 4 because at level 5 students earn

college credit for the courses they are taking. Therefore, essays could be read only prior to

admitting students into level 5.

Given the inability of any of the subtests to predict success in EAP grammar courses,

Valencia's program could benefit from an analysis of the curricular goals of its grammar courses.

Perhaps there is a mismatch between the LOEP subtests and curricular goals. If this is the case,

perhaps an in-house subtest could lead to more predictable placement into grammar courses.

Other Recommendations

Although not a direct finding of this study, the review of research for this study led the

researcher to findings that are fundamental in their application to placement practices. Because

student populations and curricula vary from school to school, each institution should identify

which placement tests or combinations of placement tests provide the greatest accuracy for their

curricula and establish cut scores and placement mechanisms accordingly. Furthermore, students

are not negatively affected by being placed across levels based on test results. In other words, it

is not only acceptable, but desirable, to place students into different skills based on their

demonstrated proficiencies in those skills. Students do not need to be placed into all skills at one

level. This concept is supported by the College Board (2003).









In addition, students need to be aware of the importance of placement tests; therefore, they

should be advised accordingly and given explicit information about the costs both in time and

money that they could incur as a result of poor performance on the placement tests. Also, to

ensure optimal test performance, limits should be placed on the number of tests a student can

take in one day, or scheduled breaks should be added to the test-taking timeline.

Finally, because cut-scores have never been normed for the LOEP subtests at Valencia, a

final recommendation would be to conduct a cut-score study. If Valencia were to undertake such

a study, a subcomponent of the study should add the existing but currently unused LOEP

Listening subtest and perhaps the LOEP Write Placer ESL subtest to identify their predictive

capabilities. Developing cut-scores for and using these other tests (like the LOEP Listening,

Write Placer ESL and the CPT) in addition to the current LOEP subtests could lead to enhanced

placement of EAP students at Valencia.














APPENDIX A
LOEP ESSAY RUBRIC

Level 7: Native-like control of language. Second language errors may be present but do not
interfere with communication. Organization facilitates a clear and well supported message.

Level 6: Although some second language errors are evident, they rarely interfere with
communication. In addition, the reader is not troubled by occasional errors of vocabulary,
spelling, punctuation, and/or grammar. Organization and details, although simplistic, enable the
reader to follow the message.

Level 5: Second language errors, from a variety of linguistic categories, are evident and
sometimes interfere with communication. The reader is sometimes troubled by errors of
vocabulary, spelling, punctuation, and/or grammar. Organization has been attempted, but may be
unsuccessful.

Level 4: Second language errors frequently hinder communication. Errors of vocabulary,
spelling, punctuation, and/or grammar often trouble the reader. Organization may or may not be
present.

Level 3: Widespread second language errors in virtually every category. While some evidence of
basic grammatical structures is present, errors of vocabulary, spelling, punctuation, and/or
grammar consistently trouble the reader.

Level 2: The reader sees rudimentary control of vocabulary, spelling, punctuation, and/or
grammar. However, the paper has no unified meaning, and errors constantly interfere with the
reader's ability to comprehend.

Level 0: The reader sees no evidence of control of vocabulary, spelling, punctuation, and
grammar; hence, there is no sense of linguistic competence. Words appear on the page but do not
communicate meaning.















APPENDIX B
SURVEY INSTRUMENTS USED IN THE STUDY

Letter to Instructors

Dear EAP Instructor,

I am a fellow EAP faculty member here at Valencia Community College. During the last ten
years there have been significant changes made to the English as a second language program
offered here at Valencia, including changes to the number and types of courses offered, how
students are placed into these courses, and how Valencia as an institution monitors the
effectiveness of this placement.

I am conducting research on the effectiveness of the placement process, and as an instructor here
at Valencia you are in a position to provide valuable insight into how the system is working.
Along with this informed consent form, you have been given a survey. The data gathered from
this survey will provide me, the researcher, with the information necessary to evaluate current
practices and make recommendations to enhance practices in the future. You do not have to
answer any questions that you do not wish to answer on this survey. It is important that you know
that your identity will be kept confidential to the extent provided by law. The survey you fill out will
be stored in a locked file cabinet in my office and will be destroyed after the relevant data have
been recorded.

There is no anticipated risk, compensation, or other direct benefit to you as a participant in this
research. You do not have to answer any questions you do not wish to answer or allow me access
to your records. You are free to withdraw your consent to participate and may discontinue your
participation in the research at any time without consequence. If you have any questions about
this research, please contact James May by e-mail jmay@valencia.cc.fl.us or phone 407-582-
2047, or my supervisor, Dr. Candace Harper at charper@coe.ufl.edu or (352) 392-9191. If you
have any questions or concerns about your rights as a research participant, you can also contact
the UFIRB office, University of Florida, Box 112250, Gainesville, FL 32611; ph (352) 392-0433.

Please sign the Informed Consent form before taking the survey. By signing this form you give
me permission to gather your scores and grades from your records and report your responses
anonymously in my research. Thank you for your help.


Sincerely,



James May, Valencia Community College










Informed Consent: Instructor


(Please sign & return with survey)

I have read the letter explaining the purpose and procedures involved in this EAP placement
study. I voluntarily agree to participate in the study and to have my anonymous responses, scores,
and grades included in the research report.



Signature of Research Participant Date










Instructor Survey

Instructor's Name    Course Information    Campus Information
Date Semester

Instructor Survey Section I

Instructions: Please answer the following questions.

1) Circle the term that best describes Valencia's accuracy in general at placing second language
students into the EAP courses they require.

Poor Below Average Average Above Average Excellent

2) How often do you have students in your EAP classes that you feel might have been better
placed in a different level?

Never Rarely Sometimes Often Every Semester

3) This semester, how many of your students do you feel should have been placed in a different
level?

None A few Several Most All

4) Please provide any comments that qualify or explain your responses above.










Instructor Survey Section II


Instructions: Give your opinion of the current placement for each of the students in the list below.
If you feel the student has been well placed, put a check in the well placed column next to the
student's name. If you feel the student would have been more appropriately placed in a different
level, indicate which level (for example, EAP levels 2 through 6, College Prep rather than EAP,
College Composition, etc.)

Student Name Well Placed Not Well Placed Should have
been placed in
1 Sample Student
2 Sample Student
3 Sample Student
4 Sample Student
5 Sample Student
etc Sample Student










Generation 1.5 Survey


Dear fellow EAP Instructor:

Thank you very much for your assistance in the LOEP Placement research. As a direct result of
your assistance, we were able to survey 470 EAP students and 27 EAP Instructors. Coding of the
data has begun, and we hope to have some answers by August.

As I spoke with each of you, I gained valuable insight into many of the issues we are facing. One of
the more obvious issues many of you expressed concern about was Generation 1.5. In the
surveys, students were asked a variety of demographic and personal history questions. It is hoped
that we can use some of these questions to reliably identify Gen 1.5 students early in the
placement process. However, to create a valid instrument that will reliably identify this
population, we need one more vital piece of evidence from you. I hope you will grant us this one
last petition of your time.

Below you will find a definition for Generation 1.5 students. Attached you will find your class
rosters with a check box titled "Generation 1.5?" Please read the definition below and then place
a check next to the names of the students you feel are members of Generation 1.5. Because you
know your students quite well by this point in the semester, this should take you no more than a
few minutes of your time. After you have identified the Generation 1.5 students, please place the
survey back in the envelope and mail it back to James May at Mail code 3-20. Thank you in
advance for your assistance.

Thanks Again

James May
Professor of English as a Second Language


Generation 1.5 Defined

Generation 1.5 students usually have come to the United States in their early teen or pre-teen
years. They have often attended U.S. schools, and many of these students have even graduated
from American high schools. While attending American schools, these students have had time to
acquire informal English. Many of them use American idiomatic expressions, and some may even
have American accents. Errors in their language are detectable, but they do not interfere with
understanding, and these students are comfortable speaking English and do so with relative ease.
Their reading, grammar, and writing skills, on the other hand, are usually behind those of their
college-ready peers. They are not what you may consider true ESL students, but they are not true
native English speaking students either.










Letter to Students


Dear EAP Student,

I am an EAP faculty member here at Valencia Community College. During the last ten years
there have been significant changes made to the English as a second language program offered
here at Valencia, including changes to the number and types of courses offered, how students are
placed into these courses, and how Valencia as an institution monitors the effectiveness of this
placement. Students and teachers have commented that some students are placed in classes that
are too easy or too difficult for them.

As a student here at Valencia you are in a position to provide valuable insight into how the
system is working. Along with this informed consent form, you have been given a survey. The
data gathered from this survey will provide me, the researcher, with the information necessary to
evaluate current practices and make recommendations to enhance practices in the future. You do
not have to answer any questions that you do not wish to answer on this survey. In addition to the
survey, I would also like to access your LOEP placement test scores and final course grades. You
are not required to allow me access to these records; however, this information is necessary for
the research and would be greatly appreciated. It is important that you know that your identity
will be kept confidential to the extent provided by law. The survey you fill out and the scores and
grades accessed in your records will be stored in a locked file cabinet in my office and will be
destroyed after the relevant data have been recorded.

There is no anticipated risk, compensation, or other direct benefit to you as a participant in this
research. You do not have to answer any questions you do not wish to answer or allow me access
to your records. You are free to withdraw your consent to participate and may discontinue your
participation in the research at any time without consequence. If you have any questions about
this research, please contact James May by e-mail jmay@valencia.cc.fl.us or phone 407-582-
2047, or my supervisor, Dr. Candace Harper at charper@coe.ufl.edu or (352) 392-9191. If you
have any questions or concerns about your rights as a research participant, you can also contact
the UFIRB office, University of Florida, Box 112250, Gainesville, FL 32611; ph (352) 392-0433.

Please sign the Informed Consent form before taking the survey. By signing this form you give
me permission to gather your scores and grades from your records and report your responses
anonymously in my research. Thank you for your help.

Sincerely,



James May, Valencia Community College










Informed Consent: Student


EAP Placement Study

I have read the letter explaining the purpose and procedures involved in this EAP placement
study. I voluntarily agree to participate in the study and to have my anonymous responses, scores,
and grades included in the research report.



Signature of Research Participant Date










Student Survey


Section I: Course and Student Identification Information

Directions: Circle the most appropriate answer, check the box, or fill in the blank with the
appropriate information (please print)

1. Campus: East [ ] West [ ] Osceola [ ]
2. Instructor's Name:
3. Course: EAP
4. Today's Date:
5. Your Last Name:
6. First Name:
7. Country of Origin:
8. Native Language:
9. Age:
10. Gender: Male [ ] Female [ ]

Section II: Student Background Information

Directions: Circle the most appropriate answer or fill in the blank with the appropriate
information (please print)

11. This semester is my (1st, 2nd, 3rd, 4th, ) semester at Valencia Community
College.
12. How many years have you lived in the Continental United States (or Alaska/Hawaii)?

13. What grade were you in when you started school in the United States?
14. (If the answer is college, circle college.)
15. What year did you graduate from high school? (or receive your
GED)?
16. Are you the first person in your family to go to college? (Yes No)
(If no, who was?)
17. Are you the first in your family to go to college in the United States? (Yes No)
(If no, who was?)
18. Have you gone to college outside of the United States? (Yes No)
19. If yes, for how many years?
20. How would you rate your abilities to write papers and do research in English? (Circle
one)
Poor Below average Average Above average Expert
21. How many of your good friends (people you do things with every week) are native
English speakers? (Circle one)
None A few Some Most All
22. How often do you use English when speaking with your friends? (Circle one)
Never Not often Sometimes Most of the time Always
23. How often does your family use English in your home? (Circle one)
Never Not often Sometimes Most of the time Always
24. Approximately how many books do you have in your home? (Circle one)
25 or less     25-50     50-75     75-100     100 or more
25. Do you have a computer at home? (Yes No)
26. How would you rate your abilities to use the computer?










Poor     Below average     Average     Above average     Expert


Section III: Locus of Control
Directions: Read each statement on this page and record your answer in the space provided on the
left of each item using the following answer key:

T = True, I agree with this statement
F = False, I do not agree with this statement.

27. College grades most often reflect the effort you put into classes.
28. I came to college because it was expected of me.
29. I have largely determined my own career goals.
30. Some people have a knack for writing, while others will never write well no matter
how hard they try.
31. At least once, I have taken a course because it was easy to get a good grade.
32. Professors sometimes make an early impression of you and then no matter what
you do, you cannot change that impression.
33. There are some subjects in which I could never do well.
34. Some students, such as student leaders and athletes, get free rides in college
classes.
35. I sometimes feel that there is nothing I can do to improve my situation.
36. I never feel really hopeless; there is always something I can do to improve my
situation.
37. I would never allow social activities to affect my studies.
38. There are many more important things for me than getting good grades.
39. Studying every day is important.
40. For some courses it is not important to go to class.
41. I consider myself highly motivated to achieve success in life.
42. I am a good writer.
43. Doing work on time is always important to me.
44. What I learn is more determined by college and course requirements than by what
I want to learn.
45. I have been known to spend a lot of time making decisions which others do not
take seriously.
46. I am easily distracted.
47. I can be easily talked out of studying.
48. I get depressed sometimes and then there is no way I can accomplish what I know
I should be doing.
49. Things will probably go wrong for me some time in the near future.
50. I keep changing my mind about my career goals.
51. I feel I will someday make a real contribution to the world if I work hard at it.
52. There has been at least one instance in school where social activity impaired my
academic performance.
53. I would like to graduate from college, but there are more important things in my
life.
54. I plan well and stick to my plans.

Section IV: Placement

Directions: Circle the most appropriate answer or fill in the blank with the appropriate
information (please print)













55. Do you feel that the EAP class you are in right now is at the right level for you? (Yes -
No)
56. If no, do you think you should be in a higher or lower level? (Higher Lower) Please
explain why you feel this way.














57. Are you currently taking any other courses? (Yes No)
58. If you are taking other courses, please list the course(s) you are taking in the spaces
provided below and indicate how you feel about being placed in the listed course(s). If you
think you were well placed, check the "Well Placed" box. If you think you should have
been placed in a different level, write in the level or course you think you should have
been placed into.


Course Name                    Well Placed?                    Should have been placed in?










Survey Script


Hello, my name is James May, and I am an EAP professor here at Valencia. During the last ten
years there have been significant changes made to the English as a second language program
offered here at Valencia, including changes to the number and types of courses offered, how
students are placed into these courses, and how Valencia as an institution monitors the
effectiveness of this placement.

As teachers and students here at Valencia you are in a position to provide valuable insight into
how the system is working. In the packet you have been given, you will find a brief letter
detailing the most important points of what I am telling you now, an informed consent form, and
a brief survey that will help Valencia to make decisions about our EAP program.

You do not have to answer any questions that you do not wish to answer on this survey. In
addition to the survey, I would also like to access your LOEP placement test scores and final
course grades. You are not required to allow me access to these records; however, this
information is necessary for the research and would be greatly appreciated.

It is important that you know that your identity will be kept confidential to the extent provided by
law. The survey you fill out and the scores and grades accessed in your records will be stored in a
locked file cabinet in my office and will be destroyed after the relevant data have been recorded.

You are free to withdraw your consent to participate and may discontinue your participation in
the research at any time without consequence. If you have any questions about this research,
please feel free to contact me. My contact information is on the front sheet of your packet. Please
rip off that sheet at this time and keep it for your records.

Are there any questions before I go any further?

If not, and you are willing to participate in the study, sign the informed consent form, and begin
the survey. When you are finished, please raise your hand, and I will come around and collect it.
If you have any questions during the survey, raise your hand and I will be around to answer them.
Thank you in advance for your help in this research.















APPENDIX C
COMPLETE RESULTS OF STUDENT/TEACHER SURVEYS

Table C-1. Native languages of EAP placement survey respondents listed by campus

All East Osceola West
respondents campus campus campus

Native (n = 466) (n = 109) (n = 122) (n = 210)
Language

Spanish 51.5 63.3 74.6 31.9
Creole 14.4 1.8 4.1 25.2
Arabic 7.3 7.3 5.7 8.6
Portuguese 3.4 1.8 3.3 4.8
French 3.0 2.8 1.6 4.3
Russian 2.8 0.9 3.3 2.9
Chinese 2.4 0.9 1.6 3.8
Vietnamese 2.1 2.8 3.3
English 1.3 1.8 0.8 1.0
Korean 1.1 1.8 1.4
Tagalog 1.1 1.8 0.8 0.5
Bengali 0.9 1.9
Gujarati 0.9 1.8 1.0
Farsi 0.6 0.9 1.0
Hindi 0.6 2.5
Moldavian 0.6 0.9 1.0
Polish 0.6 0.9 1.0
Urdu 0.6 0.9 0.8 0.5
Amharic 0.4 1.0
Bulgarian 0.4 0.9 0.5
Japanese 0.4 1.0
ASL 0.2 0.8
Armenian 0.2 0.5
Burmese 0.2 0.9
Dutch 0.2 0.9
Georgian 0.2 0.9
Indonesian 0.2 0.9
Krio 0.2 0.5
Latvian 0.2 0.5
Lithuanian 0.2 0.5
Papiamento 0.2 0.9









Table C-1: continued.
Serbian 0.2 0.5
Somali 0.2 0.9
Swahili 0.2 0.5
Thai 0.2 0.5
Tswana 0.2 0.5
Ukrainian 0.2 0.9
Note: Numbers represent the percentage of survey respondents in each category.

Table C-2. Countries of origin for EAP placement survey respondents listed by campus
Country of Origin    All respondents (n = 464)    East campus (n = 108)    Osceola campus (n = 122)    West campus (n = 208)

Colombia 17.24 18.52 25.41 11.54
Haiti 17.03 4.63 4.92 28.85
Puerto Rico 11.21 18.52 18.85 3.85
Morocco 4.74 3.70 4.92 5.77
Peru 4.53 4.63 7.38 3.37
Dominican 4.09 4.63 6.56 2.4
Republic
Brazil 3.45 1.85 3.28 4.81
Venezuela 3.02 0.93 5.74 2.4
Cuba 2.80 4.63 1.64 1.44
USA 2.37 3.70 1.64 2.4
Vietnam 2.16 2.78 3.37
Ecuador 1.94 1.85 3.28 0.96
India 1.51 1.85 2.46 0.96
Russia 1.51 0.93 2.46 0.96
China 1.08 1.64 1.44
Jordan 1.08 1.92
Korea 1.08 1.85 1.44
Nicaragua 1.08 0.93 2.46 0.48
Philippines 1.08 1.85 0.82 0.48
Bangladesh 0.86 1.92
Pakistan 0.86 0.93 1.64 0.48
Uruguay 0.86 0.93 0.82
Chile 0.65 0.93 0.82 0.48
Iran 0.65 0.93 0.96
Mexico 0.65 1.85 0.48
Panama 0.65 0.93 0.96
Poland 0.65 0.93 0.96
Argentina 0.43 0.93 0.48









Bulgaria 0.43 0.93 0.48
Egypt 0.43 0.96
Estonia 0.43 0.96
Ethiopia 0.43 0.96
Hong Kong 0.43 0.93
Japan 0.43 0.96
Lebanon 0.43 0.93 0.48
Mongolia 0.43 0.93 0.48
Paraguay 0.43 0.93 0.82
Surinam 0.43 0.93 0.48
Taiwan 0.43 0.96
Uzbekistan 0.43 0.82
Africa 0.22 0.48
Armenia 0.22 0.48
Aruba 0.22 0.93
Belarus 0.22 0.48
Botswana 0.22 0.48
Burma 0.22 0.93
Costa Rica 0.22 0.48
Denmark 0.22 0.93
El Salvador 0.22 0.48
Guatemala 0.22 0.48
Honduras 0.22 0.48
Latvia 0.22 0.48
Lithuania 0.22 0.48
Moldova 0.22 0.48
Palestine 0.22 0.82
Qatar 0.22 0.93
Russian Georgia 0.22 0.93
Serbia 0.22 0.48
Sierra Leone 0.22 0.48
Somalia 0.22 0.93
Syria 0.22 0.93
Thailand 0.22 0.48
Tunisia 0.22 0.82
Spain 0.22 0.93
Indonesia 0.22 0.93
Cameroon 0.22 0.48
Note: Numbers represent the percentage of survey respondents in each category.









Table C-3. Gender of EAP placement survey respondents listed by campus

Gender    All respondents (n = 467)    East campus (n = 109)    Osceola campus (n = 122)    West campus (n = 211)

Female 59.5 61.5 66.4 55.9
Male 40.5 38.5 33.6 44.1
Note: Numbers represent the percentage of survey respondents in each category.
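Each cell in Tables C-1 through C-13 is a simple category proportion within its column. As a worked illustration (the respondent count of 278 below is back-calculated from the published percentage, not taken from the raw survey data):

\[
\text{cell percentage} = \frac{\text{respondents in category}}{n_{\text{column}}} \times 100,
\qquad \text{e.g.,} \quad \frac{278}{467} \times 100 \approx 59.5\% \ \text{female among all respondents.}
\]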

Table C-4. Semester of enrollment for EAP placement survey respondents listed by
campus

Semester    All respondents (n = 466)    East campus (n = 107)    Osceola campus (n = 123)    West campus (n = 210)

1st Semester 28.1 32.7 24.4 28.6
2nd Semester 34.3 34.6 35.0 34.3
3rd Semester 26.6 22.4 30.9 24.8
4th Semester 7.7 6.5 4.9 10.0
5th Semester 1.7 2.8 1.6 1.4
6th Semester 0.9 0.9 1.6 0.5
7th Semester 0.4 0.8 0.5
8th Semester 0.2 0.8
Note: Numbers represent the percentage of survey respondents in each category.

Table C-5. Ages of EAP placement survey respondents listed by campus

Age    All respondents (n = 451)    East campus (n = 105)    Osceola campus (n = 121)    West campus (n = 201)

17 0.44 - 0.83 0.50
18 6.65 11.43 9.09 2.99
19 11.97 13.33 15.70 9.95
20 9.98 11.43 11.57 9.45
21 11.97 8.57 13.22 12.44
22 4.43 6.67 4.13 3.98
23 5.54 5.71 1.65 7.46
24 3.99 2.86 3.31 4.98
25 3.99 3.81 4.96 3.48
26 4.21 7.62 3.31 2.49
27 1.55 1.90 - 2.49
28 2.44 0.95 1.65 3.98
29 2.22 0.95 1.65 3.48
30 2.44 - 2.48 2.49
31 2.00 1.90 2.48 1.99
32 3.10 3.81 3.31 2.99
33 2.00 1.90 3.31 1.49
34 0.89 1.90 0.83 0.50
35 2.22 0.95 0.83 2.99
36 1.77 2.86 1.65 1.49
37 0.44 - - 1.00
38 1.77 0.95 0.83 2.99
39 2.66 0.95 1.65 3.48
40 2.22 2.86 0.83 2.49
41 0.67 - 0.83 1.00
42 1.11 - 1.65 1.49
43 0.67 0.95 - 1.00
44 0.44 - 0.83 0.50
45 1.11 0.95 0.83 1.00
46 0.67 0.95 0.83 -
47 0.89 0.95 0.83 1.00
48 0.22 - - 0.50
49 0.89 - 3.31 -
50 0.44 0.95 - 0.50
51 0.22 - 0.83 -
52 0.44 0.95 - -
53 0.67 0.95 0.83 0.50
55 0.44 - - 0.50
59 0.22 - - 0.50
Note: Numbers represent the percentage of survey respondents in each category; a dash indicates no respondents of that age at that campus.










Table C-6. Number of Years in the U.S. for placement survey respondents listed by
campus

Years in U.S.    All respondents (n = 461)    East campus (n = 107)    Osceola campus (n = 122)    West campus (n = 207)

< 1 Year 1.3 3.7 1.0
1 Year 7.2 8.4 5.7 8.2
2 Years 12.4 10.3 11.5 12.1
3 Years 15.4 15.0 16.4 14.5
4 Years 10.8 13.1 9.8 10.6
5 Years 12.8 15.0 11.5 12.1
6 Years 10.6 11.2 7.4 13.0
7 Years 6.1 2.8 5.7 8.7
8 Years 3.5 3.7 3.3 3.4
9 Years 2.0 3.3 2.4
10 Years 4.1 3.7 6.6 2.4
11 Years 1.1 2.5 1.0
12 Years 1.7 0.9 3.3 1.0
13 Years 1.1 1.9 1.6 0.5
14 Years 1.1 0.9 1.6 1.0
15 Years 2.4 1.9 3.3 2.4
16 Years 0.9 1.9 0.5
17 Years 0.9 0.9 1.6 0.5
18 Years 0.9 1.9 1.0
19 Years 0.4 0.8 0.5
20 Years 0.9 1.9
22 Years 0.4 0.9 0.8
23 Years 0.4 0.8 0.5
25 Years 0.4 0.8
27 Years 0.2 0.8
28 Years 0.2 0.5
29 Years 0.2 0.9
30 Years 0.2 0.5
33 Years 0.2 0.9
34 Years 0.2 0.8
Note: Numbers represent the percentage of survey respondents in each category.










Table C-7. Survey responses to, "What year did you graduate from high school/earn
GED?" listed by campus

Year    All respondents (n = 435)    East campus (n = 102)    Osceola campus (n = 114)    West campus (n = 195)

2006 0.2 0.9
2005 19.8 25.5 22.8 16.9
2004 12.2 11.8 14.9 11.8
2003 8.3 7.8 8.8 8.2
2002 5.5 2.9 5.3 6.2
2001 5.1 3.9 4.4 6.2
2000 6.0 3.9 3.5 8.2
1999 4.8 6.9 2.6 4.6
1998 2.8 3.9 3.5 1.0
1997 2.3 3.5 2.1
1996 2.1 3.9 0.9 2.1
1995 3.7 4.9 2.6 4.1
1994 3.4 1.0 3.5 4.6
1993 2.1 1.8 3.1
1992 1.8 2.9 2.6
1991 2.3 3.9 2.6 1.0
1990 1.4 1.0 1.8 1.0
1989 1.1 1.8 1.5
1988 0.7 1.0 0.9 0.5
1987 1.4 2.0 2.1
1986 1.4 0.9 2.6
1985 1.4 1.0 1.5
1984 1.4 1.0 1.8 1.5
1983 1.8 2.0 2.6
1982 1.1 1.0 2.6 0.5
1981 0.7 2.0 0.5
1980 1.4 2.0 2.6 0.5
1979 0.2 0.5
1978 0.2 1.0
1977 0.5 1.0 0.9
1975 0.7 2.6
1974 0.5 1.0
1973 0.5 1.0 0.9
1972 0.2 0.9
1971 0.5 1.0 0.9
1970 0.2
1969 0.5 1.0
Note: Numbers represent the percentage of survey respondents in each category.









Table C-8. Survey responses to, "Are you the first person in your family to go to
college?"

Answer    All respondents (n = 465)    East campus (n = 109)    Osceola campus (n = 122)    West campus (n = 208)

No 72.26 70.64 77.87 69.71
Yes 27.74 29.36 22.13 30.29
Note: Numbers represent the percentage of survey respondents in each category.

Table C-9. Survey responses to, "If you weren't the first person in your family to go to
college, who was?"


All Respondents


Answer (n = 295)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71

East Campus

Answer (n = 68)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71

Osceola Campus

Answer (n = 82)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71

West Campus

Answer (n = 129)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71
Note: Numbers represent the percentage of survey respondents in each category.


Table C-10. Survey responses to, "Are you the first person in your family to go to college
in the U.S.?"

Answer    All respondents (n = 464)    East campus (n = 108)    Osceola campus (n = 121)    West campus (n = 209)

No 37.50 40.74 37.19 35.89
Yes 62.50 59.26 62.81 64.11
Note: Numbers represent the percentage of survey respondents in each category.


Table C-11. Survey responses to, "If you weren't the first person in your family to go to
college in the U.S., who was?"


All Respondents


Answer (n = 147)
My child has gone to college 4.08
My siblings/cousins or spouse has gone to college 82.31
My parents or their siblings have gone to college 12.93
My grandparents have gone to college 0.68

East Campus

Answer (n = 41)
My child has gone to college 2.44
My siblings/cousins or spouse has gone to college 78.05
My parents or their siblings have gone to college 17.07
My grandparents have gone to college 2.44

Osceola Campus

Answer (n = 37)
My child has gone to college 5.41
My siblings/cousins or spouse has gone to college 91.89
My parents or their siblings have gone to college 2.70
My grandparents have gone to college











West Campus


Answer (n = 61)
My child has gone to college 3.28
My siblings/cousins or spouse has gone to college 80.33
My parents or their siblings have gone to college 16.39
My grandparents have gone to college
Note: Numbers represent the percentage of survey respondents in each category.


Table C-12. Survey responses to, "Have you gone to college outside of the United
States?"

Answer    All respondents (n = 464)    East campus (n = 109)    Osceola campus (n = 121)    West campus (n = 210)

No 60.78 61.47 66.94 60.00
Yes 39.22 38.53 33.06 40.00
Note: Numbers represent the percentage of survey respondents in each category.


Table C-13. Number of years spent in college outside of the U.S. by placement survey
respondents listed by campus

Years    All respondents (n = 185)    East campus (n = 44)    Osceola campus (n = 42)    West campus (n = 82)

< 1 Year 1.1 2.3 2.4
1 Year 16.8 13.6 28.6 13.4
2 Years 20.5 20.5 23.8 19.5
3 Years 22.2 18.2 11.9 28.0
4 Years 17.8 25.0 14.3 17.1
5 Years 11.9 13.6 7.1 13.4
6 Years 4.9 2.3 11.9 1.2
7 Years 1.6 2.3 2.4
8 Years 1.6 2.4
9 Years 0.5 2.3
10 Years 1.1 2.4
Note: Numbers represent the percentage of survey respondents in each category.









LIST OF REFERENCES


AACC. (2007). American Association of Community Colleges: Fast facts. Retrieved May 30,
2007, from http://www.aacc.nche.edu/Content/NavigationMenu/AboutCommunityColleges/FastFacts/Fast_Facts.htm
Web site: http://www2.aacc.nche.edu/pdf/factsheet2007_updated.pdf

Abedi, J. (2006). Psychometric issues in ELL assessment and special education eligibility.
Teachers College Record, 108(11), 2282-2303.

Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on
English language learners' test performance. Educational Measurement: Issues and Practice,
19(3), 16-26.

Abedi, J., Lord, C., & Hofstetter, C. (1998). Impact of selected background variables on students'
NAEP math performance (CSE Tech. Rep. No. 478). Los Angeles: University of
California, National Center for Research on Evaluation, Standards and Student Testing.

Abedi, J., Lord, C., Kim-Boscardin, C., & Miyoshi, J. (2000). The effects of accommodations on
the assessment of LEP students in NAEP (CSE Tech. Rep. No. 537). Los Angeles:
University of California, National Center for Research on Evaluation, Standards and
Student Testing.

Abedi, J., Lord, C., & Plummer, J. (1997). Language background as a variable in NAEP. Los
Angeles: University of California, National Center for Research on Evaluation, Standards
and Student Testing.

Abraham, A. A., & Creech, J. D. (2002). Reducing remedial education: What progress are states
making? Educational Benchmarks 2000 Series. Retrieved November 10, 2003, from
http://www.sreb.org/main/Benchmarks2000/remedial.pdf

Abramson, L. Y., Seligman, M. E. P., & Teasdale, J. D. (1978). Learned helplessness in humans:
Critique and reformulation. Journal of Abnormal Psychology, 87, 49-74.

Abstien, J. (1998). Be it ever so humble, there's no place like... yada, yada, yada. Florida
Community College Advocate, 1(4), 8-9.

Alexander, P. A., & Judy, J. E. (1988). The interaction of domain-specific and strategic
knowledge in academic performance. Review of Educational Research, 58, 375-404.

American College Testing Program. (1990). ASSET technical manual for use with forms B and
C. Iowa City, IA: Author.

American Educational Research Association, American Psychological Association, & National
Council on Measurement in Education. (1999). Standards for educational and
psychological testing. Washington, DC: American Educational Research Association.









Armstrong, W. B. (1994). English placement testing, multiple measures, and disproportionate
impact: An analysis of the criterion- and content-related validity evidence of the reading
& writing placement tests in the San Diego Community College District. San Diego, CA:
San Diego Community College District, Research and Planning. (ERIC Document
Reproduction Service No. ED 398 965)

Artiles, A. J., Rueda, R., Salazar, J. J., & Higareda, I. (2005). Within-group diversity in minority
disproportionate representation: English language learners in urban school districts.
Exceptional Children, 71(3), 1-17.

Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford
University Press.

Bachman, L. F., & Palmer, A. S. (1983). The construct validity of the FSI Oral Interview. In J. W.
Oller, Jr. (Ed.), Issues in language testing research (pp. 154-169). Rowley, MA:
Newbury House.

Bachman, L. F., & Palmer, A. S. (1989). The construct validation of self-ratings of
communicative language ability. Language Testing, 6, 14-20.

Banta, T. W., Rudolph, L. B., Van Dyke, J., & Fisher, H. S. (1996). Performance funding comes
of age in Tennessee. Journal of Higher Education, 67, 23-45.

BEBR. (2005). Population projections by age, sex, race, and Hispanic origin for Florida and
its counties 2005-2030. Bureau of Economic and Business Research, FPS Bulletin 145.

Belcher, M. J. (1993). Preparedness of high school graduates for college: A statewide look at
basic skills tests results 1990-91 [Information Capsule No. 93-01C]. Miami, FL: Office
of Institutional Research at Miami-Dade Community College. (ERIC Document
Reproduction Service No. ED 366 394).

Behrman, E. H., & Street, C. (2005). The validity of using a content-specific reading
comprehension test for college placement. Journal of College Reading and Learning,
35(2), Spring 2005.

Blanche, P., & Merino, B. (1989). Self-assessment of foreign language skills: Implications for
teachers and researchers. Language Learning, 39, 313-340.

Blumenthal, A. J. (2002). English as a second language at the community college: An
exploration of context and concerns. New Directions for Community Colleges, 117, 45.

Byrnes, J.P. (1995). Domain specificity and the logic of using general ability as an independent
variable or covariate. Merrill-Palmer Quarterly, 41, 1-24.

Canale, M. & Swain, M. (1980). Theoretical bases of communicative approaches to second-
language teaching and testing. Applied Linguistics, 1 (1), 1-47.









Carroll, J.B. (1961). Fundamental considerations in testing for English proficiency of foreign
students. In Testing the English proficiency of foreign students (pp. 31-40). Washington,
D.C.: Center for Applied Linguistics. Also in H.B. Allen and R.N. Campbell, (Eds.),
Teaching English as a second language: A book of readings (pp. 313-320). New York:
McGraw Hill.

Carroll, J.B. (1983). Psychometric theory and language testing. In J.W. Oller, Jr., (Ed.), Issues in
language testing research (pp. 80-107). Rowley, MA: Newbury House.

CCCC Committee on Second Language Writing. (2001). CCCC statement on second-language
writing and writers. College Composition and Communication, 52(4), 669-674.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton & Co.

Christensen, L., Fitzpatrick, R., Murie, R., & Zhang, X. (2005). Building voice and developing
academic literacy for multilingual students: The Commanding English model. In J. L.
Higbee, D. B. Lundell, & D. R. Arendale (Eds.), The General College vision: Integrating
intellectual growth, multicultural perspectives and student development (pp. 155-184).
Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy,
General College, University of Minnesota.

Clark, J. L. D. (1981). Language. In T. S. Barrows, S. M. Ager, M. F. Bennett, H. I. Braun, J. L.
D. Clark, L. G. Harris, & S. F. Klein, College students' knowledge and beliefs: A survey
of global understanding (pp. 25-35). New Rochelle, NY: Change Magazine Press.

Clifford, M. M. (1976). A revised measure of locus of control. Child Study Journal, 6, 85-90.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale,
NJ: Lawrence Erlbaum Associates.

Cohen, A. (2002). America's community colleges: On the ascent. Retrieved April 1, 2007, from
U.S. Department of State, eJournal USA Web site:
http://usinfo.state.gov/jourals/itsv/0602/ijse/cohen.htm#top

College Board. (2001). ACCUPLACER coordinator's guide. New York, NY: Author.

College Board. (2003). ACCUPLACER online: Technical manual. New York, NY: Author.

College Board. (2004). ACCUPLACER coordinator's guide. New York, NY: Author.

College of the Canyons. (1994). Predictive validity study of the APS writing and reading tests
and validating placement rules for the APS writing test. (ERIC Document Reproduction
Service No. ED 376915)

College of the Canyons Office of Institutional Development. (1996). Disproportionate impact
study. Valencia, CA: Office of Institutional Development for College of the Canyons.
(ERIC Document Reproduction Service No. ED 401 982).









Coombe, C. (1992). The relationship between self-assessment ratings of functional skills and
basic English skills results in adult refugee ESL learners. Unpublished doctoral
dissertation, Ohio State University, Columbus.

Crandall, V. C., Katkovsky, W. A., & Crandall, V. J. (1965). Children's beliefs in their own
control of reinforcements in intellectual-achievement situations. Child Development,
36, 91-109.

Culbertson, W. L. (1997). Improving predictive accuracy for placement in entry level college
mathematics courses using available student information (Doctoral dissertation, The
University of Toledo, 1997). Dissertation Abstracts International, 58, A0395.

Cummings, S. W. (1991). The English placement practices of fifteen selected Southern
California community colleges (Doctoral dissertation, University of Southern California,
1991). Dissertation Abstracts International, A52/06, 1958.

Cunningham, J. M. (1983). An evaluation of English placement instruments for first term
freshmen at Embry-Riddle Aeronautical University [CD-ROM]. Abstract from: UMI
ProQuest Digital Dissertations: Dissertation Abstract Item: 8315061.

Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human
behavior. New York: Plenum.

Dornyei, Z. (2003). Attitudes, orientations, and motivations in language learning: Advances in
theory, research, and applications. Ann Arbor: Blackwell.

Estrada, L., Dupoux, E., & Wolman, C. (2005). The personal-emotional social adjustment of
English-language learners to a community college. Community College Journal of
Research & Practice, 29(7), 557.

Evola, J., Mamer, E., & Lentz, B. (1980). Discrete point versus global scoring for cohesive
devices. In J. W. Oller, Jr., & K. Perkins (Eds.), Research in language testing (pp.
177-181). Rowley, MA: Newbury House.

Farhady, H. (1983). On the plausibility of the unitary language proficiency factor. In J. W. Oller,
Jr. (Ed.), Issues in language testing research (pp. 11-28). Rowley, MA: Newbury House.

Feldt, R. C. (1989). Reading comprehension and critical thinking as predictors of course
performance. Perceptual and Motor Skills, 68, 642.

Florida Department of Education. (2005). Developmental education in Florida community
colleges: Appendix A. Retrieved Feb 13, 2007, from Florida Department of Education
Web site: http://www.fldoe.org/cc/Vision/PDFs/PR2005_05App.pdf

Florida Statutes. (2003). Title XLVIII K-20 code, chapter 1008. Retrieved June 15, 2004, from
http://www.flsenate.gove/Statutes/index.cfm?Appmode=Display_Statute&URL=Ch1008/ch1008.htm









Ford, D. Y. (1994). An investigation of the paradox of underachievement among gifted black
students. Roeper Review, 16(2), 78-85.

Fouly, K. A., Bachman, L. F., & Cziko, G. A. (1990). The divisibility of language competence: A
confirmatory approach. Language Learning, 40(1), 1-21.

Galbraith, F. L. (1986). The use of multiple choice items and holistically scored writing samples
to assess student writing ability [CD-ROM]. Abstract from: UMI ProQuest Digital
Dissertations: Dissertation Abstract Item: 8626947.

Gardner, R. C., & Lambert, W. (1972). Attitudes and motivation in second language learning.
Rowley, MA: Newbury House.

Garrow, J. R. (1989). Assessing and improving the adequacy of college composition placement
[CD-ROM]. Abstract from: UMI ProQuest Digital Dissertations: Dissertation Abstract
Item: 8921432.

Gerald, D. (2000). Projections of education statistics to 2010 (Chapter 2). Retrieved September
10, 2001, from http://nces.ed.gov/pubs2000/projections/chapter2.html

Gibian, G. (1951). College English for foreign students. College English, 13(3), 157-160.

Goen, S., Porter, P., Swanson, D., & Vandommelen, D. (2002). Generation 1.5. The CATESOL
Journal, 14(1), 103-105.

Goodman, J. F., Freed, B., & McManus, W. J. (1990). Determining exemptions from foreign
language requirements: Use of the Modern Language Aptitude Test. Contemporary
Educational Psychology, 15(2), 131-141.

Grant, R. A., & Wong, S. D. (2003). Barriers to literacy for language-minority learners: An
argument for change in the literacy education profession. Journal of Adolescent & Adult
Literacy, 46(5), 386-394.

Grunder, P. G., & Hellmich, D. M. (1996). Academic persistence and achievement of remedial
students in a community college's college success program. Community College Review,
24(2), 21-33.

Harklau, L. (2000). From the "good kids" to the "worst": Representations of English language
learners across educational settings. TESOL Quarterly, 34(1), 35-67.

Harklau, L. (2003). Generation 1.5 students in college writing. Retrieved December 26, 2004,
from Washington, DC: Center for Applied Linguistics Web site:
http://www.cal.org/resources/digest/0305harklau.html

Harklau, L., Losey, K. M., & Siegal, M. (Eds.). (1999). Generation 1.5 meets college
composition: Issues in the teaching of writing to U.S.-educated learners of ESL. Mahwah,
NJ: Erlbaum.









Harris, R.J. (1975). A primer of multivariate statistics. New York: Academic.


Hirshy, P., & Mack, Q. (2001). A comparison of student success among Asnuntuck
Community College elementary algebra students placed by ACCUPLACER, Scholastic
Aptitude Test score, or prerequisite course. Paper presented at the North East Association
for Institutional Research Annual Conference, Cambridge, MA. (ERIC Document
Reproduction Service No. ED 465362)

Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.),
Sociolinguistics. Harmondsworth, England: Penguin Books.

Ignash, J. M. (1995). Encouraging ESL student persistence: The influence of policy on
curriculum design. Community College Review, 23(3), 17-34.

Isonio, S. (1994). Relationship between APS writing test scores and instructor preparedness
ratings: Further evidence for validity. Huntington Beach, CA: Golden West College.
(ERIC Document Reproduction Service No. ED 370 617).

Ives, S. (1953). Help for the foreign students. College Composition and Communication, 4(11),
141-144.

James, C. (2006). Validating a computerized scoring system for assessing writing and placing
students in composition courses. Assessing Writing, 11(3), 167-178.

Jones, J., & Jackson, R. (1991). The impact of writing placement testing and remedial writing
programs on student ethnic populations at Oxnard College [Research Report #91-02].
Oxnard, CA: Oxnard College. (ERIC Document Reproduction Service No. ED 335 081).

Kessler, R. P. (1987). Can reading placement scores predict classroom performance? A
discriminant analysis. Santa Ana, CA: Rancho Santiago Community College District.
(ERIC Document Reproduction Service No. ED 291440)

Krashen, S. (1982). Principles and practice in second language acquisition. Oxford, UK:
Pergamon Press.

Lao, R. C. (1980). Differential factors affecting male and female academic performance in high
school. Journal of Psychology, 104, 119-127.

Lay, N. D. S., Carro, G., Tien, S., Niemann, T. C., & Leong, S. (1999). Connections: High
school to college. In L. Harklau, K. Losey, & M. Siegal (Eds.), Language minority
students, ESL, and college composition (pp. 175-190). Mahwah, NJ: Erlbaum.

LeBlanc, R., & Painchaud, G. (1985). Self-assessment as a second language placement
instrument. TESOL Quarterly, 19, 673-687.









Lee, Y. (2005). A summary of construct validation of an English for academic purposes
placement test. Retrieved May 6, 2006, from Working Papers in Second or Foreign
Language Assessment Web site:
http://www.lsa.umich.edu/eli/spaan/papers2005/spaan_working_papers_v3_FULL.pdf

Lefcourt, H. M., VonBaeyer, C. L., Ware, E. E., & Cox, D. J. (1979). The multidimensional-
multiattributional causality scale: The development of a goal-specific locus of control
scale. Canadian Journal of Behavioural Science, 11, 286-304.

Lofland, J., & Lofland, L. H. (1995). Analyzing social settings: A guide to qualitative observation
and analysis (3rd ed.). Belmont, CA: Wadsworth Publishing Co.

Lynch, B., Davidson, F., & Henning, G. (1988). Person dimensionality in language test
validation. Language Testing, 5(2), 206-219.

Matsuda, P. K., Canagarajah, A. S., Harklau, L., Hyland, K., & Warschauer, M. (2003).
Changing currents in second language writing research: A colloquium. Journal of Second
Language Writing, 12(2), 151-179.

Miele, C. (2003). Bergen Community College meets Generation 1.5. Community College Journal
of Research and Practice, 27, 603-612.

Moore, R., & Christiansen, L. (2005). Academic behaviors and performances of Generation 1.5
students who succeed in college science courses. The Learning Assistance Review,
10(2), 17-29.

Moritz, C. (1995). Self-assessment of foreign language proficiency: A critical analysis of issues
and a study of cognitive orientations of French learners. Unpublished doctoral
dissertation, Cornell University, Ithaca, NY.

Myles, J. (2002). Second language writing and research: The writing process and error analysis
in student texts. Retrieved December 13, 2004, from TESL-EJ Web site:
http://cwp60.berkely.edu:16080/TESL-EJ/ej22/a1.html

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Ogden, E. P., & Trice, A. D. (1986). The predictive validity of the Academic Locus of Control
Scale for college students: Freshman outcomes. Journal of Social Behavior and
Personality, 1, 649-652.

Oller, J. W., Jr. (1972). Scoring methods and difficulty levels for cloze tests of proficiency in
English as a second language. Modern Language Journal, 56, 151-158.

Oller, J.W., Jr. (1979). Language tests at school. London: Longman.

Oller, J.W., Jr. (1983). A consensus for the 80's? In J.W. Oller, Jr., (Ed.), Issues in language
testing research (pp. 351-356). Rowley, MA: Newbury House.









Oller, J. W., Jr., & Perkins, K. (Eds.). (1978). Language in education: Testing the tests.
Rowley, MA: Newbury House.

Oller, J. W., Jr., & Perkins, K. (Eds.). (1980). Research in language testing. Rowley, MA:
Newbury House.

Oller, J. W. (1992). Language testing research: Lessons applied to LEP students and programs.
Proceedings of the Second National Research Symposium on Limited English Proficient
Student Issues: Focus on Evaluation and Measurement. OBEMLA.

Oltman, P. K., Stricker, L. J., & Barrows, T. S. (1990). Analyzing test structure by
multidimensional scaling. Journal of Applied Psychology, 75, 21-27.

Ortiz, A. A., & Yates, J. R. (1983). Incidence of exceptionality among Hispanics: Implications
for manpower planning. NABE Journal, 7, 41-54.

Padron, E. J. (1997). Entry-level placement scores for the 1996-97 academic year. Miami-Dade
Community College, FL. June 1, 1997, memorandum. (ERIC Document Reproduction
Service No. ED 409927)

Park, Y. S. (1998). Locus of control: Attributional style and academic achievement comparative
analysis of Korean-Chinese and Chinese students. Asian Journal of Social Psychology,
1(2).

Peirce, B. M., Swain, M., & Hart, D. (1993). Self-assessment, French immersion, and locus of
control. Applied Linguistics, 14, 25-42.

Phakiti, A. (2005). An empirical investigation into the nature of and factors affecting test takers'
calibration within the context of an English placement test. Retrieved May 16, 2006,
from Working Papers in Second or Foreign Language Assessment Web site:
http://www.lsa.umich.edu/eli/spaan/papers2005/spaan_working_papers_v3_FULL.pdf

Rand, E. (1972). Integrative and discrete-point tests at UCLA. Work papers in TESL: UCLA, 6,
67-78.

Reid, J. M. (1997). Which nonnative speaker? Differences between international students and
U.S. resident (language minority) students. New Directions for Teaching and Learning,
70, 17-27.

Reimanis, G. (1980). Locus of control and anomie in Western and African cultures. Journal of
Social Psychology, 112(2).

Rita, E. S. (1997). The effect of computer-assisted student development programs on entering
freshmen locus of control. College Student Journal, 31(1), 80.

Roberge, M. M. (2002). California's Generation 1.5 immigrants: What experiences,
characteristics, and needs do they bring to our English classes? The CATESOL Journal,
14(1), 107-129.









Ross, S. (1998). Self-assessment in second language testing: A meta analysis and analysis of
experiential factors. Language Testing, 15(1), 1-20.

Rotter, J. B. (1966). Generalized expectancies for internal versus external control of
reinforcement. Psychological Monographs, 80(1), 1-28.

Rotter, J. B. (1975). Some problems and misconceptions related to the construct of internal
versus external control of reinforcement. Journal of Consulting and Clinical Psychology,
48, 56-67.

Ruiz-de-Velasco, J., & Fix, M. (2000). Overlooked and Underserved: Immigrant students in U.S.
Secondary Schools. Washington, DC: The Urban Institute Press.

Rumbaut, R. G., & Ima, K. (1988). The adaptation of Southeast Asian refugee youth: A
comparative study. Final report to the Office of Refugee Resettlement. San Diego: San
Diego State University. (ERIC Document Reproduction Service No. ED 299 372)

Schmidt, F. L. (1971). The relative efficiency of regression and simple unit predictor weights in
applied differential psychology. Educational and Psychological Measurement, 31,
699-714.

Smith, J. M. (1973). A quick measure of achievement motivation. British Journal of Social and
Clinical Psychology, 12, 137-143.

Smith, N. B. (1983). The relationship of selected variables to persistence and achievement in a
community junior college (Doctoral dissertation, Auburn University, 1983). Dissertation
Abstracts International, 44, A0078.

Smittle, P. (1996). ACCUPLACER: Foundation for high school/college testing project to
enhance college readiness. Presentation at the annual meeting of the Florida
developmental education association. Daytona Beach, FL.

Smoke, T. (2001). Mainstreaming writing: What does this mean for ESL students? In G.
McNenny (Ed.), Mainstreaming basic writers: Politics and pedagogies of access (pp.
193-214). Mahwah, NJ: Erlbaum.

Spack, R. (2004). The acquisition of academic literacy via second language: A longitudinal case
study, updated. In V. Zamel & R. Spack (Eds.), Crossing the curriculum: Multilingual
learners in college classrooms (pp. 3-17). Mahwah, NJ: Lawrence Erlbaum Associates.

Spann-Kirk, E. M. (1991). A validity assessment of a mathematics placement test used with
entering students at Mott Community College (Doctoral dissertation, Wayne State
University, 1991). Dissertation Abstracts International, 53, A1044.

Saunders, P. I. (2000). Meeting the needs of entering students through appropriate placement in
entry-level writing courses. Saint Louis, MO: Saint Louis Community College at Forest
Park. (ERIC Document Reproduction Service No. ED 447505)









Sawyer, R. (1989). Validating the use of ACT Assessment scores and high school grades for
remedial course placement in college (ACT Research Report Series 89-4). Iowa City, IA:
ACT. (ERIC Document Reproduction Service No. ED 322 163)

Shepard, S. (2006). Locus of control and academic achievement in high school students.
Psychological Reports, 98(2), 318-322.

Smittle, P. (1995). Academic performance predictors for community college student assessment.
Community College Review, 23(2), 37-43.

Swanson, C. (2004). Between old country and new: Academic advising for immigrant students.
In I. M. Duranczyk, J. L. Higbee, & D. B. Lundell (Eds.), Best practices for access and
retention in higher education (pp. 73-81). Minneapolis, MN: Center for Research on
Developmental Education and Urban Literacy, General College, University of Minnesota.

Thonus, T. (2003). Serving generation 1.5 learners in the university writing center. TESOL
Journal, 12(1), 17-24.

Tremblay, P. F. & Gardner, R. (1995). Expanding the motivation construct in language learning.
Modern Language Journal, 79, 505-518.

Trice, A. D. (1985). An academic locus of control scale for college students. Perceptual and
Motor Skills, 61, 1043-1046.

Trice, A. D., Ogden, E. P., Stevens, W., & Booth, J. V. (1987). Concurrent validity of the
Academic Locus of Control Scale. Educational and Psychological Measurement.

Troike, R. (1969). Receptive competence, productive competence, and performance. In Papers of
the 20th Georgetown University Round Table. Washington, DC: Georgetown University
Press.

Upshur, J. A. (1973). Context for language testing. In J. W. Oller & K. Perkins (Eds.), Research
in language testing (pp. 200-213). Rowley, MA: Newbury House.

Upshur, J. A., & Homburg, T. J. (1983). Some relations among language tests at successive
ability levels. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 188-202).
Rowley, MA: Newbury House.

Valdes, G. (1992). Bilingual minorities and language issues in writing. Written Communication,
9, 85-136.

Valencia Community College. (2007). VCC college catalog. Retrieved April 1, 2007, from 2006-
2007 Official Catalog Web site: http://valenciacc.edu/catalog/05-06/default.htm

Valencia Community College. (2006). Who are we: Fast facts & statistics. Retrieved Feb 13,
2007, from Valencia Community College Web site:
http://valenciacc.edu/AboutUs/whoweare/fastfacts.cfm









Wang, D. (2005). Students' learning and locus of control in web-supplemental instruction.
Innovative Higher Education, 30(1), 67-83.

Weiner, B. (1992). Human motivation: Metaphors, theories, and research. Newbury Park, CA:
Sage.

Wink, J. (1999). Critical pedagogy: Notes from the real world (2nd ed.). Boston: Allyn & Bacon.

White, E. M. (1990). Language and reality in writing assessment. CCC, 41(2), 187-200.

Wolcott, W. (1996). Evaluating a basic writing program. Journal of Basic Writing 15(1), 57-69.

Wolcott, W., & Legg, S. M. (1998). An overview of writing assessment: Theory, research and
practice. Urbana, IL: National Council of Teachers of English.

Zamel, V. (2004). Strangers in academia: The experiences of faculty and ESOL students across
the curriculum. In V. Zamel & R. Spack (Eds.), Crossing the curriculum: Multilingual
learners in college classrooms (pp. 3-17). Mahwah, NJ: Lawrence Erlbaum Associates.

Zinn, A. (1988). Ideas in practice: Assessing writing in the developmental classroom. Journal of
Developmental Education, 22(2), 28-39.









BIOGRAPHICAL SKETCH

James May has been an instructor of English for academic purposes (EAP) at Valencia

Community College in Orlando, Florida for the past 6 years. Before that, he worked for three

years as an adjunct ESL instructor at Santa Fe Community College in Gainesville, Florida. James

earned his Master of Education degree in ESOL from the University of Florida in 1999. Prior to

that, he was a Korean and Spanish linguist for the U.S. Army. His undergraduate degree is also

from the University of Florida, a B.A. in Spanish literature. James has traveled the world

extensively, has taught English as a foreign language in both Mexico and Korea, and has studied

Spanish, Portuguese, Korean, and American Sign Language. While earning tenure at Valencia,

James developed a special English as a second language program to teach English to the deaf.

Although his true passions are learning and teaching language, James also writes, trains, and

consults on how best to work with second language populations at the community college level.





PAGE 1

ANALYZING THE PLACEMENT OF COMMUNITY COLLEGE STUDENTS IN ENGLISH AS A SECOND LANGUAGE FOR AC ADEMIC PURPOSES (EAP) COURSES By JAMES S. MAY A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLOR IDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION UNIVERSITY OF FLORIDA 2007

PAGE 2

2 2007 by James S. May

PAGE 3

3 To Sharon, Aidan, Collin, & Sabrina

PAGE 4

4 ACKNOWLEDGMENTS This dissertation would not have been possibl e without the support of my wife. Sharon has been the inspiration for many of my achievements. In addition, her extraordinary support in every area of our lives including taking on extra responsibilities w ith and providing for the needs of our three beautiful children Aidan, Collin, and Sabrina has made it possible for me to complete this project. For as long as I can remember, my parents, No rman and Judy, have always been there to support my academic endeavors. Their unwaveri ng support and constant prodding have helped me to get this dissertation finishe d. I also want to thank my sister, Patrice, and my wifes parents, Lois and Dick. Their continued su pport has meant a lot to me. The guidance, support, and encouragement give n by my dissertation advisor, Dr. Candace Harper, has kept me on track and focused on my goals. Her knowledge an d experience cleared up many confusing issues surrounding the writin g of my doctoral dissertation. My committee members, Dr. Miller, Dr. Swain, and Dr. Thompson have all been instrumental in my reaching this point in my academic life. Without input fr om each of them, I would never have gotten to where I am today. My dissertation team has always been there when I have needed them. I thank them all. Other individuals who must be recognized in clude my closest colleagues at work, Maiken, Sarah, Andrew, and Summer, who have all in on e way or another supported me or been a sounding board for ideas throughout my dissertation process. Words of final thanks go to the rest of my colleagues, my students, and my frie nds who have supported and offered words of encouragement throughout the co mpletion of my dissertation.

PAGE 5

5 TABLE OF CONTENTS page ACKNOWLEDGMENTS...............................................................................................................4 LIST OF TABLES................................................................................................................. ..........8 LIST OF FIGURES................................................................................................................ .......10 ABSTRACT....................................................................................................................... ............12 CHAPTER 1 INTRODUCTION..................................................................................................................14 Impetus for the Study.......................................................................................................... ....14 Purpose and Rationale.......................................................................................................... ..19 Research Questions............................................................................................................. ....20 Variables...................................................................................................................... ...........21 Dependent Variables.......................................................................................................21 Independent Variables.....................................................................................................22 Limitations.................................................................................................................... ..........24 2 REVIEW OF RESEARCH.....................................................................................................26 Language Competence............................................................................................................26 Assessment of Language Competence...................................................................................29 Recent Research in Language Testing....................................................................................32 Locus of Control............................................................................................................... ......34 Generation 1.5................................................................................................................. ........36 Differences and difficulties.............................................................................................37 Program placement fo r generation 1.5............................................................................39 The Need for Better Placement Procedures............................................................................41 Program Models................................................................................................................. .....45 Current Placement Practices...................................................................................................50 Pilot Study.................................................................................................................... ..........52 The LOEP and Its Recommended Use...................................................................................55 Holistically Scored Essays..................................................................................................... 
.57 3 Materials and Methods.......................................................................................................... .58 Introduction................................................................................................................... ..........58 Study Setting.................................................................................................................. .........58 Participants................................................................................................................... ..........59 Materials and Data Collection................................................................................................59 Procedures..................................................................................................................... ..........60

PAGE 6

6 4 RESULTS........................................................................................................................ .......64 Survey Results................................................................................................................. .......64 Question 1..................................................................................................................... ..........66 Student opinions..............................................................................................................66 Teacher opinions.............................................................................................................66 Analysis of open-ended teacher responses......................................................................67 Advice......................................................................................................................67 Speech courses.........................................................................................................68 Misplacement...........................................................................................................69 Generation 1.5..........................................................................................................70 Teaching problems...................................................................................................71 LOEP........................................................................................................................72 Placement in general................................................................................................73 Other placement comments......................................................................................74 Questions 2.................................................................................................................. .......75 Study 1........................................................................................................................ .....75 Study 2........................................................................................................................ .....85 5 DISCUSSION..................................................................................................................... ....89 Valencias EAP Population....................................................................................................89 Student and Teacher Beliefs about Placement........................................................................90 The Preferred Approach.........................................................................................................93 Reading Subtest Preferred......................................................................................................96 Predicting Evaluation of Placement........................................................................................99 LOC and Generation 1.5.......................................................................................................100 6 CONCLUSIONS AND RECOMMENDATIONS...............................................................104 Recommendations for Testing and Placement.....................................................................105 Other Recommendations......................................................................................................106 APPENDIX A LOEP Essay Rubric............................................................................................................ 
..108 B Survey Instruments Used in the Study..................................................................................109 Letter to Instructors.......................................................................................................... .....109 Informed Consent: Instructor................................................................................................110 Instructor Survey.............................................................................................................. ....111 Generation 1.5 Survey..........................................................................................................113 Letter to Students............................................................................................................. .....114 Informed Consent: Student...................................................................................................115 Student Survey................................................................................................................. .....116 Survey Script.................................................................................................................. ......119

PAGE 7

7 C Complete Results of Student/Teacher Surveys.....................................................................120 LIST OF REFERENCES.............................................................................................................130 BIOGRAPHICAL SKETCH.......................................................................................................141

PAGE 8

8 LIST OF TABLES Table page 2-1 Number of Florida community coll eges employing common practices in the placement of ESL students with many colleges using multiple practices.........................42 2-2 Explanation and an example for placing students into EAP courses at Valencia..............54 4-1 Means, standard deviations, and correla tions for final course grades and predictor variables...................................................................................................................... .......76 4-2 Summary performance of all competing models in study 1..............................................77 4-3 EAP 0300: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................78 4-4 EAP 0320: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................78 4-5 EAP 0340: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................79 4-6 EAP 0400: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................80 4-7 EAP 0420: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................81 4-8 EAP 0440: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................81 4-9 EAP 1500: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................82 4-10 EAP 1520: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................83 4-11 EAP 1620: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................84 4-12 EAP 1640: Summary of simultaneous mu ltiple regression analyses for models predicting successful placement as measured by final course grades................................85 4-13 Means, standard deviations, and intercor relations for teacher evaluation of placement and predictor variables for first semester survey respondents...........................................87 4-14 Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by teacher evaluation of placement............................88

PAGE 9

9 5-1 Post-hoc results comparing the 3 origin al models with each subtest added as a competing model................................................................................................................97 5-2 Summary of simultaneous multiple regression analyses for LOCSCAL & GN15CMPT at predicting successful placem ent as measured by teacher evaluation of placement................................................................................................................... ..100 5-3 Percentages of students in each course failing to answer questions on the Trice LOC Index.......................................................................................................................... ......101 C-1 Native languages of EAP placement survey respondents listed by campus....................120 C-2 Countries of origin for EAP placemen t survey respondents listed by campus................121 C-3 Gender of EAP placement surv ey respondents listed by campus....................................123 C-4 Semester of enrollment for EAP placement survey respondents listed by campus.........123 C-5 Ages of EAP placement survey respondents listed by campus.......................................123 C-6 Number of Years in the U.S. for pl acement survey respondents listed by campus.........125 C-7 Survey responses to, What year di d you graduate from high school/earn GED? listed by campus...............................................................................................................126 C-8 Survey responses to, Are you the first person in your family to go to college?...........127 C-9 Survey responses to, If you werent the first person in your family to go to college, who was?...................................................................................................................... ..127 C-10 Survey responses to, Are you the first pe rson in your family to go to college in the U.S.?......................................................................................................................... ......128 C-11 Survey responses to, If you werent the first person in your family to go to college, who was?...................................................................................................................... ..128 C-12 Survey responses to, Have you gone to college outside of the United States?............129 C-13 Number of years spent in college outside of the U.S. by placement survey respondents listed by campus...........................................................................................129

PAGE 10

10 LIST OF FIGURES Figure page 2-1 Upshurs simple program model: This figure illustrates the simple program model discussed by Upshur (1973). Students en ter, receive instruction, and leave.....................45 2-2 Upshurs complex model: This figure illustrates the most complex program model discussed by Upshur (1973). It is an ex ample of a multi-level program with several types of decisions being made...........................................................................................47 2-3 Possible improved model: This model illu strates the decision stages and types of tests that could be avoided by enha ncing placement practices at VCC.............................49

PAGE 11

11 LIST OF ABBREVIATIONS ESL English as a Second Language EAP English as a Second Language for Academic Purposes, usually refers to ESL at the college level. Generation 1.5 Generation 1.5 stude nts usually have come to the United States in their early teen or pre-teen years. They have often attended U.S. schools, and many of these students have even gr aduated from American high schools. While attending American schools, these students have had time to acquire informal English. Many of them use American idiomatic expressions, and some may even have American accents. Errors in their language are detectable, but they do not interfere with understanding, and these students are comfortable speaki ng English and do so with relative ease. Their reading, grammar, and writing skills on the other hand are usually behind that of their college -ready peers. They are not what you may consider true ESL students, but they are not true native English speaking students either. LOEP Levels of English Proficiency Te st. Used to refer to the ACCUPLACER ESL battery of tests.

PAGE 12

12 Abstract of Dissertation Pres ented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Education ANALYZING THE PLACEMENT OF COMMUNITY COLLEGE STUDENTS IN ENGLISH AS A SECOND LANGUAGE FOR ACADEMIC PURPOSES (EAP) COURSES By James S. May August 2007 Chair: Candace Harper Major: Curriculum and Instruction (ISC) The intention of this research was to increa se the effectiveness of student placement tools and strategies used by community colleges to place nonnative English speakers into courses designed to teach English for future academic pursu its. More specifically, this research sought to analyze and improve placement practices at Vale ncia Community College in Orlando, Florida by identifying placement variables that best pred icted success in various English as a second language (ESL) for Academic Purposes (EAP) courses. Locus of Control scale scores, a computed indicator of Generation 1.5 status and results from four subtests of the ACCUPLACER Levels of English Placement (LOEP) Test were tested individually and within composite models for their ability to predict su ccess as measured by final course grades and teacher evaluations of placement. These variable s were tested for their ability to predict successful placement of first semester, self-i dentified nonnative English speakers into ESL classes covering four different skills (r eading, writing, speech, and grammar) across five different levels of possible placement (EAP levels 2). Results indicated th at the reading subtest was the best predictor of student final course gr ades. The essay subtest was the best predictor of teacher evaluation of placement, and individual subtests were preferred over composite models. Furthermore, both Locus of Control and the comp uted indicator of Generation 1.5 status were

found to be correlates of student success. Additional recommendations are suggested for how to improve placement practices.

CHAPTER 1
INTRODUCTION

Impetus for the Study

There has been a long-standing need for accurate placement of nonnative speakers of English into academic programs. Program administrators and faculty have been concerned about this need for more than 50 years, as evidenced by Gibian (1951) and Ives (1953). The goals of admitting and placing students accurately into programs designed to develop their language skills to a level commensurate with their college-ready peers have led to a variety of testing instruments and English as a Second Language (ESL) programs nationwide. Unfortunately, many of the tests used by colleges and schools to place students were not originally designed for this purpose. For admissions purposes, most schools use some form of language proficiency test, the most common being the TOEFL (Test of English as a Foreign Language). Although the TOEFL, much like the SAT (Scholastic Aptitude Test) and the ACT (American College Test), is a moderate predictor of overall success in college, it is not designed to serve as a placement tool. Many schools also employ other language proficiency tests for placement purposes, like the CELSA (Combined English Language Skills Assessment), the CELT (Comprehensive English Language Test), the Accuplacer LOEP (Levels of English Proficiency), and the Accuplacer CPT (College Placement Test). However, these tests are designed to assess proficiency and do not align language skills with individual course objectives and outcomes. More accurate placement would result from the analysis of test results and survey data designed to identify student characteristics aligned with course objectives and outcomes. This concept is supported by The College Board and Accuplacer developers:

"ACCUPLACER tests are designed to assist institutions in placing students into appropriate courses. Given that institutions differ greatly with respect to composition of the student

body, faculty, and course content, each institution should establish their own cut scores to facilitate placement decisions based on factors and data unique to their institution." (College Board, 2003)

However, most colleges do not have the budget required to administer a large battery of surveys and tests or to hire individuals who can identify student goals, prior experiences, background knowledge, and abilities. Therefore, many schools simply rely on a few multiple choice tests to make placement decisions.

Another major problem with the current use of placement tests is the way they are administered. Students often take placement tests during the application or admissions process. For many students, this is a long day where they are required to stand in lines for lengthy periods of time. By the time they are given their placement tests, many students are frustrated, tired, hungry, or simply more interested in getting back to work or play than taking tests. Therefore, results on these tests may not be accurate representations of the true abilities of these students.

Another confounding factor is that many students feel pressured to take placement tests. Students are often unaware of established application deadlines and arrive late expecting to enroll in classes quickly. Rather than waiting and applying for a subsequent semester, they often feel an urgent need to complete prerequisite testing regardless of how it might affect placement. This pressure, in conjunction with disinterest in the test, presents a problem because many students are unaware that their results on these tests will affect their future placement into courses.

For nonnative English speaking populations, these problems are compounded in that nonnative speakers often have trouble understanding instructions and the processes involved in testing. These students may also be limited by time. Some international students are simply traveling in the United States visiting programs for which they may later seek student visas. They may not have the language skills to explain why they would rather come back and take a

placement test on another occasion. And, nonnative English speakers may end up taking multiple tests in a single day because schools often require one exam for admission and another for placement into programs.

Due to both an increase in ESL student populations and the inability of existing ESL programs to adequately prepare these students for college courses, post-secondary institutions around the U.S. have been developing multileveled preparatory programs for nonnative speakers of English. These programs are often significantly more complex than the typical two-level preparatory programs for native English-speaking high school graduates needing academic assistance in their transition to post-secondary education. For example, each community college in the state of Florida has been able to develop a program that best suits its unique needs. Given different populations and different needs, schools across the state of Florida have adopted different programs offering different courses and therefore requiring different placement criteria. Due to budget constraints, timing issues, and lack of personnel, institutions have been forced to simply do their best in developing programs that provide for the needs of their diverse populations.

A consortium of concerned ESL/EAP professionals has been working to address problems inherent in the state's EAP programs: the variety of needs present, exit testing, placement testing, and the special needs of Generation 1.5 students. Generation 1.5 is a term coined by Rumbaut and Ima (1988). It refers to students from non-English speaking backgrounds who typically have lived in the U.S. for some time, have aural/oral competence in English that is near native, but read and write at a level below average in English. (For more information on Generation 1.5 students, see Chapter 2.)

Although the consortium of ESL/EAP professionals is making strides to address many of the issues listed above, it has not been able to address and solve all of the problems. For

example, even though there are relatively standard courses across EAP programs, there are no common placement exams for entrance into these courses. In addition, even schools that use the same placement exams and offer the same courses have developed different cut-scores, i.e., scores that are used to make decisions about the level of a program in which a student will be placed. At a consortium meeting in June of 2004 held at Valencia Community College (VCC), it was discovered that none of the 12 colleges present had empirically addressed the troublesome issue of placement, and to this date, little has changed. However, currently the consortium is entering Phase 3 of a statewide Council on Instructional Affairs (CIA) initiative to standardize how EAP students are placed in community college courses within the state. To facilitate this initiative, on February 9, 2007, the consortium chair presented recommendations to the CIA for changes to existing statutes. One recommendation was to adjust state statutes and administrative rules so that programs could elect to offer institutional credit for EAP courses. Another recommendation was that the administrative rules on college-level testing and placement be amended to read, "Prior to the completion of registration, the EAP student's language proficiency shall be assessed by the College Board Accuplacer LOEP or the ACT Compass ESL."

Members of the consortium have called for any research that could assist them in developing more accurate placement that would allow for a stronger match between student abilities and goals and the courses they require to achieve those goals.

Students are best placed into courses that challenge them but allow them to earn passing grades and achieve acceptable levels of understanding. Students are misplaced in courses that are either too difficult or too easy to provide any meaningful challenges. At many schools, students are often placed above or below their levels of ability, which may lead to high stress or extreme

boredom. This stress and boredom can in turn lead to low attendance, disruptive behavior in the classroom, failure to achieve, and poor grades. Students who feel high levels of stress may also not see success as a possibility, which can lead them to withdraw from a course or program. This is often the case with Generation 1.5 students.

Given the anecdotal and observed evidence identifying the inadequacy of current placement practices for EAP students at Valencia (see Chapter 2), there is a clear need to identify a more meaningful, comprehensive, and efficient system for placing students. This system should take into account student background, both personal and academic, student abilities, and any other factors that can be identified to affect placement (i.e., level of education and literacy in the native language, age, motivation, etc.). In order to do this, a placement system needs to be established that could collect multiple sources of information through the use of survey and test results.

However, identifying optimum placement is a challenging endeavor. The criteria that are most often used to judge effective placement are frequently influenced by variables other than student ability and performance. Teacher beliefs about students' true abilities are affected by attendance, participation, and student attitudes. Final course grades are also influenced by these same teacher beliefs and other factors like social promotion. In other words, relying solely on teacher judgment as a variable may tell us more about student/teacher relationships and teacher perception of student attitudes than it does about student abilities. However, using only final course grades might be too limited because it fails to account for students who lack skills or fail to complete work. These students are then passed on because teachers are afraid to have high numbers of failing students or simply do not wish to see those same students again the following semester. Therefore, in this study

both final course grade and teacher evaluation of placement were used to identify variables that best predict student success.

Purpose and Rationale

The intention of this research was to increase the effectiveness of student placement tools and strategies used by community colleges to place nonnative English speakers into courses designed to teach English for future academic pursuits. More specifically, this research sought to identify variables that predicted the successful placement of second language students into Valencia's EAP program. It was assumed that identifying effective predictors would provide the researcher and other decision makers with the necessary information to make decisions about existing placement mechanisms. It would also inform other institutions as to best practices.

In order to prepare students for college courses, many community colleges have developmental, or preparatory, reading and writing programs. These programs usually offer two levels of each skill prior to admitting students to the first level of college composition. These developmental courses are designed for native English speakers who do not have the reading and writing skills necessary for college courses. In contrast to programs designed for native English speakers, the EAP program at Valencia includes fifteen (15) courses spanning five (5) levels, covering up to four (4) different language skills per level. It should be mentioned here that EAP programs in the State of Florida can have up to 6 ability levels. However, faculty members at Valencia decided that allowing students with language proficiencies below the cut-off at level 2 might negatively impact Adult Education programs at the county level. Therefore, students demonstrating language proficiencies below level 2 are sent to Orange County for adult basic education. Given the large span of Valencia's 15-course EAP program, accurate placement of students is a significant concern. Students are currently placed into this matrix of courses based on an established formula for averaging scores from four parts of a placement test: one

holistically graded essay and three objectively graded subtests (for more information, see Chapter 2).

In short, students currently take and receive scores for three objectively graded LOEP subtests (Reading, Sentence Meaning, and Language Use). The three objective scores are then averaged. This averaged score is then averaged with a number derived from the student's holistically scored essay, and the result is used to place students. This, however, has not always been the method of placement at Valencia. From 2001 through 2003, essays were not used. The decision to not use essays was in large part due to requests by counseling and admissions personnel who wanted the ability to register, test, and assign students to classes in one day. EAP teachers agreed to a trial period, but as they noted more instances of misplacement of students, they asked to have essays reinstated. Prior to reinstating the reading of essays for placement, students were placed simply by the average of the three objectively graded subtests. Some believe Valencia should return to this older process; others believe the college should use individual subtest scores to place students in skills courses at different levels.

Research Questions

In addition to identifying descriptive information about Valencia's EAP, this research sought to identify the most effective aspects among several approaches to placing students into EAP courses at Valencia by finding answers to the following questions:

1. What are the student and teacher beliefs about placement at Valencia?

2. Which of the following three approaches best predicts student success as measured by final course grades and teacher evaluation of placement?

Model 1: Averaging the three objectively scored LOEP subtests without the essay
Model 2: Using an equally weighted average of both the objectively scored LOEP subtests and the Essay test
Model 3: Using the four LOEP subtests as individual predictors

3. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by final course grades?

4. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of placement in EAP courses as measured by final course grades and teacher evaluation of placement?

Variables

In addition to describing Valencia's EAP student population and eliciting both quantitative and descriptive feedback about placement at Valencia, this research employed multiple regression analyses to test the predictive abilities of a variety of independent variables on two different dependent variables.

Dependent Variables

Final Course Grades: Each course name (i.e., EAP 1640) represents the final course grades that students received in each course. Final grades have been assigned the following point values: A=4, B=3, C=2, D=1, and F=0. Each course number also gives information about the course. The first number, either a 0 or a 1, represents whether or not a course counts for credit (1 indicates college credit). The second number, 2 through 6, indicates the level of a course, 6 indicating the highest skill level. The final two numbers indicate the type of course: 00 = Speech, 20 = Reading, 40 = Writing, 60 = Grammar, and 81 = Combined Skills.

Teacher Evaluation of Placement: During the summer of 2006, teachers were asked to rate the placement of their current EAP students. They were asked to identify students as either well placed or to recommend an alternate placement level. Using results from instructor surveys, students were placed into 7 different levels. Levels 2-6 correspond with the levels

offered at Valencia, and levels 1 and 7 signify the teacher belief that a student is either under- or over-prepared for Valencia's program.

Independent Variables

Locus of Control (LOCSCAL): This term stands for Locus of Control scale score. During the summer of 2006, 470 students took the Trice Locus of Control inventory as part of their student surveys. Scores ranged between 0 and 28.

Generation 1.5 (GN15CMPT): This term is a computed indicator of Generation 1.5 status. An attempt was made to validate a survey measure of the variable through correlation with the teacher judgment of the construct. During the summer of 2006, students and teachers took surveys. Students were asked a variety of questions thought to relate to the construct as discussed in the literature on Generation 1.5 students (see Chapter 2). Teachers were asked to identify students in their courses who they thought were members of Generation 1.5. Intercorrelations between student survey responses and teacher ratings of Generation 1.5 status as measured in the instructor surveys revealed that three survey questions demonstrated small to medium correlations (Cohen, 1988) with teacher evaluation of Generation 1.5 status: the grade in which students started school in the U.S. (r = -.35, p < .001), students' age (r = -.26, p < .001), and whether a student had gone to college outside the U.S. (r = -.26, p < .001). The formula for this indicator identified students as Generation 1.5 if they met two of the following three criteria: 1) if the student started school in the U.S. before 10th grade, 2) if the student was younger than 20, and 3) if the student had not gone to college in another country. This variable is moderately correlated with teacher identification of 1.5 status (r = .40, p < .001).

LOEP Reading (LORC): This term stands for the Levels of English Proficiency reading test score, a score between 1 and 120. On this test, students read passages of 50 to 90 words and then answer questions about their reading. They may read about the arts, science, or history.

Half of the test questions ask about information that is stated in the passage. The other half asks students to identify the main ideas, fact vs. opinion, or the author's point of view.

LOEP Language Use (LOLU): This term stands for the Levels of English Proficiency language use test score, a score between 1 and 120. This test is designed to measure the students' understanding of English vocabulary. The sentences come from a variety of different subject areas. Students are asked questions about basic and important idioms, particularly terms of beauty, age, greatness, and size; adverbs such as before, after, and during; and prepositions of direction and place.

LOEP Sentence Meaning (LOSM): This term stands for the Levels of English Proficiency Sentence Meaning test score, a score between 1 and 120. Students are asked to fill in a blank with a word or phrase, or to combine two sentences. The skills covered are writing skills, including the proper use of nouns and verbs.

LOEP Essay (LOES): This term stands for the Levels of English Proficiency Essay test score, a score between 1 and 120. Students have 60 minutes to write an essay on a topic provided by the administrator. Students are asked to organize their ideas carefully and to present them in more than one paragraph. Trained readers at Valencia read each student's essay and rate it from 1 to 7. This number is then multiplied by 10 and added to the number 50.

LOEP Average (LOEPAVG): This term stands for the computed average of the three objectively scored LOEP subtests: LORC, LOLU, and LOSM. This represents the method used to place students during the two-year period that essays were not read.

LOEP Average with Essay (LPAVGWE): This is the current methodology used at Valencia Community College to place students. It is a composite score derived from the average of LOEPAVG and LOES. (The arithmetic behind these computed variables is illustrated in the sketch at the end of this chapter.)

Limitations

A primary limitation of all studies identifying the predictive characteristics of tests has been summarized by the College Board (2003): "There is no perfect measure of appropriate course placement. There are a variety of reasons why students may not do well in courses including prior experience, motivation and background knowledge."

Another limitation of studies such as the current study is that the generalizations are somewhat restricted due to institution-specific data. Unless other institutions have matching populations and offer similar courses, the transferability of findings is limited. In this study, the demographics of the population and course descriptions are reported in detail.

Another limitation deals with regression analyses in general. In regression analyses, Schmidt (1971) suggested minimum case-to-predictor ratios ranging in value from 15-to-1 to 25-to-1. Using these ratios, this research would require anywhere from 15 to 100 students per regression. Nunnally (1978) stated that if there are only two or three independent variables and no pre-selection is made among them, 100 or more subjects would provide a multiple correlation with little bias. Harris (1975) recommended that the number of subjects be greater than 50 plus the number of predictor variables.

Because it was unknown how many first semester EAP students would be taking courses during the time period the surveys were being administered, it was not possible to identify whether or not this research would have enough students to carry out reliable regression analyses. Therefore, two studies were proposed. The first would use LOEP subtest and final course grade data gathered on first semester EAP students attending Valencia for the three

academic years prior to the study. The second study would use data gathered from first-time EAP students responding to the surveys during the summer of 2006. As it turned out, only 131 first semester students were in the survey group, which would have led to critically low numbers in each course.
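To make the arithmetic behind the computed variables and the sample-size guidelines described in this chapter concrete, the following sketch works through them on an invented student record. It is an illustration only, not code from the study: the function names, argument names, and sample scores are all hypothetical, and Model 3 is omitted because it simply enters the four raw subtest scores as separate predictors.

# Illustrative sketch only (not the study's code); all names and values are invented.

def grade_points(letter):
    # Point values assigned to final course grades in this study.
    return {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}[letter]

def essay_score(holistic_rating):
    # LOES: a trained reader's 1-7 rating, multiplied by 10 and added to 50.
    return holistic_rating * 10 + 50

def loep_average(lorc, lolu, losm):
    # Model 1 (LOEPAVG): average of the three objectively scored subtests.
    return (lorc + lolu + losm) / 3

def loep_average_with_essay(lorc, lolu, losm, loes):
    # Model 2 (LPAVGWE): the objective-subtest average averaged with the essay score.
    return (loep_average(lorc, lolu, losm) + loes) / 2

def generation_1_5(started_us_school_before_10th, age, attended_college_abroad):
    # GN15CMPT: flagged as Generation 1.5 when at least two of the three criteria hold.
    criteria = [started_us_school_before_10th, age < 20, not attended_college_abroad]
    return sum(criteria) >= 2

def harris_minimum_n(n_predictors):
    # Harris's (1975) rule of thumb: the number of subjects should exceed this value.
    return 50 + n_predictors

# A hypothetical first-semester student:
lorc, lolu, losm = 88, 75, 82                                      # objective subtests (1-120)
loes = essay_score(5)                                              # reader rating of 5 -> 100
print(round(loep_average(lorc, lolu, losm), 1))                    # 81.7
print(round(loep_average_with_essay(lorc, lolu, losm, loes), 1))   # 90.8
print(generation_1_5(True, 19, False))                             # True: meets all three criteria
print(grade_points("B"))                                           # 3
print(harris_minimum_n(4))                                         # 54: exceed this for a four-predictor regression

Under this formulation, Models 1 and 2 each collapse several subtests into a single composite before placement, whereas Model 3 lets each skill score enter the prediction on its own; this is exactly the comparison posed by the second research question.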

CHAPTER 2
REVIEW OF RESEARCH

Anecdotal reports from EAP instructors at Valencia have indicated that current placement practices are not effective. If evidence can be established that teachers' reports are valid and placement is ineffective, adjustments to placement procedures could be made now, before Valencia begins to experience problems related to the forecasted growth of ESL populations. The Bureau of Economic and Business Research at the University of Florida projects a 10% increase in Spanish speaking populations in Orange and Osceola counties (Valencia's feeder counties) between 2005 and 2015. This population is expected to increase 17% by 2030 (BEBR, 2005). However, identifying valid and reliable placement mechanisms for placing students into language programs depends on properly operationalizing the construct or constructs being tested.

Language Competence

What exactly does it mean to be competent or fluent in the use of a second language, and how is competence determined? Structural linguists from the 1940s and 1950s often viewed language positivistically as a formal code that could be analyzed, taught, and tested. From their perspective, discrete point tests could distinguish competent use of language. Views began to change, however, in the late 1950s with Chomsky's (1957) distinction between competence and performance. The concept was further analyzed by Troike (1969) as receptive and productive competence and performance. Beliefs about language have continued to evolve over the past quarter of a century through the works of individuals such as Dell Hymes (1972). Hymes introduced the concept of communicative competence and argued that speakers of a language have to have more than grammatical competence in order to be able to communicate effectively in a language. Speakers also need to know how language is used by members of a speech community to accomplish their purposes. These beliefs have been expanded and have evolved

further through the works of Oller (1979), Canale and Swain (1980), Bachman (1990), and others.

Although Hymes introduced the concept of communicative competence, the work conducted by Canale and Swain (1980) became canon in applied linguistics. According to Canale and Swain (1980), communicative competence consists of four components: grammatical competence (sentence structure/syntax), sociolinguistic competence (appropriateness of language use), discourse competence (cohesion and coherence), and strategic competence (use of communication strategies). A more recent study of communicative competence by Bachman (1990) further expands the concept. In this view, communicative competence can be divided into two aspects: linguistic competencies, which include phonology and orthography, grammar, vocabulary, and discourse; and pragmatic competencies, which include functional, sociolinguistic, interactional, and cultural competence.

In Bachman's view, the linguistic aspects of communicative competence are those involved in achieving an internalized functional knowledge of the elements of a language. Individuals who have phonological competence have the ability to produce the distinctive and meaningful sounds of a language: consonants, vowels, tone and intonation patterns, rhythm patterns, stress patterns, and all other suprasegmental features that carry meaning. Orthographic competence is closely related to phonological competence; however, orthographic competence describes an individual's ability to decipher and encode the writing system of a language. An individual with grammatical competence has the ability to recognize and produce the distinctive grammatical structures of a language and to use them effectively in communication. Individuals with lexical competence have the ability to recognize and use words in a language in ways similar to those of native speakers. This includes understanding the different

relationships among families of words, idiomatic (non-literal) expressions, and the common collocations of words. Individuals with discourse competence have the ability to understand and construct oral and written messages from various genres: narrative, expository, persuasive, and descriptive. These individuals understand that different genres have different characteristics that help maintain coherence and perform various functions.

In Bachman's view, the pragmatic aspects of communicative competence are those that have to do with how language is used in communication situations to achieve the speaker's purposes. Individuals with functional competence have the ability to accomplish communicative purposes in a language, like greeting people or requesting assistance or information. Individuals with sociolinguistic competence have the ability to interpret the social context of linguistic utterances and to use language in socially appropriate ways for any communication situation. An individual with interactional competence knows how to interpret and apply the unwritten rules for interaction in various communication situations within a given speech community and culture. These individuals can initiate and manage conversations and negotiate meaning with other people while paying specific attention to body language, eye contact, and proximity. Individuals with cultural competence have the ability to understand behavior from the standpoint of the members of a culture and to behave in a way that would be understood by the members of the culture in the intended way. In other words, these individuals use language appropriate to the social structure of a culture and the values, assumptions, and beliefs of the people.

Although Accuplacer LOEP developers do not describe or conceptualize the language of their tests in terms of Bachman's views of competence, the LOEP subtests currently offered at VCC do tend to focus more on the linguistic rather than pragmatic aspects of communicative competence. For example, because the test is written, test takers need some level of orthographic

competence to decipher and encode the writing system of English. Knowledge of grammar and vocabulary is tested in that students are required to recognize and produce the distinctive grammatical structures of English and to use them effectively in communication. In addition, students need to recognize and use words in English in ways that are similar to the ways native English speakers use them. One could argue that knowledge of discourse is another underlying competency assessed in both the reading and writing tests because understanding of rhetorical patterns would enhance students' abilities to interpret reading passages and organize essays.

Phonology and the pragmatic competencies, on the other hand, are not as readily identifiable in the LOEP tests currently used at Valencia. Because Valencia does not give the LOEP Listening test, students are not assessed on their ability to interpret and produce the distinctive and meaningful sounds of English. This could be a major weakness considering the fact that these LOEP tests are used to place EAP students into EAP Speech courses. In addition to phonology, the LOEP Listening subtest could also add information about students' pragmatic competencies; the test purports to measure "the ability to listen to and understand one or more people speaking in English. The conversations take place in academic environments such as lecture halls, study sessions, a computer lab, the library, the gymnasium, and the like; and in everyday environments such as at home, shopping, at a restaurant, at a dentist's office, listening to the radio, reading the newspaper, and performing tasks at work." (College Board, 2004)

Assessment of Language Competence

Bachman's expanded view of communicative competence may give a glimpse at why assessment of competence is so difficult. It is difficult enough to develop tests of discrete skills of language, and performance of discrete skills is not necessarily an accurate indicator of competence in language. Assessment of more integrative aspects of language is even more complex.

According to the American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education (1999):

"For all test takers, a test that employs language is, in part, a measure of their language skills. This is of particular concern for test takers whose first language is not the language of the test. Test use with individuals who have not sufficiently acquired the language of the test may introduce construct-irrelevant components to the testing process. In such instances, test results may not reflect accurately the qualities and competencies intended to be measured. Therefore it is important to consider language background in developing, selecting, and administering tests and in interpreting test performance." (p. 91)

Unfortunately, not all institutions have completely understood the importance of language background and its effects on testing. Ortiz and Yates (1983) showed that Hispanic students were over-represented by 300% in classes for the mentally retarded. Oller (1992) was not surprised by this and added that this type of misdiagnosis may continue to go unnoticed due to what he calls "monolingual myopia," which he contends has been prevalent for more than a century and still pervades the American educational scene. In a study of within-group diversity in the disproportionate representation of ELL students in Special Education (Artiles, Rueda, Salazar, & Higareda, 2005), it was found that ELLs identified by districts as having limited proficiency in both their native language (L1) and English (L2) showed the highest rates of identification in the Special Education categories investigated. These students were consistently overrepresented in learning disabilities and language and speech disabilities classes. Furthermore, these students had greater chances of being placed in Special Education programs. Other research has demonstrated how ELLs are negatively affected by content-based assessment measures (Abedi, 2006; Abedi, Lord, Hofstetter & Baker, 2000; Abedi, Lord & Hofstetter, 1998; Abedi, Lord, Kim-Boscardin, & Miyoshi, 2000; Abedi, Lord, & Plummer, 1997).

Because tests have been shown to misdiagnose second language students, one begins to wonder if they can be used to accurately place these same students or predict their success in certain courses. Some would say no. In 1990, Goodman, Freed, and McManus found that even

the Modern Language Aptitude Test (MLAT) was not an accurate predictor of success in foreign language courses. They speculated that perhaps the failure of this test was the result of the fact that the test measured discrete points of language ability but that language teaching was moving toward integrative, holistic approaches to language development.

In 1961, John Carroll suggested a distinction in language testing between discrete point and integrative approaches. With his Unitary Trait Hypothesis, Oller (1979) posited that language proficiency consisted of a single unitary ability. Oller himself later disconfirmed aspects of his hypothesis, recognizing that "the strongest form of the unitary trait hypothesis was wrong" (Oller, 1983). Some have contended that discrete point methods are either better or at least equivalent to integrative methods (Rand, 1972); however, tests of discrete points such as syntactic rules have been shown to generate reliabilities in the range of .6 to .7 (Evola, Mamer & Lentz, 1980) while tests that are more integrative in nature generate reliabilities of .8 to .9 (Oller, 1972). In this light, how can the support for unitary language ability found in Oller and Perkins (1978, 1980), Oller (1992), and Carroll (1983) be explained? Does language testing, discrete or integrated, reveal information about a single trait or divisible competence? Oller (1992) posited that it is illogical to argue that tests that focus on particular rules of grammar will yield equivalent results to tests that require integrated whole grammars.

There is support for each side of the argument as to which hypothesis is correct, unitary language ability or divisible competence. Support for divisible competence includes Bachman and Palmer (1983); Farhady (1983); Fouly, Bachman, and Cziko (1990); and Upshur and Homburg (1983). Support for unitary language includes Oller and Perkins (1978, 1980); Oller (1992); and Carroll (1983). However, Carroll might have summarized things best: "With respect to a unitary language ability hypothesis or a divisible competence hypothesis, I have always assumed that the answer is somewhere in between. That is, I have assumed

there is general language ability but, at the same time, that language skills have some tendency to be developed and specialized to different degrees, or at different rates, so that different language skills can be separately recognized and measured." (p. 82)

Summarizing the research of Lynch, Davidson, and Henning (1988) and Oltman, Stricker, and Barrows (1990), Oller (1992) suggests that in the early stages of second language learning distinct dimensions of listening, writing, and reading ability may be observed and may even resolve into further sub-component traits, but as learners progress to more mature, native-like abilities in the target language, all factors tend to converge on one unitary trait.

Recent Research in Language Testing

Recent research into the predictive abilities of language testing has not yielded the most informative results. Lee (2005) asked the question, "To what extent does the CEEPT (Computerized Enhanced ESL Placement Test) predict international graduate students' academic performance?" In his study, CEEPT scores were converted to level ratings, and these levels were correlated with GPA using Pearson product-moment correlation coefficients. Lee found a correlation coefficient of .052 for the overall sample between CEEPT scores and first semester GPA. However, the direction and magnitude of correlation varied depending on the discipline. For language oriented disciplines there were positive relationships, such as Business (r = .275) and Humanities (r = .35). In contrast, there were negative relationships for non-language oriented disciplines, such as the Life Sciences (r = -.548) and Technology (r = -.213). Unfortunately, although Lee discussed how his qualitative data complemented his results, he failed to offer an explanation for the positive and negative correlations.

Research on the strengths of the relationships between self-assessment and language test scores or abilities is mixed. Some studies have found moderate to strong relationships (Bachman & Palmer, 1989; Clark, 1981; Coombe, 1992; Leblanc & Painchaud, 1985). However, others have found weak relationships (Blanche & Merino, 1989; Moritz, 1995; Peirce, Swain & Hart,

1993). A meta-analysis of the validity of self-assessment as a means to predict future language performance by Ross in 1998 found weak to moderate links at best. In another study looking at second language placement testing, Phakiti (2005) looked at test takers' ability to predict success based on answers to questions on an English placement test designed by the English Language Institute at the University of Michigan. Phakiti's study found that, in general, participants tended to be overconfident in their test performance. He believes that overconfident test takers possibly stop engaging in cognitive tasks prematurely and under-confident test takers spend too much time on a task that has already been successfully completed. Phakiti found that beginners exhibited the poorest predictive ability, which supports Blanche and Merino's (1989) findings. This could be further supporting evidence of distinct differences between beginners and more advanced students on placement tests. Research investigating these differences could support or refute Oller's (1992) suggestion that in the early stages of second language learning, distinct dimensions of listening, writing, and reading ability may be observed and may even resolve into further sub-component traits, but as learners progress to more mature, native-like abilities in the target language, all factors tend to converge on one unitary trait.

Assessment for placement is a challenging endeavor. Developing and standardizing tests that can accurately place students into a course, or a matrix of courses, is difficult at best. This can be seen by very low correlations between placement test scores and student achievement in courses. For example, the state of California requires that placement tests maintain at least a 0.35 correlation with course grades (College of the Canyons, 1994). The College of the Canyons also noted that it was not reasonable to expect placement tests to be very strongly related to course grades. Spann-Kirk (1991) concluded that students placed by advisors instead of placement tests achieved to the same degree. Smith (1983) came to the conclusion that student scores on

placement tests were less significant in placing students than high school grade point average, credit hours completed during a term, and age. Simply put, placement test scores alone may not be the most effective way of placing students. There is therefore a clear need for additional information to help in the placement decision-making process. In 1997, Culbertson found that multivariate prediction models could be used to increase the predictability of placement. As a result, this study also seeks to identify characteristics that can be used in this decision-making process.

It has been suggested that learning a second language is different in many ways from learning other subjects because, in addition to the learnable aspects, it is also socially and culturally bound, which makes it a social event (Dornyei, 2003). The social dimension may explain why the study of L2 motivation was originally initiated by social psychologists. In the first comprehensive summary of L2 motivation, Gardner and Lambert (1972) considered the motivation to learn a second language the primary force behind enhancing or hindering intercultural communication and affiliation. Gardner's theory of integrative motivation laid the groundwork for other theories: self-determination theory (Deci and Ryan, 1985), goal theory (Tremblay and Gardner, 1995), and attribution theory (Weiner, 1992). According to Weiner, the attributions of motivation are generally described in three dimensions: (a) Locus, (b) Stability, and (c) Controllability.

Locus of Control

One variable that may offer important information about students is Locus of Control (LOC). LOC is a psychological construct developed from the social learning theory of Julian Rotter (1966) which refers to a generalized expectancy that people have regarding the degree to which they can control their own fate. LOC is concerned with whether individuals attribute the cause of something internally or externally. Individuals with an internal LOC believe that their

behavior is directly responsible for specific outcomes; internals believe they are the cause of the outcomes. By contrast, individuals with an external LOC believe that their behavior and the consequences are independent; they believe that events are controlled by luck, fate, chance, or powerful others. Research findings have been quite consistent over the years, suggesting that students with an internal LOC were more likely to be successful in education than students with an external LOC (Ford, 1994; Lao, 1980; Ogden & Trice, 1986; Park, 1998; Shepard, 2006; Trice, 1985; Trice, Ogden, Stevens, & Booth, 1987).

Recent research on Locus of Control has identified relationships between LOC and student use of technology (Rita, 1997; Wang, 2005). However, with the exception of one recent study (Estrada, Dupoux, & Wolman, 2005), there is a paucity of research investigating LOC scales among language minority students. Estrada et al.'s study addresses the effects of LOC and other predictors on the personal-emotional and social adjustment of community college ELLs. The study found that LOC was significantly associated with social and personal-emotional adjustment. Other research indicates that LOC may be sensitive to cultural differences; Reimanis (1980) found that personal control was similar among individuals from comparable cultures.

In 1975, Rotter posited that more precise predictions could be made from LOC instruments that were developed with specific behavioral areas than from generalized ones. The most widely researched specific LOC scale was developed by Crandall, Katovsky, and Crandall (1965). This scale measures school children's perceptions of their control in achievement/academic situations. Unfortunately, not many scales have been designed to be used specifically with college populations; the two that have been used with these populations (Clifford, 1976; Lefcourt, VonBaeyer, Ware, & Cox, 1979) have been described as giving short shrift to scheduling, class attendance, and competing activities (Trice, 1985). Trice indicated that these scales focused

exclusively on effort and studying. His scale was purported to predict a wider range of relevant college behaviors.

The LOC scale used in this study was developed by Trice to predict a wide range of behaviors related to college students. It was designed to have high reliability and construct validity with respect to Rotter's LOC scale (1966) and Smith's achievement motivation (1973), while simultaneously having high predictive validity with respect to academic performance. It is a 28-item, self-report inventory using a true/false response format. Low scores are associated with higher GPAs and high scores are associated with lower GPAs. The inventory is designed to measure beliefs in personal control over academic outcomes. The Kuder-Richardson 20 reliability coefficient is reported at .70, indicating an adequate level of internal consistency. Also, test/retest reliability over an interval of five weeks was .92, and discriminant and convergent validity data seem to be adequate for research purposes (Trice, 1985). Furthermore, this LOC scale has been shown to be predictive of such academic outcomes as class grades, class attendance, extra credit points earned (Trice, 1985), freshman GPA (Ogden & Trice, 1986), class participation, homework completion, and study time (Trice, Ogden, Stevens, & Booth, 1987).

Generation 1.5

The term Generation 1.5 was first used in the late 1980s to describe students who fit the description of neither first generation nor second generation Americans (Rumbaut & Ima, 1988). Generation 1.5 students usually have come to the U.S. in their early teen or pre-teen years. Often, they have attended U.S. schools, and many of these students have even graduated from American high schools. While attending American schools, these students have had time to acquire informal English. Many of them use American idiomatic expressions, and some may even have American accents. Errors in their language are detectable, but they do not interfere with understanding, and these students are comfortable speaking English and do so with relative

ease. Although these students develop oral fluency in English rather quickly, this oral fluency often hides difficulties with academic English (Ruiz-de-Velasco & Fix, 2000). Their reading, grammar, and writing skills are usually significantly below those of their college-ready peers (Goen, Porter, Swanson, & VanDommelen, 2002; Harklau, 2003; Myles, 2002; Roberge, 2002). Other academic skills, including critical thinking and general knowledge, are typically weak. Due to their lack of familiarity with formal registers of language, Generation 1.5 students may also lack the ability required to discuss abstract concepts using appropriate grammatical or rhetorical structure. Generation 1.5 students' limited development of academic literacy might be due to prior lack of attention to problems and barriers that interfere with the students' abilities to demonstrate what they know in writing (Grant & Wong, 2003; Harklau, 2003). Myles (2002) suggested that a lack of prior instruction in writing for academic purposes could cause the students' lack of motivation for learning and create a negative attitude toward English.

Differences and difficulties

In many ways Generation 1.5 students are similar to native English-speaking college preparatory students. Interruptions in many immigrants' schooling upon arrival in the U.S. often produce gaps in the cultural and academic knowledge expected of college students that can take several years to remedy (Spack, 2004). Unfortunately, two semesters in English speaking preparatory programs are insufficient to address the unique problems presented by Generation 1.5 students. Other research has shown that Generation 1.5 students' patterns of errors differ from those of international students and, as a result, should lead to different placement testing and instruction (Reid, 1997). According to Thonus (2003), many Generation 1.5 students have lost or are in the process of losing their home languages without having learned how to write well in these languages or use them academically. Therefore, teachers need to use different teaching techniques for these students given that there are fewer first language skills on which to

scaffold new learning. Unfortunately, little has been done for this population given that the teachers who most often work with Generation 1.5 students (community college faculty, graduate students, and part-time instructors) are unlikely to have the background knowledge and material resources needed to carry out research and advocacy efforts for these students (Matsuda, Canagarajah, Harklau, Hyland, & Warschauer, 2003).

Literacy can be expressed in many different forms: functional, academic, workplace, informational, constructive, emergent, cultural, and critical (Wink, 1999). Swanson (2004) suggests that lack of a specific type of literacy is another reason for the lack of college readiness of Generation 1.5 students. Many high school programs have stand-alone ESL classes that teach language in discrete lessons that emphasize functional literacy (i.e., reading and writing) rather than critical literacy (i.e., understanding the social and political implications of written knowledge), which is what is commonly needed for success in college (Swanson, 2004). Academic preparation aside, the problem still remains: Generation 1.5 students do not fit perfectly into any of the traditional student categories, nor have they been a significant focus of research on students learning to write in ESL (Harklau, 2003).

Community colleges are seeing increasing numbers of Generation 1.5 students. As of 1999, some schools even began to report that Generation 1.5 students were forming the majority of their second language students (Lay, Carro, Tien, Niemann, & Leong, 1999), and in 2002, other schools reported these same findings (Blumenthal, 2002).

Generation 1.5 students complicate the issue of initial placement, given that these students do not fit the mold of traditional EAP students or traditional college prep students. This question has been debated in the areas of ESL curriculum, program design, and placement, and is reflected in the different methods and materials used from institution to institution (Harklau,

2000; Smoke, 2001). Harklau, Losey, and Siegal (1999) have noted that EAP pedagogy and materials are geared toward students who have recently arrived in the U.S. as adults, often with sophisticated educational backgrounds. If curriculum and instruction have been developed with this type of student in mind, Generation 1.5 students placed into these EAP courses are clearly mismatched. Many teachers feel that the curriculum designed for traditional EAP students is often too slow for Generation 1.5 students, while the curriculum designed for preparatory students is often over their heads. They would likely benefit from working with EAP professionals, but if EAP classes are not a perfect fit for them, where should they be placed? Valdes (1992) believes that it is necessary for secondary and postsecondary programs to develop criteria to distinguish between students who need ESL instruction and students (like Generation 1.5 students) who have problems with academic English but don't need ESL classes. Valdes labels these two groups "incipient bilinguals" and "functional bilinguals," respectively, and suggests that functional bilinguals should be placed into mainstream courses but still be provided specific instruction that allows them to work on the fossilized aspects of their second language.

Harklau (2003) makes a variety of suggestions for working effectively with Generation 1.5 students. The most germane to this research is that it is important to be aware of students' prior academic literacy experiences because research has shown that high school students in low track classes receive different kinds of instruction from those in higher tracks (Harklau, Losey, & Siegal, 1999). For example, low track students focus more on substitution drills, dictation, short answer, and writing from models, while high track students are taught argumentative and analytical writing and have experience writing research papers.

Program placement for Generation 1.5

Bergen Community College in New Jersey has an American Language Program (ALP) which serves the same population as many EAP programs in Florida: college-bound, nonnative

English speakers. In the mid to late 1990s, Bergen began to notice a dramatic increase in the number of English language minority students who were graduates of American high schools and who didn't fit the traditional student molds (Miele, 2003). Realizing that these students did not fit neatly into the common three categories of college-ready students, preparatory students needing remedial English, and ESL students, faculty at Bergen developed special courses to deal with the Generation 1.5 students they labeled as "crossover" students. Students who had resided in the U.S. for at least eight consecutive years were given the standard placement test and either placed into preparatory English courses or college-level courses based on their results. Students with fewer than three years in U.S. high schools who were nonnative speakers of English and/or students who had resided in the U.S. for fewer than eight consecutive years were assessed using the Comprehensive English Language Tests (CELT). If these students demonstrated significant ESL characteristics in their writing samples, had reading and sentence skill scores comparable with 8th or 9th grade students, spent three or fewer years in American high schools, and were nonnative English speakers predominantly exposed to another language at home, they were designated as crossover students. These students were then advised to take specifically designed crossover courses.

Another program, hosted in the General College of the University of Minnesota, uses principles of developmental education and content-based ESOL pedagogy to help Generation 1.5 students learn academic language (Moore & Christiansen, 2005). Their approach is believed to be more effective than stand-alone ESL because language is often learned best in the context of content area learning (Krashen, 1982; Zamel, 2004). Students are required to enter the program if they have been in the U.S. educational system eight or fewer years, have a home language other than English, and have test scores documenting a need for English support. Most of the students

in the program have been in the U.S. between two and eight years. This program has proven successful in both retention and successful progression of students into degree-granting programs (Christensen, Fitzpatrick, Murie, & Zhang, 2005).

The Need for Better Placement Procedures

There are currently 1,202 community colleges nationwide (AACC, 2007), and two-year schools can expect an increase of nearly 20% in growth by 2010 (Gerald, 2000). Cohen (2002) estimated that more than half of the community colleges nationwide offered ESL/EAP programs. In the 21st century, even more community colleges will be required to offer ESL instruction. One of the major issues that will need to be addressed is proper student placement. For students to get the most out of their post-secondary education, they will need to be accurately placed into programs. Unfortunately, as community colleges are scrambling to develop ESL/EAP programs, there appears to be a tendency for each college to reinvent the wheel in terms of assessment.

Defining, measuring, and documenting the success of ESL/EAP students is a complex and difficult task that has rarely been attempted outside individual institutions (Ignash, 1995), and no two colleges seem to be using the same processes for placement and assessment. According to Blumenthal (2002), procedures and policies for assessment and placement of EAP students differ widely from college to college. In 2005, Florida's Division of Community Colleges and Workforce Education conducted a survey seeking responses to questions about developmental education in the Florida Community College System. This survey noted, "Clearly, there is not a standardized assessment and placement process for ESL students in institutions across the community college system" (Florida Department of Education, 2005). Table 2-1 below details the findings regarding placement practices at community colleges across the state.

Table 2-1. Number of Florida community colleges employing common practices in the placement of ESL students, with many colleges using multiple practices

Test or Practice Utilized                  Number of Colleges
CPT/Accuplacer College Placement Test      18
ACT/SAT                                     7
LOEP/TOEFL/English Placement Test/CELT     15
Use of writing sample                       5
Consultation with an advisor                2
CASAS                                       4
TABE                                        2
No ESL course offerings/program             4

Note: Adapted from Developmental Education in the FFCS.

While some colleges use holistic writing assessments, others use discrete point grammar tests. While some colleges require students to enroll in specific classes based on assessment results, others leave decisions up to students. This diversity is understandable given the varying nature of colleges and the variable demographics and needs of the student populations.

It is not only in the realm of ESL where schools have felt testing pressures. In Florida in the 1980s, minimum competency testing was established in the form of the College Level Academic Skills Test (CLAST). Students had to demonstrate ability on this exam before they could be awarded an Associate in Arts degree. Cut scores, or benchmark scores that indicate whether or not students have successfully demonstrated performance of some skill on the CLAST, were a contested topic, and scores were raised incrementally until 1992 when they reached their current cut levels. Since January of 1996, further adjustments have led to the recognition of waivers for all sections of the CLAST. Students can receive waivers if they earn a 2.5 or higher on related coursework or if they meet state-mandated criteria on the placement tests as they enter the community college system. This adds to the importance of reliable and accurate placement testing.

Another increase of testing pressures in the State of Florida occurred in May of 1997 when legislation required each institution to set its own course requirements and to implement a
The state then developed various versions of a blueprinted exam to be used for these exit purposes. However, it failed to mandate a statewide cut-off score for the exit tests. The Southern Regional Education Board, a consortium of 16 southeastern states, has recommended the establishment of statewide standards for student performance and for placement of students into college courses (Abraham & Creech, 2002).

In addition, many schools are feeling increased pressure to place all students accurately. In August of 1997, full implementation of a major change in placement testing was mandated by the Florida legislature. In conjunction with Educational Testing Service, the State of Florida adopted a placement test and one acceptable score statewide for students entering community colleges. Students scoring below the cut-off are required to take and pass remedial courses in each area of deficiency: English, reading, and mathematics (Smittle, 1996). Students taking these remedial courses pay identical fees but do not earn credit toward graduation. Accurate placement into these remedial programs has become very important precisely because these courses cannot be applied toward graduation credits. In addition, these remedial programs have a serious financial effect on students, colleges, and the state, because all three share their cost. Florida's House Bill 1545, which went into effect in July of 1997, requires students to pay full instructional costs after failing a first attempt in college preparatory courses. Another funding change introduced around the same time was that, as of the 1999-2000 academic year, community colleges would receive state funds based on program completions (Abstien, 1998), whereas prior to this, funding had been dependent on program enrollment. In spite of the obvious challenges resulting from this change, one significant benefit was the shift in focus to program outcomes (Banta, Rudolph, Van Dyke & Fisher, 1996; Grunder & Hellmich, 1996).
Section 1008.30(1) (formerly 240.117) of the Florida Statutes K-20 Education Code explains placement further: "The State Board of Education shall develop and implement a common placement test for the purpose of assessing the basic computation and communication skills of students who intend to enter a degree program at any public postsecondary educational institute" (Florida Statutes Title XLVIII, 2003). The following subsection, 1008.30(2), details that the mandated test assesses basic competencies that are essential to perform college-level work. Students not meeting the criteria fall under a directive outlined in 1008.30(4)(a):

Public postsecondary educational institution students who have been identified as requiring additional preparation pursuant to subsection (1) shall enroll in college-preparatory or other adult education pursuant to s. 1004.93 in community colleges to develop needed college-entry skills... A passing score on a standardized, institutionally developed test must be achieved before a student is considered to have met basic computation and communication skill requirements.

This focus on placement and testing led to additional challenges. As of the 1997-98 academic year, the problems with placement criteria for entering freshmen at Miami Dade Community College, one of the largest postsecondary institutions in the State of Florida, reached such a level that the school administration issued a memorandum asking for assistance in developing and enhancing initiatives that would guarantee student success. The memorandum noted that approximately 72% of incoming students would require placement in preparatory reading classes and 57% in preparatory writing courses (Padron, 1997).

When nearly 72% of an institution's student body requires remedial training of some sort, there is clearly a problem. Increasing the number of preparatory courses seemed like a good solution to meeting the needs of these under-prepared students. For example, the State of Florida increased the number of levels of preparatory English for ESL students from three to six.
So as not to negatively affect county programs, Valencia adopted only five of the six levels. And for an expanded preparatory program to function properly, there is again a reliance on proper placement.

Program Models

In 1973, John Upshur discussed the educational context of language testing with examples of different types of instructional programs. To illustrate the many problems faced by multi-level, multi-skill preparatory programs, an abbreviated version of his discussion follows. Upshur started by discussing a simple program in which students enter, undergo instruction, and then leave; a visual of this can be found in Figure 2-1. The two major problems with this program model are that there is no indication of whether the program is appropriate for all who enter and no way of knowing whether the instruction offered is effective.

Figure 2-1. Upshur's simple program model: This figure illustrates the simple program model discussed by Upshur (1973). Students enter, receive instruction, and leave. [Figure: a flow diagram of Enter -> Instruction -> Exit]
Extensions to the previous program model included adding tests at key decision stages. First, an exit test was added to answer the question of whether instruction was effective. However, this failed to identify whether the program was appropriate for all who entered. Next, tests were given prior to entrance. This solved the program-appropriateness question but introduced the need for multiple equivalent versions of tests, so that students would not simply retake the same test at entrance and exit. Upshur's most complex model (Figure 2-2) shows an example of a multi-level program with several different types of placement decisions being made. In this model, the initial decisions of admission and placement are made on the basis of a single test. If students score too low, they are not admitted into the program. If they are able to demonstrate mastery of the objectives of the courses, they are exempt. Those students who are at an appropriate level for the program are then placed into one of three levels of instruction based on their test scores. At the completion of each level of instruction, students take an achievement test to determine whether they can progress to the next higher level. If they pass, they move forward. If they do not pass, they are given remedial work based on the areas of weakness identified by their achievement test scores. In this program, rather than being tested again after remedial work, students are simply moved forward to the next step of the program. Upshur noted, however, that it would be possible to test these students again. Upshur's program models were designed to show the range of testing and placement decisions that need to be made in a program and to illustrate the complex assessment issues involved from entrance through exit. They have been used here as a basis for explaining the importance of proper placement at Valencia.
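To make the flow of decisions concrete, the sketch below encodes Upshur's complex model as a small Python program. Every threshold and level boundary here is a made-up placeholder; Upshur (1973) specifies the structure of the decisions, not particular scores.

    # A sketch of Upshur's complex model; all cut scores are hypothetical.
    ENTRY_MIN, EXEMPT_MIN, PASS_MARK = 40, 90, 70

    def initial_decision(entry_score):
        """Admission, exemption, or placement into one of three levels."""
        if entry_score < ENTRY_MIN:
            return "not admitted"
        if entry_score >= EXEMPT_MIN:
            return "exempt"
        return 1 if entry_score < 55 else 2 if entry_score < 75 else 3

    def run_program(level, achievement_scores):
        """Each level ends with an achievement test; failing students get
        remedial work keyed to the level, then move forward without retesting,
        as in Upshur's model."""
        while level <= 3:
            if achievement_scores[level] < PASS_MARK:
                print(f"Remedial work {['A', 'B', 'C'][level - 1]}")
            level += 1
        return "exit"

    placement = initial_decision(62)                   # hypothetical score -> level 2
    if isinstance(placement, int):
        run_program(placement, {1: 80, 2: 65, 3: 75})  # hypothetical test results

The point of the sketch is how quickly the decision stages multiply: one entry test plus one achievement test and one remedial branch per level, before any subdivision by language skill.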
Figure 2-2. Upshur's complex model: This figure illustrates the most complex program model discussed by Upshur (1973). It is an example of a multi-level program with several types of decisions being made. [Figure: a flow diagram in which a single entry test routes students to "do not enter" (score too low), "exempt," or one of three instruction levels; an achievement test after each level routes students forward on a pass or to remedial work A, B, or C on a fail, ending in exit.]
Like Upshur's complex model, students at Valencia are placed into the program on the basis of a test (Valencia's test actually includes four subtests). However, while Upshur's model has three levels, Valencia's has five. Furthermore, at Valencia each of the five levels is subdivided again by language skill. Currently, students are tested and placed into all skills at one level. Some personnel at Valencia feel this is appropriate, while others believe students should be placed into skills across levels based on individual subtest scores. This cross-level skill placement based on individual subtest scores in each skill is also supported by the College Board (2003). A complete model of Valencia's five-level program is beyond the scope of this discussion; however, the model in Figure 2-3 depicts what an improved model of Valencia's third level would look like with these additional assessment points. If the current process is efficiently placing students into appropriate skills and levels, some of the decision stages could be avoided. Currently, Valencia does not employ diagnostic and achievement tests at the beginning and end of each skill course within each level. By enhancing the effectiveness of initial placement practices, Valencia may never need to develop these other testing measures. It should be stated here that it is not the intent of this study to analyze diagnostic and achievement measures; this study seeks to analyze and enhance current placement practices. Valencia's model has been added here to illustrate the number of decision stages and testing measures that could be avoided if students are placed properly at the time of admission.
Figure 2-3. Possible improved model: This model illustrates the decision stages and types of tests that could be avoided by enhancing placement practices at VCC. [Figure: a flow diagram of Level 3 in which a confirming diagnostic/placement test in each skill (reading, writing, speech, grammar) routes students scoring below 50% to Level 2 and students scoring above 90% to Level 4 if prerequisites are completed; students scoring between 50% and 90% receive instruction and then take an achievement test, passing to Level 4 with a score above 70%.]
Current Placement Practices

Valencia Community College, like most other community colleges, has two tracks for students who are not ready to take college-level reading and writing courses. Prep One and Prep Two courses have been developed to equip native English-speaking students with the academic reading and writing skills they will need to survive in college. These skills include identifying implied and stated main ideas; recognizing tone, bias, and fact versus opinion; and understanding the rhetorical structure of a five-paragraph essay. Students are placed into or exempted from these preparatory courses based on entrance exams, placement exams, or a combination of the two. Students scoring above established cut scores on placement exams are admitted directly to English composition; students scoring slightly under established cut scores are placed into Prep Two; and students scoring significantly below established cut scores are placed into Prep One.

At Valencia, most students are admitted using one of the following instruments: the SAT (Scholastic Aptitude Test), the ACT (American College Test), or the Accuplacer CPT (College Placement Test). Prospective Valencia students who do not have satisfactory English and reading scores on the ACT or the SAT are required to take the state-approved CPT, a computer-adaptive placement test. Moreover, students who do not have recent (within two years) ACT, SAT, or CPT scores are also required to take the CPT for proper placement (VCC College Catalog, 2007).

All entering freshmen who do not score sufficiently well on the Accuplacer CPT (College Board, 2003) for admission and who self-identify as nonnative English speakers are given the Accuplacer LOEP exam (College Board, 2003). Prior to 2001, Valencia's cutoff scores for LOEP placement were based on placement information contained in the LOEP Coordinator's Guide, which outlines high, intermediate, and low levels. However, after a curricular switch from three distinct levels of ESL to five distinct levels of EAP, a result of the state's initiative to standardize community college ESL programs in 2001, Valencia was forced to reevaluate its placement cutoff scores.
It was mutually agreed upon by faculty and staff that the new cutoff scores would be created by decreasing the intervals in the existing placement cutoff scores rather than by performing expensive and time-consuming statistical procedures to develop new ones. Other institutions across the state simply looked to Valencia as a model and adopted its new cut scores. It was understood that this would increase placement error; however, program coordinators were still able to make placement decisions about individual students based on a combination of results from both the objectively scored multiple-choice questions and the subjectively scored essay elements of the LOEP. In addition, diagnostic testing was used at the beginning of each semester for each class; it was thought that if students were misplaced at entry into the program, the misplacement could be addressed at the time of diagnostic testing.

Subsequent decisions served to increase the error in placement of students into the EAP program at Valencia, thereby elevating the need to identify more accurate placement procedures. In 2001, in the interest of one-stop registration, an administrative decision was made to drop the writing sample of the LOEP from the entry assessment process for nonnative English-speaking applicants because it delayed their placement and registration. The five ESL faculty members on the committee felt it pedagogically unsound to discontinue evaluation of the writing sample because it was the only direct, productive measure of English. However, at the behest of personnel in the assessment office, supported by members of student services, the committee agreed to a one-year trial of this procedure, secure in the belief that the departmental "diagnostic" exams, which at the time were given at the beginning of each course, would allow students who had been improperly placed in EAP courses to move into more appropriate courses.
Unfortunately, in 2002 course-based diagnostic testing and early movement of misplaced students were eliminated as options for EAP coordinators. In addition, a review of the literature conducted by the author found that Valencia was using the LOEP for placement in a manner inconsistent with that recommended by the LOEP developers (see "The LOEP and Its Recommended Use" on page 54).

Pilot Study

In 2003, in response to complaints by EAP instructors at Valencia about misplacement of students, discussions began again about revamping the placement process. It was suggested that it might be more appropriate to place students into the different language skill classes (reading, writing, speech, and grammar) based on scores from the three objectively scored LOEP tests (Reading, Language Use, and Sentence Meaning) and a holistically scored essay. To examine the possible impact of these changes, the researcher conducted a limited pilot study: a post hoc analysis of how differently students would have placed into levels if individual subsections of the LOEP were used rather than aggregate scores. In the analysis, actual placement levels derived from the use of aggregate scores were compared to individual scores on each aspect of the LOEP test for each student in the sample population. The researcher (with the help of the Office of Institutional Research) obtained access to the placement test scores of 1,052 students in Valencia's database from June 2002 through August 2003. The analysis found that if students were placed into levels solely on their LOEP Reading test scores instead of the aggregate of the three LOEP subtest scores, students would place one level above or below their aggregate placement level 51% of the time. It was also found that if students were placed into levels solely on their Sentence Meaning test scores instead of the aggregate scores, students would place one level above or below their aggregate placement level 49% of the time.
And if students were placed into levels solely on the basis of their Language Use test scores instead of the aggregate scores, students would place one level above or below their aggregate placement level 50% of the time. Unfortunately, essays were not being evaluated during this time period, so essay scores could not be included in the pilot study.

Although this pilot study revealed differences between placement levels derived from individual LOEP scores versus aggregate scores, it did not identify which subtests best predicted success. However, the study did provide decision makers with enough information to make a change: it was decided that essay readers would be trained and that evaluation of essays would once again be used in the placement process.
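A post hoc comparison of this kind is straightforward to reproduce. The following Python sketch assumes a pandas DataFrame of subtest scores and a generic score-to-level conversion; the cut scores and the sample records shown are placeholders, not the values or data used in 2002-2003.

    import pandas as pd

    # Placeholder cut scores mapping a score to EAP levels 2-6.
    CUTS = [(66, 2), (76, 3), (86, 4), (96, 5), (106, 6)]

    def score_to_level(score):
        """Return the highest level whose cut score the score meets."""
        level = None  # None means the score falls below the lowest cut
        for cut, lvl in CUTS:
            if score >= cut:
                level = lvl
        return level

    # Hypothetical records standing in for the 1,052 students analyzed.
    df = pd.DataFrame({"reading": [82, 97, 71],
                       "sentence_meaning": [88, 90, 69],
                       "language_use": [75, 104, 80]})
    subtests = ["reading", "sentence_meaning", "language_use"]
    agg_level = df[subtests].mean(axis=1).apply(score_to_level)

    # For each subtest, how often would placement land one level away
    # from placement by the aggregate score?
    for subtest in subtests:
        sub_level = df[subtest].apply(score_to_level)
        off_by_one = (sub_level - agg_level).abs() == 1
        print(f"{subtest}: {off_by_one.mean():.0%} placed one level away")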
Currently, students are placed according to a formula that was agreed upon by a committee made up of faculty and decision makers from various departments at Valencia. The formula was selected because it was easy to add the necessary fields to the database and simple to calculate. Furthermore, it gave counselors a number that they could compare to existing cut scores, and it satisfied teacher requests that a written sample of student language performance be included in the placement of EAP students. However, prior to the current study, this method was not empirically tested for its ability to place students accurately. Table 2-2 lists the instructions and gives steps for placing a hypothetical student based on LOEP test scores.

Table 2-2. Explanation and an example of placing a student into EAP courses at Valencia.

1) Average the student's scores on the three objectively scored LOEP tests (Reading, Sentence Meaning, and Language Use).
   Example: LORC = 106, LOSM = 92, LOLU = 84; average = 94.
2) Derive a number for the holistically scored essay. Trained readers read each student's essay and rate it from 1 to 7; this rating is multiplied by 10 and added to 50.
   Example: essay rating = 5; 5 x 10 = 50; 50 + 50 = 100; derived essay score = 100.
3) Average the numbers derived from steps 1 and 2 and place the student into a level based on Valencia's cut scores.
   Example: average = 97; student placed in level 5.

Valencia cut scores: students scoring 65 or below are not admitted; students scoring 66 are admitted to level 2; 76, to level 3; 86, to level 4; 96, to level 5; 106, to level 6; and students scoring 116 or higher are exempt.

The cut scores Valencia employs only place students within levels. Students placed into level 5 are required to take reading, writing, grammar, and speech at that level. This presents problems when students score at different levels across the subtests. For example, the sample student in Table 2-2 scored a 5 (which converts to 100) on the essay, a 106 on the Reading, a 92 on the Sentence Meaning, and an 84 on the Language Use. Based on this formula, this student would be placed into level 5 for all language skills. However, if placed by individual scores, this student would be placed in level 6 for reading, level 5 for writing, and level 4 for grammar. Many of the faculty members at Valencia believe that placement across levels is more appropriate than placing a student into all skills at one level. Furthermore, none of the currently used subtests addresses productive or receptive speech/listening skills; therefore, some instructors feel a listening/speaking test should be added unless one of the other LOEP subtests is found to be a reliable predictor of success in speech classes.
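Expressed in code, the three steps of Table 2-2 and the cut scores above reduce to a few lines. This minimal Python sketch uses the values from the table; the function names are illustrative, not Valencia's.

    # The placement formula from Table 2-2, with Valencia's cut scores.
    CUT_SCORES = [(66, 2), (76, 3), (86, 4), (96, 5), (106, 6)]

    def composite_score(lorc, losm, lolu, essay_rating):
        objective_avg = (lorc + losm + lolu) / 3      # step 1
        derived_essay = essay_rating * 10 + 50        # step 2
        return (objective_avg + derived_essay) / 2    # step 3

    def place(score):
        if score <= 65:
            return "not admitted"
        if score >= 116:
            return "exempt"
        level = None
        for cut, lvl in CUT_SCORES:   # highest cut score met wins
            if score >= cut:
                level = lvl
        return level

    # The hypothetical student from Table 2-2:
    score = composite_score(lorc=106, losm=92, lolu=84, essay_rating=5)
    print(score, place(score))  # 97.0, level 5

Note that applying place() to each score separately reproduces the cross-level placement described above: place(106) yields level 6 for reading, place(100) (the derived essay score) yields level 5 for writing, and place(84) yields level 4.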
The LOEP and Its Recommended Use

Currently, all students at Valencia are required to take the Accuplacer CPT (College Board, 2003), a computer-based placement test. However, for EAP students this exam is not used for actual placement purposes; CPT scores are simply gathered and kept on record because Florida requires community college students to take the CPT. Students who self-identify as nonnative English speakers are required to take the Accuplacer LOEP (Levels of English Proficiency Test). According to the LOEP Coordinator's Guide (2001), Valencia is currently using the LOEP in a manner that is inconsistent with what the LOEP developers recommend. The skills tested by the LOEP are described below.

LOEP Reading Comprehension (LORC): Students read passages of 50 to 90 words and then answer questions about their reading. The passages cover a variety of topics; students may read about the arts, science, or history. Half of the test questions ask about specific information stated in the passage; the other half ask students to identify main ideas, fact versus opinion, or the author's point of view.

LOEP Language Use (LOLU): This test is designed to measure students' understanding of English vocabulary. The sentences come from a variety of subject areas. Students are asked questions about basic and important idioms (particularly terms of beauty, age, greatness, and size); adverbs such as before, after, and during; and prepositions of direction and place.

LOEP Sentence Meaning (LOSM): Students are asked to fill in a blank with a word or phrase, or to combine two sentences. The skills covered are writing skills, including the proper use of nouns and verbs.

LOEP Essay (LOES): Although Accuplacer offers a computer-graded writing assessment, the LOEP Essay at Valencia is an essay exam that is graded locally using trained readers and a holistic rubric; the rubric used for grading the essay can be found in Appendix A. Students have 60 minutes to write an essay on a topic provided by test administrators and are asked to organize their ideas carefully and to present them in more than one paragraph.

Additional LOEP tests are available but are not currently used by Valencia. They include:

LOEP Listening: For this test, a committee of college faculty and other educators defined the listening skills considered important for entry-level college students. Both literal comprehension and implied meaning were included, and seven listening skills were identified. Multiple-choice items were developed to measure these skills.
WritePlacer ESL: This is a direct measure of student writing using prompts and rubrics designed by ESL experts. Student essays are scored using the IntelliMetric artificial intelligence system, a computer-graded system.

As noted above, students at Valencia are currently placed into one of five levels of courses by averaging their essay scores with the average of the three objective LOEP subtests. This score is then compared with cut scores to place students into levels 2, 3, 4, 5, and 6 (again, the cut scores being 66, 76, 86, 96, and 106, respectively). Unfortunately, the cut scores being used were never normed to Valencia's program or student population. They were instead taken from the LOEP Coordinator's Guide and then manipulated to fit five levels instead of the three levels they were originally designed for, by decreasing the spread in each cut-score range. The LOEP Coordinator's Guide actually did make recommendations (which led to the current research) as to how the LOEP should be used for the placement of students:

The three components of LOEP may be administered singly or as a group. We recommend that institutions investigate which score combinations provide the greatest accuracy for their curricula, and establish cut scores and placement mechanisms accordingly. Particularly in the case of ESL, using individual test scores for placement in the various topical areas would be a necessary part of establishing a placement system using LOEP. Our purpose here, however, is to provide evidence that LOEP is valid. (College Board, 2001)

One can infer from these instructions that scores on the LOEP subtests should be used to place students into different skills at different levels. For example, the Reading test should be used to place students into reading classes at different levels. If investigations at a particular institution found that Reading subtest scores also predicted placement into other courses, like speech, then Reading subtest scores could be used to place students into those courses as well. One could assume that the Language Use and Sentence Meaning subtests might predict success in writing or grammar courses, but the College Board left it up to individual institutions to identify which subtests or subtest combinations provided the greatest accuracy in placement.
Valencia currently places students into one skill level regardless of differences in individual LOEP subtest scores.

Holistically Scored Essays

As mentioned above, holistically scored essays are currently part of the placement practices at Valencia; the essay scoring rubric can be found in Appendix A. However, some administrators, in the interest of quicker placement testing, would like to return to using only the objectively scored LOEP subtests for placement decisions, and this issue remains controversial at VCC. An important benefit of using only the objectively scored tests is cost: no expense would be incurred by paying readers to score the essays.

When it comes to placement into composition courses, it has been suggested that a timed essay exam is the preferable placement measure if the only alternative is a multiple-choice test (Garrow, 1989; Wolcott & Legg, 1998; Zinn, 1988). In addition, some studies have found that placing language and ethnic minority students using only multiple-choice tests can be problematic (Belcher, 1993; College of the Canyons Office of Institutional Development, 1996; Garrow, 1989; Jones & Jackson, 1991; White, 1990), and timed essays have been found to be more predictive of final grades in writing courses when combined with multiple-choice tests (Cummings, 1991; Cunningham, 1983; Galbraith, 1986; Garrow, 1989; Isonio, 1994; Wolcott, 1996; Wolcott & Legg, 1998). Based on this research, one could conclude that Valencia is doing the right thing by including both the essay and the objectively graded subtests in its placement practices. Whether these practices are actually making a difference and justifying the additional time and cost has yet to be determined.
CHAPTER 3
MATERIALS AND METHODS

Introduction

This research used survey data and data collected from Valencia Community College's Office of Institutional Research to examine characteristics that would lead to more efficient placement of students into EAP courses. It also sought to more accurately identify Valencia's EAP student population on Valencia's three major campuses and to elicit student and teacher feedback about placement. It compared the predictive values of individual LOEP subtest scores with two composite models of LOEP subtest scores. In addition, it analyzed whether the variables of Locus of Control (LOC) or a computed indicator of Generation 1.5 status assisted in the prediction of successful placement of students into EAP classes.

Study Setting

Valencia is a fairly large community college. According to Valencia Community College Facts (2006), it is the third largest community college in the State of Florida, with an FTE (full-time equivalent) enrollment of 21,227 students. Fifty-eight percent (58%) of the student body is female; the national average is 59% (American Association of Community Colleges, 2007). The student body of Valencia's EAP program, which includes approximately 80 sections during the summer and 120 during the fall and spring, is quite diverse, with students from different ethnic and socioeconomic backgrounds as well as rural, urban, and suburban settings. Annual enrollment cost at Valencia is slightly lower than the national average: $2,100 per year as opposed to $2,272. The average student age at Valencia, 24, is also lower than the national average of 29. On average, though, Valencia is similar to the other 1,202 community colleges nationwide. Because only limited demographic data could be gathered through student records, more detailed demographic data were gathered through surveys to inform the level of generalizability of the population. This information is reported in the Results section.
Participants

Participants in the survey part of this study were all consenting students and instructors taking part in EAP courses (levels 3, 4, 5, and 6) at Valencia's East, West, and Osceola campuses. During the 2006 Summer A and C semesters, 470 students and 19 instructors participated in the survey part of this research. With the assistance of Valencia's Office of Institutional Research, EAP student placement and final course grade data were gathered for all of the survey respondents. In addition, data were gathered for first-time students in Valencia's EAP program over the previous three years (2003-2006), yielding complete LOEP placement test scores and final course grade information for an additional 1,030 students.

Materials and Data Collection

During the summer of 2006, the investigator visited and administered surveys in all regular EAP courses offered during the Summer A and C terms at Valencia Community College. These surveys took place during the middle of the semester; teachers were asked to take an instructor survey, and students were asked to take a student survey.

The questions on the teacher survey were intended to yield information about teacher perceptions of placement effectiveness at Valencia. Questions 1-3 were Likert-type questions asking teachers to: 1) rate how well Valencia does at placing students, 2) indicate how often they have students they feel are misplaced, and 3) report how many students were misplaced during the semester of the survey. The fourth question was open-ended, allowing teachers to qualify any of their answers or make comments. Teachers were given class rosters and asked to rate each current student as well placed or not well placed; if students were not well placed, instructors were asked to provide a placement level for them. Each teacher received only one survey, but it contained class rosters for all of the classes he or she was teaching during the summer semesters. The teacher survey can be found in Appendix B.
Teachers were also sent a description of Generation 1.5 students and another set of class rosters for all of the courses they were teaching. They were asked to read the definition and indicate which students they felt were members of Generation 1.5. A copy of this survey has been included in Appendix B.

Questions on the student surveys were intended to provide information about demographics (questions 7-12 and 24), academic history (questions 11 and 13), language use (questions 20-23), technical knowledge (questions 25 and 26), Locus of Control (questions 27-54), and general feelings about placement (question 55). A copy of the student survey can be found in Appendix B.

To maintain consistency in the administration of the surveys, all surveys were administered by the investigator, and the same introduction script was used in each class (see Appendix B). Survey responses for both students and instructors were then entered into SPSS for analysis.

Procedures

The following represents the procedures used to answer the research questions. Although not officially addressed as a research question, the researcher was interested in presenting a detailed description of the composition of Valencia's EAP population so that findings from this study might be generalizable to other community colleges and universities in Florida and the U.S. Student survey responses were entered into SPSS, and the data were analyzed using descriptive statistics to find frequency distributions and measures of central tendency. Results are reported in Chapter 4.

One goal of this research was to identify the predictive abilities of the LOEP subtests on final course grades; therefore, it was decided that only first-semester students would be included in this part of the research, because first-semester students would have taken the LOEP subtests immediately prior to attending their first courses at Valencia.
However, because it was impossible to guess the number of first-semester students taking courses during the time frame of the study, two studies were proposed to guarantee enough students for unbiased multiple regression procedures. The first study used test data from 1,030 first-time students in Valencia's EAP program over the three-year period prior to the summer of 2006 (2003-2006). The second study used all willing EAP students taking courses at Valencia during the Summer A and C terms of 2006; however, only first-time students were used in the analyses. These studies were conducted in an effort to answer the following research questions:

1. What are the student and teacher beliefs about placement at Valencia?
2. Which of the following three approaches best predicts student success as measured by final course grades and teacher evaluation of placement: 1) averaging the three objective LOEP subtests, 2) using an equally weighted average of both the objectively and subjectively scored LOEP subtests, or 3) using the four LOEP subtests as individual predictors?
3. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by final course grades?
4. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by teacher evaluation of placement?
5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of placement in EAP courses as measured by final course grades and teacher evaluation of placement?

Teacher comments qualifying their answers about placement were also analyzed descriptively, following a simple method for coding qualitative data (Lofland & Lofland, 1995). After teacher responses were gathered, they were transcribed, yielding 132 distinct comments. Each of the 132 comments was then run through an initial coding and a focused coding process. Each comment was coded with a classifying label that assigned meaning to individual pieces of information within the token. The first pass through the data yielded 46 different codes. For example, after reading the sentence, "My opinion is that we should either offer a level 6 grammar class or reevaluate the standards by which students test out of EAP 1560," the comment was initially coded "Advice."
However, teachers made a variety of comments giving advice; therefore, on recursive passes through the data, this comment was given the focused code "Advice on courses." After initial coding, the 46 original codes were reviewed in recursive passes through the data in an attempt to eliminate less useful codes, combine smaller categories into larger ones, and subdivide larger categories into more meaningful parts. Results are reported in Chapter 4.

The first study used data from Valencia's Office of Institutional Research; complete LOEP placement test scores and final course grade information for the 1,030 students who attended Valencia over the previous three years were entered into SPSS. Final course grades in each of the skill courses (e.g., Reading) at each of the proficiency levels (EAP levels 2-6) were used as the dependent variable. Final course grades were weighted as follows: A = 4, B = 3, C = 2, D = 1, and F = 0. Withdrawals (W, WF, and WP) were not used. Separate regression analyses were conducted for each course using each of the three competing models as a predictor variable. When two models were found to significantly predict final course grades, an F-test was conducted to compare the regression models.

The second study used current student/teacher data to run similar analyses and check the predictive abilities of the three competing models. The current data also allowed these analyses to be run using both final course grades as a dependent variable and teacher evaluation of placement as a second dependent variable. In addition, two new variables were analyzed for their predictive abilities: Locus of Control and computed Generation 1.5 status.
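Because the equally weighted composite is a special case of the model that enters the four subtests individually (it constrains their coefficients to be equal), the two models are nested and can be compared with an F-test. The following Python sketch, with hypothetical column names and simulated data standing in for the SPSS analyses described above, illustrates the comparison; it is an illustration of the procedure, not the analysis actually run for this study.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in data; column names are illustrative, not Valencia's.
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame(rng.normal(90, 10, size=(n, 4)),
                      columns=["lorc", "losm", "lolu", "essay"])
    # Final course grade weighted A=4 ... F=0; withdrawals (W, WF, WP) excluded.
    df["grade"] = rng.integers(0, 5, size=n)

    # Restricted model: one predictor, the equally weighted composite.
    df["composite"] = df[["lorc", "losm", "lolu", "essay"]].mean(axis=1)
    restricted = smf.ols("grade ~ composite", data=df).fit()

    # Full model: the four LOEP subtests entered as individual predictors.
    full = smf.ols("grade ~ lorc + losm + lolu + essay", data=df).fit()

    # Nested-model F-test: does freeing the subtest weights improve prediction?
    f_stat, p_value, df_diff = full.compare_f_test(restricted)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}, df diff = {df_diff:.0f}")

In this framing, a significant F favors the individual-subtest model; a nonsignificant F says the simpler composite predicts about as well.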
As discussed earlier, an attempt was made to validate a survey measure of the computed Generation 1.5 variable through correlation with the teacher judgment of the construct. The computed variable of Generation 1.5 status was found to be moderately correlated with teacher identification of Generation 1.5 status, r = .40, p < .001.

In addition, to investigate whether students who were computed as Generation 1.5 differed from students not computed as Generation 1.5 in their ratings by professors as members of Generation 1.5, a chi-square statistic was used. Results indicated that students computed to be members of Generation 1.5 were significantly different from non-members when rated as Generation 1.5 by instructors, chi-square(1, N = 470) = 75.12, p < .001. Students rated as Generation 1.5 in the computed model were more likely than expected under the null hypothesis to be rated as Generation 1.5 by professors. Phi, which indicates the strength of the association between the two variables, is .40; thus, the effect size is considered medium to large according to Cohen (1988).

The methodology for the second study was similar to the first: separate regression analyses were conducted for each course using each of the three competing models as a predictor variable and final course grades as the outcome variable. When two models were found to significantly predict final course grades, an F-test was conducted to compare the regression models. The same procedures were used with teacher evaluation of placement as the outcome variable. Finally, the two new variables (Locus of Control and Generation 1.5) were added to the prediction models to test their predictive values. Results are reported in Chapter 4.
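As an illustration, chi-square and phi statistics of this kind can be computed from the 2x2 cross-tabulation of computed Generation 1.5 status against teacher identification. The cell counts in this Python sketch are hypothetical; only the total N of 470 matches the study.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: computed Generation 1.5 status (no, yes).
    # Columns: teacher rating (not Generation 1.5, Generation 1.5).
    # Hypothetical cell counts summing to the study's N of 470.
    table = np.array([[260, 40],
                      [60, 110]])

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    n = table.sum()
    phi = np.sqrt(chi2 / n)  # effect-size measure for a 2x2 table
    print(f"chi2({dof}, N = {n}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")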
CHAPTER 4
RESULTS

Survey Results

Because only limited demographic data could be gathered from student records, more specific demographic data were gathered through surveys to inform readers about the level of generalizability of results based on Valencia's population. Other questions on the student surveys sought answers about academic history, language use, technical knowledge, and Locus of Control. This section reports the summarized results of survey responses to these types of questions. A complete list of results for the student and teacher surveys can be found in Appendix C.

Survey respondents spoke 37 different languages and came from 67 countries. The top five languages spoken at Valencia were Spanish, Creole, Arabic, Portuguese, and French. The top five countries of origin were Colombia, Haiti, Puerto Rico, Morocco, and Peru. On all three campuses, nearly half of the students surveyed were from Colombia, Puerto Rico, or Haiti. In terms of major differences, however, most of the Haitian students attended the West Campus; in fact, 28.85% of the West Campus population was Haitian, as opposed to 4.63% on the East Campus and 4.92% on the Osceola Campus.

In terms of gender, there were slight differences between campuses, but school-wide, 59.5% of the respondents were female; the national average is 59% (AACC, 2007). Of the students surveyed, 89% were in their first, second, or third semester at Valencia. The ages of survey respondents ranged from 17 to 59, with a mean of 26.86, a median of 23, and two modes, 19 and 21. More than 60% of respondents were below the age of 26.

The majority of survey respondents had been in the U.S. for five or fewer years, with the mean number of years in the U.S. being 6.1 and the median and mode being five and three, respectively.
Fifty-nine percent of survey respondents entered the U.S. K-20 system at the college level; only 41% reported having attended U.S. K-12 schooling. Appendix C also contains information on year of graduation or GED completion.

In addition to demographic questions, students were also asked questions about academic history, language use, technical knowledge, and Locus of Control. When asked, "Are you the first person in your family to go to college?", 72.26% reported that they were not, with 50.51% of those reporting that their siblings or cousins had gone to college and 41.02% reporting that their parents or their parents' siblings had gone to college. When asked, "Are you the first person in your family to go to college in the U.S.?", 62.5% reported that they were. Of those who were not the first in their families to go to college in the U.S., 82.31% reported that someone from within the same generation, i.e., a sibling or cousin, had been the first to go to college in the U.S. Of survey respondents, 39.22% reported having gone to college outside the U.S.; many of these respondents also reported having spent more than two years in colleges outside the U.S.

In addition to prior educational experience, students were asked questions about how often and how well they used English. Most students rated their abilities to write papers and do research in English as average. The majority reported that, among the friends and peers with whom they did things every week, only a few were native English speakers, but most reported using English most of the time to speak with their friends. The majority also reported that their families did not often use English in the home. In response to the two questions about computers, 96% of respondents reported having a computer at home, and the majority rated their abilities to use the computer as above average or expert.

Finally, the complete descriptive statistics for student results on the Trice Locus of Control Index can be found in Appendix C. For the sample of 380 EAP students, scores ranged from 1 to 20, with a mean score of 8.83 (SD = 3.44).
Trice's original study (1985) looked at two sample populations: 107 sophomore and junior teacher education majors, with a mean score of 12.46 (SD = 4.32), and 82 freshman general psychology students, with a mean score of 13.22 (SD = 4.92).

Question 1

1. What are the student and teacher beliefs about placement at Valencia?

Student opinions

This section reports the results of analyses of student survey data eliciting students' opinions on how well Valencia is doing at placing them into the courses they need. When students were asked about their beliefs on placement, 21% felt that they had been misplaced.

Teacher opinions

In the teacher surveys, teachers were first asked to select the word that best describes Valencia's accuracy at placing students into the EAP courses they require. The choices were: (1) Poor, (2) Below Average, (3) Average, (4) Above Average, (5) Excellent. The general consensus was that Valencia does an average to above-average job of placing students. Fourteen of the 19 teachers surveyed described Valencia's accuracy as "Average"; the remaining five described it as "Above Average." Survey responses had a mean of 3.26 and a median of 3.

Teachers were then asked to comment on how often they have students in their EAP classes whom they feel might be better placed in a different level. The response options were: (1) Never, (2) Rarely, (3) Sometimes, (4) Often, (5) Every Semester. Fourteen of the 19 respondents noted that students were "Sometimes" misplaced. One instructor responded that students were "Often" misplaced, and the remaining four responded that students are misplaced "Every Semester." Survey responses showed a mean of 3.47 and a median of 3.
Teachers were also asked to comment on how many of their students they felt should have been placed in a different level during the semester in which the surveys were conducted. The choices were: (1) None, (2) A few, (3) Several, (4) Most, (5) All. Two respondents indicated that "None" of their students should have been placed differently. Thirteen respondents indicated "A few," three indicated "Several," and one indicated "Most." Survey responses showed a mean of 2.16 and a median of 2.

Analysis of open-ended teacher responses

Finally, teachers were asked to provide any comments that qualified or explained their responses. Of the 19 participating teachers, 17 chose to add comments to their surveys. The coding of those comments produced 112 statements classified into eight major categories, each containing between one and five subcategories. The eight major categories were: 1) comments giving advice, which comprised 19.64% of the useful tokens; 2) comments about speech courses, 15.18%; 3) general comments on misplacement, 14.29%; 4) comments about Generation 1.5 students, 13.39%; 5) comments on teaching problems, 11.61%; 6) comments on the LOEP, 8.93%; 7) comments about general placement procedures, 8.93%; and 8) other placement comments, 8.04%.

Advice

The advice category comprised teachers giving advice on courses, teaching, placement, and pre/co-requisites. In giving advice on courses, teachers commented on the possibility of creating new courses: "My opinion is that we should either offer a level 6 grammar class or reevaluate the standards by which students test out of EAP 1560." Others wanted to combine EAP 1560 with EAP 1540. Still others noted that courses should be made optional: "Make EAP 1500 optional and keep only EAP 300 and 400." Advice on teaching yielded a few comments on using the labs to help students catch up and on using existing Prep English resources.
Advice on placement included ideas such as placing by skills rather than by levels and using specific parts of the LOEP to help make decisions about placement into specific courses: "If students score into level 6 but have a weak U [LOLU] score on the LOEP, 1560 could be part of their mandate." Some teachers commented on the need for trained counselors to make decisions about placement, while one teacher suggested adding a different type of test altogether: "Where I taught in Oxford they used a 1-page CLOZE test as the only placement tool." There were only two contributors to comments about pre/co-requisites, but one teacher felt quite strongly about creating a gate at level 5: "Students should stay in level 5 until language basics are mastered." Another teacher commented on limiting movement from campus to campus while taking prerequisites for classes: "It is OK to take 0340 and 0360 on West, pass the classes, and then take 0440 and 0460 on East, but it is not okay to take 0360 on East and 0340 on West." Furthermore, "They [the students] should not be allowed to skip a couple of semesters and not take the prerequisite."

Speech courses

The speech category revealed a variety of problems with placement into speech courses within the program. The instructor quote that best summarized this issue was, "Speech is where all battles begin!" Another respondent said, "I believe misplacement often happens in speech classes." Comments were also made revealing that speech may not be the only area with problems: "I have had A students in Speech that have not passed Reading and vice versa." One instructor posited that the reason for these troubles is that "There is no Speech component to the LOEP, and although we have started to read LOEP essays, I don't think the process mirrors the intensity of the curricula." If this is the case, it could explain why another instructor said, "There are at least 3 people in my classes now that would have done OK in the 1500 level (a higher level)." However, students are not always misplaced into classes that are too low for them.
Another teacher commented, "For my speaking class there is one girl that could use a lower level in speaking, but I had this girl in grammar 360 last semester, and she was one of the best." Another instructor said, "In the past, I have had students in EAP 1500 who I could barely understand. Then on the opposite side, I have had students in the EAP 300 level that could have done OK in a higher level." Some teachers believed that speech may not be necessary for all students: "It seems that if academic speech is all a student needs, they would be better placed in a regular speech class or a prep speech class. I have quite a few students in 1500 who have no accent and who could make it in a regular college speech class." Others placed blame on the difficulty of giving diagnostics in speech courses ("It's difficult to evaluate speech on the first day of class like other subjects"), while another found that even existing diagnostics are not working: "I gave a diagnostic exam in 1500, and the results showed that no students should have been placed higher. However, regarding the oral production of some students, I think they've been misplaced." Some instructors had issues with the course curriculum: "A few of my 1500 (and 400) students don't need pronunciation work, and a very few of them (not this term) don't even need listening work." Another commented, "The linguistics section is not relevant to them." Finally, not all comments about speech courses were negative: "they ALL need to learn how to produce an academic speech," and "They do benefit by learning to take notes on the lectures and they learn the components of a good speech."

Misplacement

Teachers had a variety of things to say about the misplacement of students into their courses: "I have a number of students in my current classes whom I feel should have been placed in a different level," and "About 25% to 30% of my students this semester would have benefited more from another level." Another teacher went on to say, "Last semester I had a handful of students who probably should have been in a lower level course than they were in. As a result, at least partially, they struggled through the courses (and did not pass them)."
Some teachers had clear beliefs about the misplacement phenomenon: "I believe that students who are misplaced are more frequently under-prepared than over-prepared. That is, most misplaced students belong in a lower level, not higher." This idea was supported by others: "Sometimes we get students whom we feel should have been placed in a lower level. I had some in level 3 who really belonged to level 2, but we didn't have level 2, so they were placed in level 3." However, this was not always the case: "Sometimes we get students that just seem way beyond the level of EAP," and "The other one, I didn't know why he was in third level. His abilities seemed higher." One teacher noted that it is not placement that is the problem: "... times out of 10 they have come to us by being promoted through the levels." Another teacher suggested why misplacement leads to problems: "There are some students who substantially lack a high enough proficiency level to even understand instructions or a particular task. Language comprehension gets in the way." And while one teacher mentioned what could be considered obvious issues with misplacement ("What I have noticed is that some are placed in this level because of poor oral skills and others because of poor writing skills"), another made comments one wouldn't expect: "(We shouldn't be) allowing students to exit level 5 without passing the final exams." Another felt that "The students who believe that they themselves are misplaced are often the students who aren't," while another pointed out why we may not hear about student perspectives regarding misplacement: "I have found that students who are misplaced are often gracious and don't complain about the placement."

Generation 1.5

Generation 1.5 also yielded a healthy percentage of comments from teachers: "Once again, the problem arises with 1.5s; all other students are placed right." Some commented on how Generation 1.5 students felt ("They (1.5s) were confused why they were in EAP"), while others commented on the reasons these students needed to be in the courses:
"Writing, it's tough. They (1.5s) think they don't belong to their assigned levels because they were good at it in high school, but then they can't pass or barely pass the class. I bet half of the class thinks they should be moved to the next level, and again the problem is with 1.5s: fluent, American accent, good vocabulary, but no structure: can't make complete sentences, most verbs are missing, etc." Others made comments about why Generation 1.5 students were in EAP courses: "As [there is no] listening component to the LOEP, Generation 1.5 students are placed into 1500 especially often[, usually] not necessarily needing the course." Some teachers asked questions while others made recommendations for how to deal with this population: "Should there be separate classes for 1.5ers?" "Some teachers think they are bored or misplaced and should be moved to a higher level." "Maybe we should have 1.5s in level 5, 6 and send them straight to prep classes." "Combine 1560 with 1540. It might be more meaningful for 1.5s." There was, however, consensus that teaching Generation 1.5 students had its difficulties: "They are shocked the way we teach structure directly and sometimes they struggle with the method more than grammar itself." "Most 1.5s are bored in grammar classes but rarely do they improve their grammar skills." "Level 5 is the hardest of all to teach for us and to take for them: They are bored and we can't (or it's hard to) improve their speech skills." "I don't see much progress with 1.5s in EAP 1500."

Teaching problems

When it came to problems with teaching, one instructor noted, "I think my greatest difficulty as a teacher is to teach the necessary skills in a way that reaches all of the students." Another expressed her belief about what happens when students fail to connect with her or the content: "There are 2 people in particular I can see getting bored." Others didn't see the content as the issue. One teacher responded that it is "easy to cover material but not so easy to diagnose why different groups of students don't understand it, and how to reach the different groups."
Another faculty member added to the difficulties of teaching students with a wide range of ability levels: "It is uncomfortable, and sometimes embarrassing, to have completely native [English] sounding students in classes. I don't feel I am necessarily meeting their needs." A different teacher felt that the problems may stem from the diversity in students' preparedness and needs: "The problem I have with writing is high school grads know essay organization but have problems with grammar and mechanics. On the other hand, other students aren't familiar with any organization or mechanics or sentence structure. It's hard to balance between two groups!" While one teacher commented on how EAP students are simply a difficult population to teach ("Fossilization of mistakes is a major problem in adult ed"), another suggested that the students simply didn't care: "When I point out grammar errors to them, I could be speaking Dutch as far as they are concerned." Some teachers believed that what could be considered problems of placement were actually problems with the inability to move people: "(At my other school) it was easier to move people around during the first couple of weeks of term. The problem here is more that [sic], once misplacements are identified it's hard to change it, especially if that student passed the lower level and got promoted." This instructor went on to say, "Last year I had a 1540 who wrote not only like a NS [Native Speaker] but like a very good writer who was a NS. However, I couldn't get her exempted from 1640 because she bombed the state exit multiple choice test."

LOEP

In terms of LOEP placement, one instructor revealed a lack of knowledge about the test and how students are placed into the program: "I have never seen the instrument that was used or is used to place them." Another instructor had quite a bit to say about the problems of using the LOEP for placement:
First of all, I think the LOEP and the curricula are out of line [alignment]. In addition, the writing given at placement is too short to be of consistent value. Holistic training is haphazard college-wide. Inter-rater reliability is not consistently conducted. The absence of all of these controls weakens the use of the LOEP essays. In terms of reading, I don't know that the LOEP accurately reflects the type of skills being taught at the upper levels of EAP courses. In this case, I think the placement test is more rigorous than the exit tests.

Other instructors gave different reasons for the problems with the LOEP and placement. Some reasons are averaging the LOEP scores, "which can cause students to place too high in one area or too low in another." Another appreciated that LOEP essays were once again being read: "Anecdotally anyway, students seem to be better placed since we began reading LOEP essays. However, because we don't have placement by skill, some students are still in classes they may not actually need."

Placement in general

Not all of the comments about Valencia's placement of students were negative; some comments revealed that there are students being properly placed: "I believe that most students are well placed within the EAP program." In this instructor's opinion, the reason for this is, "The testing instruments do an excellent job, and the readers help to confirm the placement." Another instructor also commented about the importance of reading a sample of student work: "For the one gentleman, as soon as I saw his writing, I new [sic] he was placed right." In one teacher's eyes, Valencia is comparable to other schools: "I don't think we're doing a better or worse job of placement than anywhere else." However, this instructor did go on to reveal that "I don't feel that any of the students are placed too low even if they are more advanced than the other students in the class. They still have to make adjustments in their knowledge." Another teacher qualified his positive comments about placement: "If this had been done last semester, it would have been much easier to answer question 3 (the number of students that should have been placed in a different level); at least at the present my students seem to be in the right place." He went on to
say, "In reference to question #3, I was, at first, concerned of [sic] a few of my students. However, after careful consideration, I realized that their achieving 75s hardly constitutes struggling through the courses. I believe my students are in the right classes this time around."

Other placement comments

The last category comprised comments about students and placement across language skills. For example, students might "write like a NS but they did badly on reading, or vice versa," or "they speak like a NS but can't write." When it came to writing, one teacher commented, "Overall, most of my EAP 1640 students seem to be in the right place for their current skill level. However, there always seems to be one or two students who are much below the required skill level." Others suggested exactly what those missing skills might be: "For example, several students in 1640 have never attended a grammar class." Another teacher showed agreement with this by stating, "I have, however, noticed that some of my level 6 students who have placed directly into level 6 lack the grammar and sentence structure skills necessary to be successful in Advanced Comp. for nonnative speakers." In terms of placement into grammar courses, however, there was an altogether different take: "Grammar: I never had grammar students misplaced." In reading, teachers made the following comments: "In EAP 1520, even though many speak well, none of them are reading totally on grade level. I consider they all need 1520." "For reading, most of the people can benefit from that class." Finally, one instructor mentioned how her belief that no students were misplaced was later disproved by students exempting the next level: "Reading: I haven't had any students misplaced; again, I had some who passed the exemption test at the end of level 5."
Questions 2-5

This section reports the results of the two studies designed to answer the remaining research questions.

2. Which of the following three approaches best predicts student success as measured by final course grades and teacher evaluation of placement: 1) Averaging the three objective LOEP subtests, 2) Using an equally weighted average of both the objectively and subjectively scored LOEP subtests, or 3) Using the four LOEP subtests as individual predictors?

3. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by final course grades?

4. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of placement in EAP courses as measured by final course grades and teacher evaluation of placement?

Study 1

The first study used an existing database of 1,030 first-time EAP students from the past three years. Multiple regressions were conducted to compare the abilities of three competing models at predicting success in EAP courses as measured by final course grades. The first model used only the average of the three objectively scored LOEP subtests: Reading (LORC), Sentence Meaning (LOSM), and Language Use (LOLU). The second model used a composite score computed by averaging the first model with the LOEP Essay score (LOES). The third model used the four individual LOEP subtest scores as independent variables. A sketch of how such models can be constructed appears below; the means, standard deviations, and intercorrelations can be found in Table 4-1.
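To make the construction of the three competing models concrete, the following minimal sketch in Python (pandas and statsmodels) illustrates the approach. It is an added illustration rather than the code used in this study; the file name and the column names (eap_students.csv, GRADE for final course grade) are assumptions, and in the study such regressions were run separately for each of the 15 EAP courses.

import pandas as pd
import statsmodels.api as sm

# Hypothetical extract of the student database; column names are assumed.
df = pd.read_csv("eap_students.csv")

# Model 1: simple average of the three objectively scored subtests.
df["LOEPAVG"] = df[["LORC", "LOSM", "LOLU"]].mean(axis=1)
# Model 2: Model 1 averaged, with equal weight, with the essay score.
df["LPAVGWE"] = (df["LOEPAVG"] + df["LOES"]) / 2

models = {"Model 1": ["LOEPAVG"],
          "Model 2": ["LPAVGWE"],
          "Model 3": ["LOLU", "LORC", "LOSM", "LOES"]}

# The whole sample is pooled here purely for illustration.
for name, predictors in models.items():
    X = sm.add_constant(df[predictors])
    result = sm.OLS(df["GRADE"], X, missing="drop").fit()
    print(name, "R2 = %.3f" % result.rsquared, "p = %.3f" % result.f_pvalue)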
Table 4-1. Means, standard deviations, and correlations for final course grades and predictor variables

Level/Skill  Variable  N     M     SD    1      2      3      4      5      6
2 Combined   EAP0281   66    2.91  .63   .23    .21    .24    .04    .16   -.02
3 Speech     EAP0300   96    3.42  .64   .24*   .12    .10    .31**  .09   -.16
3 Reading    EAP0320   95    2.91  .90   .16    .03   -.03    .36** -.03   -.14
3 Writing    EAP0340   74    2.82  .83  -.14   -.09   -.03    .05   -.34**  .08
3 Grammar    EAP0360   81    2.79  .90   .03   -.18    .14    .04   -.13   -.19
4 Speech     EAP0400   179   3.07  .94   .32**  .11    .17*   .35**  .08   -.21**
4 Reading    EAP0420   184   2.56  .95   .19*  -.03    .06    .22**  .09   -.21**
4 Writing    EAP0440   157   2.44  .92   .18*   .08    .12    .17*   .07   -.12
4 Grammar    EAP0460   157   2.54  .94   .13    .08    .10    .11    .05   -.07
5 Speech     EAP1500   226   3.08  .86   .16*   .07    .05    .16**  .12   -.08
5 Reading    EAP1520   237   2.56  .98   .06   -.10   -.13    .25** -.02   -.15*
5 Writing    EAP1540   194   2.66  .99   .02   -.07    .04    .05   -.07   -.09
5 Grammar    EAP1560   202   2.61  .94   .11   -.10    .06    .14*   .01   -.19**
6 Reading    EAP1620   213   2.60  1.11  .20**  .04    .06    .24**  .07   -.09
6 Writing    EAP1640   179   2.66  1.01  .07    .21**  .17*   .07   -.09    .16*

Predictor variables
1. LOEPAVG   1030  95.91  14.32         .89**  .91**  .85**  .90**  .52**
2. LPAVGWE   1030  96.15  11.78                .83**  .72**  .81**  .86**
3. LOLU      1030  92.41  17.47                       .63**  .77**  .52**
4. LORC      1030  95.52  15.94                              .65**  .36**
5. LOSM      1030  99.80  14.95                                     .49**
6. LOES      1030  96.39  12.66

**Correlation is significant at the 0.01 level (2-tailed). *Correlation is significant at the 0.05 level (2-tailed).

Because individual regressions needed to be conducted for each model in each of the 15 courses in the EAP program, the process of presenting the regression results is somewhat lengthy. Table 4-2 presents a summary of the performance of the three competing models.

For EAP 0281 (Combined skills at level 2), none of the competing models was significantly able to predict success as measured by final course grade: Model 1 (average without essay) R² = .05; F(1,64) = 3.58, p = .06; Model 2 (average with essay) R² = .05; F(1,64) = 3.07, p = .08; Model 3 (individual subtests) R² = .09; F(4,61) = 1.49, p = .22.
Table 4-2. Summary performance of all competing models in study 1

Lvl  Skill     Course    Significant models (X* = preferred)   Subtests of Model 3 significantly contributing
2    Combined  EAP0281   -                                     -
3    Speech    EAP0300   Model 1*, Model 3                     LORC
3    Reading   EAP0320   Model 3                               LORC
3    Writing   EAP0340   Model 3                               LOSM
3    Grammar   EAP0360   -                                     -
4    Speech    EAP0400   Model 1, Model 3*                     LORC
4    Reading   EAP0420   Model 1*, Model 3                     -
4    Writing   EAP0440   Model 1                               -
4    Grammar   EAP0460   -                                     -
5    Speech    EAP1500   Model 1                               -
5    Reading   EAP1520   Model 3                               LOLU, LORC
5    Writing   EAP1540   -                                     -
5    Grammar   EAP1560   -                                     -
6    Reading   EAP1620   Model 1*, Model 3                     LORC
6    Writing   EAP1640   Model 2, Model 3*                     LOLU, LOSM, LOES

*Indicates the preferred model when two or more models significantly predicted success.

For EAP 0300 (Speech at level 3), Models 1 and 3 significantly predicted success in the course as measured by final course grade: F(1,94) = 5.65, p = .02 for Model 1 (average without essay) and F(4,91) = 2.55, p = .045 for Model 3 (individual subtests). However, in the third model, LOEP Reading was the only variable significantly contributing to the prediction. Model performance and beta weights for the models are presented in Table 4-3. The R-squared values for the significant competing models are .057 and .101, respectively, for Models 1 and 3. This indicates that 5.7% and 10.1%, respectively, of the variance in final course grades in EAP 0300 was explained by the models. According to Cohen (1988), this is a small effect for both models. The adjusted R-squared values are .047 and .061, respectively, for Models 1 and 3. An F-test was used to test whether the reduced model, Model 1, performed as well as the full model, Model 3. Because the R²-change was not significant, F(3,91) = 1.48, p = .224, it is assumed that the reduced model performed as well as the full model.
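For clarity, the R²-change comparison above follows the standard F-test for nested regression models. The worked computation below is added here as an illustration (it is not part of the original analysis); it reproduces the reported EAP 0300 value using N = 96 students, with k = 4 predictors in the full model and k = 1 in the reduced model:

F = \frac{(R^2_{full} - R^2_{reduced})/(k_{full} - k_{reduced})}{(1 - R^2_{full})/(N - k_{full} - 1)}
  = \frac{(.101 - .057)/(4 - 1)}{(1 - .101)/(96 - 4 - 1)} \approx 1.48, \qquad df = (3, 91)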
Table 4-3. EAP 0300: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .016   .007   .238*
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .018   .017   .107
3) Individual LOEP Test Scores                      LOLU        .003   .005   .053
                                                    LORC        .014   .005   .309**
                                                    LOSM       -.002   .006  -.035
                                                    LOES       -.000   .010  -.012
Model 1 R² = .057; F(1,94) = 5.65, p = .020*
Model 2 R² = .011; F(1,94) = 1.09, p = .299
Model 3 R² = .101; F(4,91) = 2.55, p = .045*
*p < .05; **p < .01

For EAP 0320 (Reading at level 3), only Model 3 significantly predicted success in the course as measured by final course grade, F(4,90) = 3.90, p = .006. However, in Model 3, LOEP Reading was the only variable significantly contributing to the prediction. Model performance and beta weights for the models are presented in Table 4-4. The R-squared value for Model 3 is .148. This indicates that 14.8% of the variance in final course grades in EAP 0320 was explained by Model 3. According to Cohen (1988), this is a medium effect. The adjusted R-squared value for Model 3 was .110 (a worked computation of this adjustment follows Table 4-4).

Table 4-4. EAP 0320: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .015   .010   .155
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .007   .025   .030
3) Individual LOEP Test Scores                      LOLU       -.004   .008  -.052
                                                    LORC        .025   .007   .381**
                                                    LOSM       -.009   .009  -.126
                                                    LOES       -.004   .013  -.039
Model 1 R² = .024; F(1,93) = 2.30, p = .133
Model 2 R² = .001; F(1,93) = .085, p = .771
Model 3 R² = .148; F(4,90) = 3.90, p = .006**
*p < .05; **p < .01
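The adjusted R-squared values reported throughout this chapter reflect the usual small-sample correction for the number of predictors. As an added illustration (not part of the original text), the EAP 0320 value for Model 3 (N = 95, k = 4) can be reproduced as:

R^2_{adj} = 1 - (1 - R^2)\,\frac{N - 1}{N - k - 1} = 1 - (1 - .148)\,\frac{95 - 1}{95 - 4 - 1} \approx .110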
For EAP 0340 (Writing at level 3), only Model 3 significantly predicted success in the course as measured by final course grade, F(4,69) = 2.89, p = .028. However, in Model 3, LOEP Sentence Meaning was the only variable significantly contributing to the prediction. Model performance and beta weights for the models are presented in Table 4-5. The R-squared value for Model 3 is .144. This indicates that 14.4% of the variance in final course grades in EAP 0340 was explained by Model 3. According to Cohen (1988), this is a small effect. The adjusted R-squared value for Model 3 was .094.

Table 4-5. EAP 0340: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG    -.013   .011  -.142
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE    -.021   .029  -.088
3) Individual LOEP Test Scores                      LOLU        .002   .009   .028
                                                    LORC        .008   .007   .132
                                                    LOSM       -.028   .009  -.424**
                                                    LOES       -.007   .015  -.076
Model 1 R² = .020; F(1,72) = 1.48, p = .228
Model 2 R² = .008; F(1,72) = .558, p = .457
Model 3 R² = .144; F(4,69) = 2.89, p = .028*
*p < .05; **p < .01

For EAP 0360 (Grammar at level 3), none of the competing models was significantly able to predict success as measured by final course grade: Model 1 R² = .00; F(1,79) = .05, p = .82; Model 2 R² = .03; F(1,79) = 2.50, p = .12; Model 3 R² = .11; F(4,76) = 2.34, p = .063.

For EAP 0400 (Speech at level 4), Models 1 and 3 significantly predicted success in the course as measured by final course grade: F(1,177) = 19.91, p < .001 for Model 1 and F(4,174) = 7.37, p < .001 for Model 3. However, in Model 3, LOEP Reading was the only variable significantly contributing to the prediction. The beta weights for the models are presented in Table 4-6. The R-squared values for the competing models are .101 and .145, respectively, for
Models 1 and 3. This indicates that 10.1% and 14.5%, respectively, of the variance in final course grades in EAP 0400 was explained by the models. According to Cohen (1988), this is a small effect for Model 1 and a medium effect for Model 3. The adjusted R-squared values for the two models were .096 and .125, respectively, for Models 1 and 3. An F-test was used to test whether the reduced model, Model 1, performed as well as the full model, Model 3. Model 1 did not perform as well as Model 3; the reduced model had a significantly lower R², F(3,174) = 2.98, p = .03.

Table 4-6. EAP 0400: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .039   .009   .318**
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .026   .019   .107
3) Individual LOEP Test Scores                      LOLU        .012   .007   .146
                                                    LORC        .023   .005   .326**
                                                    LOSM       -.002   .008  -.025
                                                    LOES       -.004   .010  -.035
Model 1 R² = .101; F(1,177) = 19.91, p < .001**
Model 2 R² = .011; F(1,177) = 2.03, p = .156
Model 3 R² = .145; F(4,174) = 7.37, p < .001**
*p < .05; **p < .01

For EAP 0420 (Reading at level 4), Models 1 and 3 significantly predicted success in the course as measured by final course grade: F(1,182) = 6.63, p = .01 for Model 1 and F(4,179) = 3.08, p = .02 for Model 3. However, in Model 3, none of the variables was shown to contribute significantly to the prediction. The beta weights for the models are presented in Table 4-7. The R-squared values for the competing models are .035 and .064, respectively, for Models 1 and 3. This indicates that 3.5% and 6.4%, respectively, of the variance in final course grades in EAP 0420 was explained by the models. According to Cohen (1988), this is a small effect for both models. The adjusted R-squared values were .030 and .043, respectively, for Models 1 and 3. An F-test was used to test whether the reduced model, Model 1, performed as well as
the full model, Model 3. Because the R²-change was not significant, F(3,179) = 1.85, p = .14, it is assumed that the reduced model performed as well as the full model.

Table 4-7. EAP 0420: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .023   .009   .188*
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE    -.007   .019  -.027
3) Individual LOEP Test Scores                      LOLU       -.001   .007  -.015
                                                    LORC        .011   .006   .154
                                                    LOSM        .002   .008   .022
                                                    LOES       -.017   .011  -.141
Model 1 R² = .035; F(1,182) = 6.63, p = .011*
Model 2 R² = .001; F(1,182) = .132, p = .716
Model 3 R² = .064; F(4,179) = 3.08, p = .018*
*p < .05; **p < .01

For EAP 0440 (Writing at level 4), only Model 1 significantly predicted success in the course as measured by final course grade, F(1,155) = 5.14, p = .03. Model performance and beta weights for the models are presented in Table 4-8. The R-squared value for Model 1 is .032. This indicates that 3.2% of the variance in final course grades in EAP 0440 was explained by Model 1. According to Cohen (1988), this is a small effect. The adjusted R-squared value for Model 1 was .026.

Table 4-8. EAP 0440: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .022   .009   .179*
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .022   .023   .077
3) Individual LOEP Test Scores                      LOLU        .009   .007   .116
                                                    LORC        .013   .007   .174
                                                    LOSM       -.000   .008  -.005
                                                    LOES        .002   .013   .013
Model 1 R² = .032; F(1,155) = 5.14, p = .025*
Model 2 R² = .006; F(1,155) = .931, p = .336
Model 3 R² = .042; F(4,152) = 1.66, p = .163
*p < .05; **p < .01

For EAP 0460 (Grammar at level 4), none of the competing models was significantly able to predict success as measured by final course grade: Model 1 R² = .02; F(1,155) = 2.67, p = .10; Model 2 R² = .01; F(1,155) = 1.00, p = .32; Model 3 R² = .02; F(4,152) = .86, p = .49.

For EAP 1500 (Speech at level 5), only Model 1 significantly predicted success in the course as measured by final course grade, F(1,224) = 5.76, p = .02. Model performance and beta weights for the models are presented in Table 4-9. The R-squared value for Model 1 is .025. This indicates that 2.5% of the variance in final course grades in EAP 1500 was explained by Model 1. According to Cohen (1988), this is a small effect. The adjusted R-squared value for Model 1 was .021.

Table 4-9. EAP 1500: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .021   .009   .158*
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .017   .017   .067
3) Individual LOEP Test Scores                      LOLU        .001   .007   .012
                                                    LORC        .012   .006   .139
                                                    LOSM        .007   .008   .070
                                                    LOES        .000   .009   .002
Model 1 R² = .025; F(1,224) = 5.76, p = .017*
Model 2 R² = .004; F(1,224) = .997, p = .319
Model 3 R² = .031; F(4,221) = 1.75, p = .139
*p < .05; **p < .01

For EAP 1520 (Reading at level 5), only Model 3 significantly predicted success in the course as measured by final course grade, F(4,232) = 6.40, p < .001. However, in Model 3, LOEP Language Use and Reading Comprehension were the only variables significantly contributing to the prediction. Model performance and beta weights for the models are presented in Table 4-10. The R-squared value for Model 3 is .099. This indicates that 9.9% of the variance
in final course grades in EAP 1520 was explained by Model 3. According to Cohen (1988), this is a small effect. The adjusted R-squared value for Model 3 was .084.

Table 4-10. EAP 1520: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .005   .010   .060
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE    -.028   .018  -.102
3) Individual LOEP Test Scores                      LOLU       -.018   .007  -.171*
                                                    LORC        .024   .007   .235**
                                                    LOSM       -.007   .008  -.052
                                                    LOES       -.016   .009  -.122
Model 1 R² = .004; F(1,235) = .835, p = .362
Model 2 R² = .010; F(1,235) = 2.45, p = .119
Model 3 R² = .099; F(4,232) = 6.40, p < .001**
*p < .05; **p < .01

For EAP 1540 (Writing at level 5), none of the competing models was significantly able to predict success as measured by final course grade: Model 1 R² = .00; F(1,192) = .06, p = .81; Model 2 R² = .01; F(1,192) = .94, p = .33; Model 3 R² = .02; F(4,189) = .94, p = .44.

For EAP 1560 (Grammar at level 5), none of the competing models was significantly able to predict success as measured by final course grade: Model 1 R² = .01; F(1,200) = 2.34, p = .13; Model 2 R² = .01; F(1,200) = 2.13, p = .15; Model 3 R² = .05; F(4,197) = 2.33, p = .06.

For EAP 1620 (Reading at level 6), Models 1 and 3 significantly predicted success in the course as measured by final course grade: F(1,211) = 8.34, p = .004 for Model 1 and F(4,208) = 3.28, p = .01 for Model 3. However, in Model 3, only LOEP Reading Comprehension significantly contributed to the prediction. Model performance and beta weights for the models are presented in Table 4-11. The R-squared values for the competing models are .038 and .059, respectively, for Models 1 and 3. This indicates that 3.8% and 5.9%, respectively, of the variance in final course grades in EAP 1620 was explained by the models. According to Cohen (1988),
this is a small effect for both models. The adjusted R-squared values were .033 and .041, respectively, for Models 1 and 3. An F-test was used to test whether the reduced model, Model 1, performed as well as the full model, Model 3. Because the R²-change was not significant, F(3,208) = 1.55, p = .20, it is assumed that the reduced model performed as well as the full model.

Table 4-11. EAP 1620: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .045   .016   .195**
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .013   .022   .043
3) Individual LOEP Test Scores                      LOLU        .005   .012   .030
                                                    LORC        .031   .010   .226**
                                                    LOSM        .004   .012   .023
                                                    LOES       -.004   .011  -.022
Model 1 R² = .038; F(1,211) = 8.34, p = .004**
Model 2 R² = .002; F(1,211) = .398, p = .529
Model 3 R² = .059; F(4,208) = 3.28, p = .012*
*p < .05; **p < .01

For EAP 1640 (Writing at level 6), Models 2 and 3 significantly predicted success in the course as measured by final course grade: F(1,177) = 7.99, p = .005 for Model 2 and F(4,174) = 4.84, p = .001 for Model 3. However, in Model 3, only LOEP Language Use, Sentence Meaning, and Essay significantly contributed to the prediction. Model performance and beta weights for the models are presented in Table 4-12. The beta weights for Model 3 suggest that LOEP Language Use contributes the most to predicting success, followed by Essay and Sentence Meaning. The R-squared values for the competing models are .043 and .100, respectively, for Models 2 and 3. This indicates that 4.3% and 10.0%, respectively, of the variance in final course grades in EAP 1640 was explained by the models. According to Cohen (1988), this is a small effect for both models. The adjusted R-squared values were .043 and .079, respectively, for
Models 2 and 3. An F-test was used to test whether the reduced model, Model 2, performed as well as the full model, Model 3. Model 2 did not perform as well as Model 3; the reduced model had a significantly lower R², F(3,174) = 3.67, p = .013.

Table 4-12. EAP 1640: Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by final course grades

Model                                               Variable   B      SE B   β
1) Composite LOEP Test Scores                       LOEPAVG     .015   .016   .069
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE     .059   .021   .208**
3) Individual LOEP Test Scores                      LOLU        .042   .013   .259**
                                                    LORC        .015   .010   .115
                                                    LOSM       -.024   .012  -.157*
                                                    LOES        .030   .011   .212**
Model 1 R² = .005; F(1,177) = .847, p = .359
Model 2 R² = .043; F(1,177) = 7.99, p = .005**
Model 3 R² = .100; F(4,174) = 4.84, p = .001**
*p < .05; **p < .01

In the first study, none of the models was consistently able to predict success in all of the EAP courses. Model 3 was able to predict success in the greatest number of courses, followed by Model 1. None of the models was able to predict successful placement in EAP grammar courses. Model 3 was consistently able to predict success in EAP reading courses. Model 1 was consistently able to predict success in EAP speech courses. Finally, Models 1 and 3 were sometimes able to predict success in EAP writing courses, Model 3 being the better of the two at predicting writing. It should be mentioned that none of the models was able to account for more than 15% of the variance in final course grades, with the majority of them accounting for less than 10% of the variance across courses.

Study 2

The second study used all willing EAP students taking courses at Valencia during the Summer A and C terms of 2006. Similar to the first study, multiple regressions were conducted
to compare the abilities of the three competing models at predicting success in EAP courses as measured by final course grades. In the second study, however, the additional outcome variable of teacher evaluation of placement was added. Furthermore, two new variables were analyzed for their predictive abilities: Locus of Control and computed Generation 1.5 status.

It was hoped that this second study could replicate the findings of the first and add to them. Unfortunately, none of the models in the second study was found to be a significant predictor of success as measured by final course grades. Even if the models had been found to significantly predict success, the low number of students in their first semester led to critically low numbers in each course. Although there were originally 470 students surveyed, only 131 of those students were in their first semester. Furthermore, because some of these students failed to take all subtests or because information was missing from the database, only 121 students had complete LOEP subtest scores and final course grades.

In the second study, multiple regressions were also conducted to compare the abilities of the same three models at predicting success in EAP courses as measured by teacher evaluation of placement. The means, standard deviations, and correlations can be found in Table 4-13. All three models significantly predicted successful placement as measured by teacher evaluation of placement: F(1,118) = 184.2, p < .001 for Model 1; F(1,118) = 312.4, p < .001 for Model 2; and F(4,115) = 79.5, p < .001 for Model 3. In the third model, all LOEP subtests contributed significantly to the prediction. Model performance and beta weights are presented in Table 4-14. The R-squared values for the competing models are .610, .726, and .734, respectively, for Models 1, 2, and 3. This indicates that 61.0%, 72.6%, and 73.4%, respectively, of the variance in teacher evaluation of placement was explained by the models. According to Cohen (1988), this is a large effect for all models. Because the first two models were not nested, an F-test could not be
conducted; however, the R-squared values indicated that Model 2 accounted for 11.6% more of the variance and was therefore the better model. Two F-tests were conducted to test whether the reduced models, Models 1 and 2, performed as well as the full model, Model 3. Model 1 did not perform as well as Model 3, with a significantly lower R², F(3,115) = 17.87, p < .001. However, it can be assumed that Model 2 did perform as well as Model 3 because the R²-change was not significant, F(3,115) = 1.15, p = .33. Given that there was no significant difference between Models 2 and 3, the simpler model was selected as the preferred model.

Table 4-13. Means, standard deviations, and intercorrelations for teacher evaluation of placement and predictor variables for first-semester survey respondents

Variable      N    M      SD     Correlation with TCHRPLC
TCHRPLC       121  4.55    .965   1.0
Predictor variables
1. LOEPAVG    121  94.25  11.38   .781**
2. LPAVGWE    121  95.29   9.06   .852**
3. LOLU       121  90.88  14.89   .693**
4. LORC       121  95.02  13.31   .581**
5. LOSM       121  97.16  12.94   .667**
6. LOES       121  95.54  13.35   .648**
7. LOCSCAL    106   8.67   3.63  -.074
8. GN15CMPT   131   .275   .448   .271**
**Correlation is significant at the 0.01 level (2-tailed). *Correlation is significant at the 0.05 level (2-tailed).

Using LPAVGWE as the preferred model, the additional variables (LOCSCAL, GN15CMPT) were each tested individually to see whether they improved prediction; the procedure is sketched below. Neither LOCSCAL nor GN15CMPT significantly improved prediction.
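A minimal sketch of this incremental test follows. It illustrates the procedure (an R²-change F-test between nested models) rather than reproducing the study's actual code; the file name and column names (summer_2006_survey.csv, TCHRPLC, LPAVGWE, LOCSCAL, GN15CMPT) are assumptions based on the variable labels used in this chapter.

import pandas as pd
import statsmodels.api as sm

survey = pd.read_csv("summer_2006_survey.csv")  # hypothetical survey extract

def r2_change_test(df, outcome, base, added):
    # Fit the reduced (base) and full (base + added) models on the same cases,
    # then F-test the R-squared change between the nested models.
    data = df[[outcome] + base + added].dropna()
    reduced = sm.OLS(data[outcome], sm.add_constant(data[base])).fit()
    full = sm.OLS(data[outcome], sm.add_constant(data[base + added])).fit()
    f_stat, p_value, _ = full.compare_f_test(reduced)
    return full.rsquared - reduced.rsquared, f_stat, p_value

for extra in ("LOCSCAL", "GN15CMPT"):
    delta, f, p = r2_change_test(survey, "TCHRPLC", ["LPAVGWE"], [extra])
    print(extra, "delta R2 = %.3f, F = %.2f, p = %.3f" % (delta, f, p))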
Table 4-14. Summary of simultaneous multiple regression analyses for models predicting successful placement as measured by teacher evaluation of placement

Model                                               Variable   B     SE B   β
1) Composite LOEP Test Scores                       LOEPAVG    .066  .005   .781**
2) Composite LOEP Test Scores Averaged with Essay   LPAVGWE    .091  .005   .852**
3) Individual LOEP Test Scores                      LOLU       .016  .004   .251**
                                                    LORC       .018  .004   .247**
                                                    LOSM       .018  .005   .244**
                                                    LOES       .039  .005   .392**
Model 1 R² = .610; F(1,118) = 184.20, p < .001**
Model 2 R² = .726; F(1,118) = 312.43, p < .001**
Model 3 R² = .734; F(4,115) = 79.49, p < .001**
*p < .05; **p < .01

Although the second study failed to replicate the findings of the first study using final course grades as an outcome variable, the addition of the second outcome variable did add to the findings. In the second study, Model 2 was selected as the preferred model because of its ability to perform as well as Model 3. In the first study, Model 3 was able to predict success in the greatest number of courses, followed by Model 1. All of the subtests in Model 3 in the second study were significantly able to predict success as measured by teacher evaluation of placement, with LOES contributing the most to the prediction, followed by LOLU, LORC, and LOSM, respectively.
CHAPTER 5
DISCUSSION

In addition to identifying descriptive information about Valencia's EAP population, this research sought to identify the most effective practices for placing students into EAP courses at Valencia by finding answers to the following questions:

1. What are the student and teacher beliefs about placement at Valencia?

2. Which of the following three approaches best predicts student success as measured by final course grades and teacher evaluation of placement: 1) Averaging the three objective LOEP subtests, 2) Using an equally weighted average of both the objectively and subjectively scored LOEP subtests, or 3) Using the four LOEP subtests as individual predictors?

3. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by final course grades?

4. Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by teacher evaluation of placement?

5. Do the student variables of Locus of Control and Generation 1.5 add to the prediction of placement in EAP courses as measured by final course grades and teacher evaluation of placement?

Valencia's EAP Population

With the exceptions of country of origin and native language, survey responses revealed that Valencia's EAP population, for the most part, is similar across the three campuses. However, survey results did reveal that the West Campus had considerably more Haitian students than the two other campuses: 28.85% of the West Campus EAP student population was Haitian, as opposed to 4.63% on the East Campus and 4.92% on the Osceola Campus. These demographic differences, however, did not appear to affect placement. In the surveys, teachers across campuses rated misplacement in a similar manner, and none of the teacher comments led the researcher to believe the incidence of misplacement of students was greater on one campus than another.
Valencia's prototypical EAP student is a 19-year-old Spanish speaker from Colombia. She is in her second semester at Valencia and started school in the U.S. in or around 10th grade, but she is most likely not a Generation 1.5 student. She has been in the U.S. about three years and is not the first person in her family to go to college; however, she is more than likely the first person in her family to go to college in the U.S. and has probably not attended college outside the U.S. She rates her abilities to write papers and do research in English as average. Only a few of the people she does things with every week are native English speakers. However, she does use English most of the time to speak with her friends. On the other hand, her family does not often use English in the home, and in that home there are fewer than 25 books. She does, however, own a computer and rates her ability to use it as above average.

Student and Teacher Beliefs about Placement

This section of the chapter discusses findings relevant to the first research question, "What are the student and teacher beliefs about placement at Valencia?" When students were asked their beliefs on placement, 21% of the 470 students surveyed felt that they had been misplaced. One might expect a larger percentage of students who had been placed into a developmental program rather than regular college courses to feel that they had been misplaced. In the surveys, teachers indicated that 17% of their students were misplaced. It is interesting to note that student beliefs about the incidence of misplacement are similar to those of instructors, 21% and 17%, respectively.

Prior to the study, anecdotal evidence (emails, teacher complaints at meetings, and discussions about placement with colleagues) indicated teacher dissatisfaction with the way students were being placed. In response to survey questions on placement, faculty members felt that Valencia did an average job at placing students; 14 of 19 survey respondents (74%) selected the term "Average" to describe Valencia's accuracy at placing students. 74% indicated that they
"Sometimes" feel that students in their EAP classes might be better placed in a different level. The responses to this particular question about how often misplacement occurs were interesting in that not one of the instructors felt that students were "Rarely" or "Never" misplaced, indicating that instructors do feel that misplacement is an ongoing phenomenon. Of those surveyed, 68% felt that a few of the students in their courses during the semester in which surveys were conducted should have been placed in a different level. And while 10% indicated that "None" of their students would have benefited from different placement, 21% indicated more than a few students should have been placed differently.

In terms of teacher responses to the open-ended questions, three of the eight major categories were related to placement: one specifically dealing with misplacement, one on placement in general, and one on other placement comments. To summarize teacher comments about misplacement, many of the comments indicated differing opinions about students being placed above and below their ability levels: "I believe that students who are misplaced are more frequently under-prepared than over-prepared," and "Sometimes we get students that just seem way beyond the level of EAP." In the category of Teacher Comments on Placement, responses were generally positive regarding placement at Valencia. For example, one instructor wrote, "I believe that most students are well placed within the EAP program." In the category Other Placement Comments, there was some indication that teachers felt students placed in writing courses lacked sufficient grammar skills. A typical response was that "some of my level 6 students who have placed directly into level 6 lack the grammar and sentence structure skills necessary to be successful in Advanced Comp." This was supported by instructors who commented on both weak skills and a total lack of skills. Also, in the same category of Other Placement Comments, another instructor wrote, "I never had grammar students misplaced."
Further research would need to be conducted to verify this, but perhaps there is a mismatch between the specific discrete-point grammar topics taught in EAP grammar courses and what is measured by the LOLU and LOSM subtests.

Other teacher comments that should be mentioned here dealt with teaching-related problems but indirectly reflect issues with placement. For example, comments like "It is uncomfortable, and sometimes embarrassing, to have completely native sounding students in classes. I don't feel I am necessarily meeting their needs," reveal a wide variety of language proficiencies in courses. A comment such as, "The problem I have with writing is high school grads know essay organization but have problems with grammar and mechanics. On the other hand, other students aren't familiar with any organization, mechanics, or sentence structure," further demonstrates the variety of skills and background knowledge that students have when they enter courses at Valencia. The variety of language proficiencies and differences in skills and background knowledge demonstrated by students could simply be the result of placing students into skills classes at a single level rather than into skills classes across levels. It could also indicate a mismatch between the LOEP subtests and the school curriculum.

In summary, the majority of faculty members surveyed felt that only a few of their students were misplaced during the study period. Although there were indications that placement problems existed, the findings failed to reflect the anecdotal reports in terms of the incidence of misplacement. It is possible that, with the lighter course load over the summer (fewer courses, fewer students in courses, shorter work weeks), teachers did not feel or vent frustrations about misplaced students in the same manner as they did during the fall or spring semesters.
The Preferred Approach

This section discusses findings relevant to the second research question, "Which of the following three approaches best predicts student success as measured by final course grades and teacher evaluation of placement: 1) Averaging the three objective LOEP subtests, 2) Using an equally weighted average of both the objectively and subjectively scored LOEP subtests, or 3) Using the four LOEP subtests as individual predictors?"

To help improve the successful placement of EAP students into the program, research Question 2 sought to find which of three approaches best predicted student success as measured by final course grades and by teacher evaluation of placement. Model 1 used as its predictor the simple average of the three objectively scored LOEP subtests. In the first study, using the data from 1,030 students, Model 1 was able to significantly predict success as measured by final course grades in six of the 15 courses analyzed. The first model significantly predicted success in all EAP Speech courses, levels 3, 4, and 5 (Speech courses are not offered at levels 2 or 6). The first model also predicted success in Reading at levels 4 and 6, and Writing at level 4. In three of those courses (level 3 Speech, level 4 Reading, and level 6 Reading), Model 1 was selected for its simplicity, but this model could never account for more than 10% of the variance in final course grades.

Model 1 failed to predict success in any of the grammar courses. As mentioned earlier, perhaps there is a mismatch between content assessed on the LOLU/LOSM subtests and the content taught in EAP grammar courses. Another explanation could be that grammar is not a skill in itself; perhaps it is a subcomponent of other skills. A final reason for the inability of these subtests to predict success in Grammar courses may have something to do with the different populations who do well on grammar tests. Some students' study of English prior to arrival in the U.S. has focused extensively on memorization of vocabulary and grammatical rules. These students tend to do well on grammar tests, but do not
write or speak well in English. Other students may have had greater access to English-speaking models, with a greater focus on communicative tasks and less focus on grammar and vocabulary. An example would be Generation 1.5 students who speak with near-native proficiency but lack grammar knowledge. The differences in these populations could be affecting the ability of the LOEP to place students appropriately.

In the second study, using the data from 470 surveyed students, Model 1 was found to significantly predict successful placement as measured by teacher evaluation, but it was outperformed by Models 2 and 3. This is reasonable because this model did not include the LOEP Essay subtest, the subtest that was found to be most predictive of teacher evaluation of placement.

Model 2, which used an equally weighted average of both the objectively and subjectively scored LOEP subtests, was found to be a poor predictor in the first study. This model was able to significantly predict success in only one of the 15 EAP courses (level 6 Writing), and it was able to account for only 4% of the variance in that regression. However, in the second study, Model 2 performed as well as Model 3, which considered the four subtest variables individually.

Model 3, which entered the four LOEP subtests as individual predictors, performed moderately well in both studies. Although it never accounted for more than 15% of the variance in the first study, using the population of 1,030 students, it significantly predicted success in eight of the 15 EAP courses as measured by final course grade. The eight courses included Speech at levels 3 and 4; Reading at levels 3, 4, 5, and 6; and Writing at levels 3 and 6. In the second study, Model 3 accounted for the greatest amount of variance using teacher evaluation of placement. All of the LOEP subtests in Model 3 were found to be significant predictors of success as measured by teacher evaluation of placement. The Essay performed the best, followed by Language Use, Reading, and Sentence
Meaning, respectively. The performance of the Essay variable was interesting but not surprising; in survey responses, teachers welcomed the reinstatement of the Essay variable. One instructor even noted, "students seem to be better placed since we began reading LOEP essays." Because the Essay is the only direct measure of language proficiency, it is not surprising that it is able to predict teacher evaluations of placement.

A partial explanation for the ineffectiveness of the Essay variable in the first study could be that single-prompt essays are highly unstable as variables. This instability could have led to low reliability and thus negatively affected the correlation. Future research could address the comparative effectiveness of holistically scored single-prompt essays and multi-prompt short essay measures. If holistically scored multi-prompt short essay measures are found to increase the reliability of written assessment, their use could enhance placement practices.

In the first study, none of the variables in Model 3 showed high correlations with final course grades, the highest correlation being .36 for LOEP Reading with EAP 0320 (Level 3 Reading). These low correlations may be explained by low reliability of course grades, restriction of range of test scores, or both. Because of the different ways that teachers evaluate students and the variety of personal, social, economic, and academic factors involved in assigning student grades, final course grades may not be a reliable indicator of successful placement. It has been suggested that low reliability of course grades can depress correlations (Sawyer, 1989). For example, if the reliability of a test score is high, e.g., .90, and the reliability of course grades is low, e.g., .40, the maximum correlation between the measures is .60.
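This ceiling follows from the classical psychometric bound on an observed correlation, which cannot exceed the square root of the product of the two measures' reliabilities (a standard result, shown here as an added illustration):

r_{xy}^{max} = \sqrt{r_{xx'} \, r_{yy'}} = \sqrt{.90 \times .40} = \sqrt{.36} = .60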
Another possible explanation is restriction of range (American College Testing Program, 1990; Armstrong, 1994). In other words, because EAP courses include only students above the test cut-off scores and below other benchmark scores, predictor scores do not include the students at the lower or upper ends of the range of scores. A variety of studies use one or both of these explanations to explain low correlations (American College Testing Program, 1990; Armstrong, 1994; College Board, 1990; College of the Canyons, 1994; Feldt, 1989; Kesler, 1987). Another likely explanation for the failure of the models to show a large effect is that, because the LOEP test scores had actually been used to place students into classes, most of the variance attributable to the placement measures had already been explained.

Reading Subtest Preferred

This section of the chapter discusses findings relevant to the third research question, "Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by final course grades?" In response to Question 3, the LOEP Reading subtest was the best at predicting success across different skills and levels as measured by final course grades. In the first study, LOEP Reading contributed significantly to Model 3 in Reading courses at levels 3, 5, and 6 and in Speech courses at levels 3 and 4.

In a post hoc analysis, the LOEP Reading subtest was predictive of all skills: reading, writing, grammar, and speech. The test was also able to predict success in at least one course at each level. The post hoc analysis tested each of the LOEP subtests in isolation for its ability to predict success as measured by final course grades, using the 1,030 students in the first study. Table 5-1 details the results comparing the seven variables: the three original models and the four LOEP subtests as individual, single-variable models. In the event that one of the LOEP subtests and Model 3 both predicted success, an F-test was conducted to compare the models. In the event that two or more subtest variables predicted success, whichever model accounted for the greatest amount of variance was selected as the preferred model. As with the results of study 1, none of
the new variables was able to perform well across all skills and levels. However, LOEP Reading (LORC) was the best predictor of success. It was predictive in all Speech courses (levels 3, 4, and 5) and all Reading courses (levels 3, 4, 5, and 6), and it was also predictive in level 4 Writing and level 5 Grammar.

Table 5-1. Post hoc results comparing the three original models with each subtest added as a competing model

Lvl  Skill     Course    Significant predictors (X on F-test); * marks the best correlation
2    Combined  EAP0281   -
3    Speech    EAP0300   Model 1, Model 3, LORC*
3    Reading   EAP0320   Model 3, LORC*
3    Writing   EAP0340   Model 3, LOSM*
3    Grammar   EAP0360   -
4    Speech    EAP0400   Model 1, Model 3, LOLU, LORC*, LOES
4    Reading   EAP0420   Model 1, Model 3, LORC*, LOES
4    Writing   EAP0440   Model 1*, LORC
4    Grammar   EAP0460   -
5    Speech    EAP1500   Model 1, LORC*
5    Reading   EAP1520   Model 3, LORC*, LOES
5    Writing   EAP1540   -
5    Grammar   EAP1560   LORC, LOES*
6    Reading   EAP1620   Model 1, Model 3, LORC*
6    Writing   EAP1640   Model 2, Model 3, LOLU, LOES*

In terms of a theory of assessing language competence, this study does not appear to lend any evidence to support Oller's (1992) suggestion that in the early stages of second language learning, distinct dimensions of listening, writing, and reading ability may resolve into further sub-component traits. The reading, writing, and grammar subtests in this study were not able to predict success more efficiently for students at the lower proficiency levels than at the higher proficiency levels. In fact, all of these tests failed to consistently predict success in courses across levels and skills. However, it could be that the range of EAP student language proficiency levels is narrower than what Oller was considering in his description of the differences among
students in the early and later stages of second language acquisition. It could also be that the LOEP subtests are not accurate enough to detect small variations in these proficiency levels.

A number of recent studies have examined the relationship of reading and placement. Although reading placement tests have shown negligible or modest correlations with grades in credit-level college courses (American College Testing Program, 1990; Armstrong, 1994; College of the Canyons, 1994; Feldt, 1989; Kesler, 1987), it has been suggested that the reason for this weak relationship may be that these tests are grounded in a domain-generic model of comprehension that assumes a good reader is a good reader, no matter the content (Behrman, 2005). Results of other studies have found evidence that domain-specific factors are important (Alexander & Judy, 1988; Byrnes, 1995), and placement tests using domain-specific readings have demonstrated greater efficiency than domain-generic reading tests at predicting student success (Behrman, 2005). The current research reveals that the domain-generic reading comprehension subtest is the most effective of all the LOEP subtests analyzed at predicting success as measured by final course grades, but how might domain-specific testing measures fare with EAP students? Future research may address whether or not EAP students are equitably assessed and placed by these measures, given that language proficiency or lack of background knowledge could lead to lack of test reliability. For example, a domain-specific test using passages selected from literature courses may perform better than a domain-generic test at predicting success in composition and literature courses for native English speakers, but how would different background knowledge and different cultural perspectives bias an EAP student's results on the domain-specific test?

From a theoretical perspective of background knowledge in reading, it appears that first language literacy and grammatical knowledge account for approximately 50% of the variance in
second language performance (Bernhardt, 2005). Future research analyzing affect, interest in second language text, and alternative conceptions of literacy may add to the amount of variance already accounted for in second language performance. Would EAP students employ strategies of cognate knowledge in a domain-specific test of science passages, or would they be negatively affected by a large amount of unknown vocabulary?

In terms of practical placement, future investigations of reading as a predictor may want to include different reading measures. For example, it may be interesting to analyze other measures of reading that are gathered and stored in community college databases. In Florida, all students are required to take the CPT. How well do EAP student scores on the CPT Reading and Sentence Skills subtests correlate with success in EAP courses? This question was asked by James (2006) in her predictive validity study of the Accuplacer subtests, but to date no related research has been reported.

Predicting Evaluation of Placement

This section of the chapter discusses findings relevant to the fourth research question, "Which of all approaches best predicts success across different language skill courses (reading, writing, grammar, and speech) and language proficiency levels as measured by teacher evaluation of placement?" Unfortunately, in the second study, in which teacher evaluation of placement data were gathered as a variable, the low number of students in their first semester led to critically low numbers in each course. Although there were originally 470 students surveyed, only 131 of those students were in their first semester. Furthermore, because some of these students failed to take all of the subtests or because information was missing from the database, only 121 students had complete LOEP subtest scores and final course grade data. Even the least conservative recommendation for case-to-predictor ratios in regression analyses suggests a minimum case-to-predictor ratio of 15-to-1 (Schmidt, 1971).
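To make the constraint concrete (an added illustration; the per-course average is approximate and assumes the 121 students were spread over the program's 15 courses), with the four LOEP subtests as predictors the 15-to-1 guideline implies:

N_{min} = 15 \times k = 15 \times 4 = 60 \text{ cases per course-level regression}, \qquad \bar{n} \approx 121/15 \approx 8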
Therefore, regression analyses across skills and levels using teacher evaluation of placement as an outcome variable were not suggested. Similar to the post hoc analyses conducted in response to Question 3, post hoc analyses were conducted using the individual LOEP subtests in isolation as predictors and teacher evaluation of placement as the outcome variable. However, none of the individual LOEP subtests in isolation was able to perform as well as the LOEP average with essay (Model 2).

LOC and Generation 1.5

This section of the chapter discusses findings relevant to the fifth research question, "Do the student variables of Locus of Control and Generation 1.5 add to the prediction of placement in EAP courses as measured by final course grades and teacher evaluation of placement?"

In study 2, these two variables were not found to add to the prediction. However, to determine whether these variables would have had better predictive power with the full set of survey respondents rather than the limited subset having LOEP scores, both were tested in isolation. When tested for their abilities to predict success as measured by teacher evaluation of placement, both were found to be significant predictors: LOCSCAL R² = .020; F(1,338) = 6.98, p = .009, and GN15CMPT R² = .056; F(1,412) = 24.58, p < .001. Generation 1.5 accounted for the greater amount of variance in teacher evaluation of placement. Table 5-2 lists model performance and beta weights for the models.

Table 5-2. Summary of simultaneous multiple regression analyses for LOCSCAL and GN15CMPT at predicting successful placement as measured by teacher evaluation of placement

Model                               Variable    B     SE B   β
1) Scores on the Trice LOC Scale    LOCSCAL    -.047  .018  -.142**
2) Computed Generation 1.5 Status   GN15CMPT    .588  .119   .237**
Model 1 R² = .020; F(1,338) = 6.98, p = .009**
Model 2 R² = .056; F(1,412) = 24.58, p < .001**
*p < .05; **p < .01
The Trice Locus of Control scale (Trice, 1985) used in this study was developed and validated on native-English-speaking college students with gender and age characteristics similar to those of the EAP students in this study. In Trice's original study (1985), the mean scores for education and psychology students were 12.46 and 13.22, respectively; the mean score for EAP students surveyed in this study was 8.83. However, of the 470 students surveyed, 140 students did not complete the Trice LOC index and were therefore not included in the data analysis for the study. To examine the possible role of English proficiency in the students' ability to complete the Trice LOC index, a post hoc analysis was conducted. The researcher calculated the percentage of students within each course who failed to answer questions on the LOC inventory. Table 5-3 displays these percentages; a brief sketch of this tabulation follows the table.

Table 5-3. Percentages of students in each course failing to answer questions on the Trice LOC Index

Level  Skill    Course   N    Percentage
3      Speech   EAP0300  34    8.82
3      Reading  EAP0320  35    8.57
3      Writing  EAP0340  27    7.41
3      Grammar  EAP0360  29    6.90
4      Speech   EAP0400  62    8.06
4      Reading  EAP0420  61    4.92
4      Writing  EAP0440  54   12.96
4      Grammar  EAP0460  62    8.06
5      Speech   EAP1500  105  14.29
5      Reading  EAP1520  109  15.60
5      Writing  EAP1540  104  18.27
5      Grammar  EAP1560  109  15.60
6      Reading  EAP1620  141  12.06
6      Writing  EAP1640  124  16.94
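A small sketch of this tabulation is shown below. It is illustrative only; the file name, the course column, and the LOC item-column prefix (summer_2006_survey.csv, COURSE, LOCQ) are assumptions rather than the study's actual data layout.

import pandas as pd

survey = pd.read_csv("summer_2006_survey.csv")  # hypothetical survey extract
loc_items = [c for c in survey.columns if c.startswith("LOCQ")]  # assumed item columns

# Flag respondents who left at least one Trice LOC item blank, then compute
# the percentage of such respondents within each EAP course.
survey["loc_incomplete"] = survey[loc_items].isna().any(axis=1)
percentages = (survey.groupby("COURSE")["loc_incomplete"].mean() * 100).round(2)
print(percentages)  # one row per course, comparable to Table 5-3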
Contrary to what might be expected, lower proficiency students were not the only students failing to answer questions on the LOC inventory. In fact, as proficiency level increased, so did the percentage of students not answering questions on the LOC measure. Whether or not students at the lower levels understood the questions is unknown. Perhaps higher proficiency students perceived elements of ambiguity in the items and were unable to respond. Future research could investigate the role of language and culture in EAP student responses to LOC questions. For example, perhaps some of the questions were perceived as too personal in nature, or perhaps students from cultural groups that tend to prefer consensus building rather than individual decision making felt uncomfortable with some of the questions and opted not to answer them. If linguistic and cultural variables are identified as influencing results, perhaps a new LOC scale could be developed and validated for ESL/EAP students at the college level.

Valencia's Generation 1.5 population was found to be relatively small in comparison to the Generation 1.5 populations discussed in other research (Blumenthal, 2002; Lay, Carro, Tien, Niemann, & Leong, 1999). However, because the relative size of Generation 1.5 populations at other institutions within the state is unknown, it is difficult to say whether Valencia is representative of other schools.

Another finding that deserves comment is that teachers rated 71 of 470 students (approximately 15%) as Generation 1.5, whereas the computed indicator rated 146 of 470 students (approximately 31%) as Generation 1.5. What accounts for this discrepancy? Does the computed indicator include more error, or did the objective nature of the indicator find things teachers missed? For example, it is possible the computed indicator identified students as Generation 1.5 who, due to affective, cultural, or personal reasons, have not acquired high levels of spoken proficiency or adapted to American culture, which could have caused teachers not to rate them as Generation 1.5. On the other hand, perhaps because of affective, cultural, or personal reasons, some students failed to participate in class, thereby concealing their language proficiency levels and their Generation 1.5 status.

In the open-ended responses to teacher surveys, Generation 1.5 was one of the major categories. This indicates that even though the actual size of the population is small, teachers are


In the open-ended responses to the teacher surveys, Generation 1.5 was one of the major categories. This indicates that even though the actual size of the population is small, teachers are concerned about this population. Teachers expressed thoughts about the proper placement of these students and about how these students are affecting their classes. Teachers made comments such as, "Once again, the problem arises with 1.5s; all other students are placed right." A few indicated that Generation 1.5 students were bored in their courses. Two instructors suggested the possibility of developing specific courses for these students. Teacher comments in the Speech subsection could also be considered indirect comments about Generation 1.5 students. One teacher summed things up nicely by saying, "Speech is where the battles begin!" Many of the comments made about students in Speech courses indicated that many Generation 1.5 students did not need the course at all or had spoken proficiency higher than that of other students in the course.


CHAPTER 6
CONCLUSIONS AND RECOMMENDATIONS

This study assumes that effective placement for developmental education programs can increase both student success and retention, but only if placement measures are valid and can therefore accurately predict student success in courses. While there are a variety of studies analyzing the validity of Accuplacer tests at predicting success for native English speakers (Hirschy & Mack, 2001; James, 2006; Saunders, 2000; Smittle, 1993), there is a relative paucity of research looking at how well these tests function for ESL students. This study contributes to the growing body of research on placement and computer adaptive testing by investigating the predictive characteristics of the Accuplacer LOEP subtests on student performance in EAP classes at the community college level. It further informs research on Generation 1.5 students by developing and applying a survey measure for identification of this population. This research also raises questions for future research that have theoretical underpinnings in language competence, learner motivation, and the importance of background knowledge. For example, how do level of language proficiency and background knowledge interact and affect the validity of placement tests? What percentage of the variance in final course grades is accounted for by student motivation? And to what extent does culture influence assessment of proficiency and placement?

On a more practical note, this study has provided information for the researcher and decision makers at Valencia Community College on the effectiveness of a variety of current and potential placement practices. Based on the results of this study, Valencia and other similar institutions may wish to consider a number of recommendations. For example, if students could be identified as Generation 1.5 early in the placement process, counselors could intervene and help to place them more effectively into Speech courses. A discussion of other placement recommendations follows.


Recommendations for Testing and Placement

This study indicated that students are best placed into courses using individual subtests rather than composite scores or aggregates of subtests. This research compared three models, two using averaged subtest scores and one using individual subtest scores. For both outcome measures, final course grades and teacher evaluation of placement, the model using individual subtests was the best predictor of success. Therefore, when institutions identify a subtest as predictive of a particular skill, schools should not weaken its predictive capabilities by averaging it with other tests that are not predictive of the skill. Schools should simply use the individual skill variable for placement into same-skill courses. For example, the Reading subtest should be used to place students into reading courses.

The Reading subtest was found to be the most efficient predictor of success as measured by final course grades; this finding was true across language skills and levels. When Valencia reevaluates placement practices, if it is decided not to use the Reading subtest in isolation to place students into Reading courses, any new composite models that are developed could benefit from giving the Reading subtest added weight.
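The dilution effect described above can be illustrated with a toy regression: when a predictive subtest is averaged with one unrelated to the outcome, the composite explains less variance than the single subtest alone. The sketch below uses synthetic data and placeholder names; it is not the study's model or data.

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    # Fit y = a + b*x by ordinary least squares and return R^2.
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1.0 - resid.var() / y.var()

# Synthetic stand-ins for subtest scores and final grades (0-4 scale).
rng = np.random.default_rng(0)
n = 400
predictive = rng.normal(70, 15, n)           # subtest related to grades below
unrelated = rng.normal(65, 12, n)            # subtest unrelated to grades below
grades = 0.04 * predictive + rng.normal(0.2, 0.6, n)

composite = (predictive + unrelated) / 2     # averaged-subtest model
print(r_squared(predictive, grades))         # higher R^2
print(r_squared(composite, grades))          # diluted by the unrelated subtest
```

Running this shows the composite's R^2 falling well below the individual subtest's, which is the statistical intuition behind the recommendation to place with single-skill scores.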


The Essay was found to be the most efficient predictor of success as measured by teacher evaluation of placement. Therefore, the writing sample should continue to be used in placement practices. Other experts in the field agree that including an essay provides important information in placing students into EAP courses. In fact, the CCCC Committee on Second Language Writing stated:

Decisions regarding the placement of second language writers into writing courses should be based on students' writing proficiency rather than their race, native language background, nationality, or immigration status… Nor should the decisions be based solely on the scores from standardized tests of general language proficiency or of spoken language proficiency. Instead, scores from the direct assessment of students' writing proficiency should be used, and multiple writing samples should be consulted whenever possible. (2001)

If the use of a writing sample is found to be too costly or time consuming, perhaps the Essay could be administered but assessed only when accuracy of placement is of the utmost importance. For example, at Valencia the decision to place students into level 5 is of greater importance than the decision to place students into levels 3 or 4, because at level 5 students earn college credit for the courses they are taking. Therefore, essays could be read only prior to admitting students into level 5.

Given the inability of any of the subtests to predict success in EAP grammar courses, Valencia's program could benefit from an analysis of the curricular goals of its grammar courses. Perhaps there is a mismatch between the LOEP subtests and curricular goals. If this is the case, perhaps an in-house subtest could lead to more predictable placement into grammar courses.

Other Recommendations

Although not a direct finding of this study, the review of research for this study led the researcher to findings that are fundamental in their application to placement practices. Because student populations and curricula vary from school to school, each institution should identify which placement tests or combinations of placement tests provide the greatest accuracy for their curricula and establish cut scores and placement mechanisms accordingly. Furthermore, students are not negatively affected by being placed across levels based on test results. In other words, it is not only acceptable but desirable to place students into different skills based on their demonstrated proficiencies in those skills. Students do not need to be placed into all skills at one level. This concept is supported by the College Board (2003).


In addition, students need to be aware of the importance of placement tests; therefore, they should be advised accordingly and given explicit information about the costs, both in time and money, that they could incur as a result of poor performance on the placement tests. Also, to ensure optimal test performance, limits should be placed on the number of tests a student can take in one day, or scheduled breaks should be added to the test-taking timeline.

Finally, because cut scores have never been normed for the LOEP subtests at Valencia, a final recommendation would be to conduct a cut-score study. If Valencia were to undertake such a study, a subcomponent of the study should add the existing but currently unused LOEP Listening subtest and perhaps the LOEP WritePlacer ESL subtest to identify their predictive capabilities. Developing cut scores for and using these other tests (like the LOEP Listening, WritePlacer ESL, and the CPT) in addition to the current LOEP subtests could lead to enhanced placement of EAP students at Valencia.
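If a cut-score study is undertaken, one simple baseline analysis is to search candidate cut scores for the value that best separates students who later succeeded from those who did not. Formal cut-score studies often use richer methods (logistic regression, standard-setting panels), so the following sketch, with hypothetical inputs, is only a starting point, not a prescribed procedure.

```python
import numpy as np

def best_cut_score(scores: np.ndarray, success: np.ndarray) -> tuple[int, float]:
    """Return the cut score that maximizes classification accuracy.

    `scores` are placement-test scores (one per student); `success` is a
    Boolean array, True when the student succeeded in the course (e.g.,
    earned a C or better). Scoring at or above the cut predicts success.
    Both inputs are assumed, illustrative data structures.
    """
    best_cut, best_acc = int(scores.min()), 0.0
    for cut in range(int(scores.min()), int(scores.max()) + 1):
        acc = float(((scores >= cut) == success).mean())
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut, best_acc
```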


APPENDIX A
LOEP ESSAY RUBRIC

Level 7  Native-like control of language. Second language errors may be present but do not interfere with communication. Organization facilitates a clear and well supported message.

Level 6  Although some second language errors are evident, they rarely interfere with communication. In addition, the reader is not troubled by occasional errors of vocabulary, spelling, punctuation, and/or grammar. Organization and details, although simplistic, enable the reader to follow the message.

Level 5  Second language errors, from a variety of linguistic categories, are evident and sometimes interfere with communication. The reader is sometimes troubled by errors of vocabulary, spelling, punctuation, and/or grammar. Organization has been attempted, but may be unsuccessful.

Level 4  Second language errors frequently hinder communication. Errors of vocabulary, spelling, punctuation, and/or grammar often trouble the reader. Organization may or may not be present.

Level 3  Widespread second language errors in virtually every category. While some evidence of basic grammatical structures is present, errors of vocabulary, spelling, punctuation, and/or grammar consistently trouble the reader.

Level 2  The reader sees rudimentary control of vocabulary, spelling, punctuation, and/or grammar. However, the paper has no unified meaning, and errors constantly interfere with the reader's ability to comprehend.

Level 0  The reader sees no evidence of control of vocabulary, spelling, punctuation, and grammar; hence, there is no sense of linguistic competence. Words appear on the page but do not communicate meaning.


APPENDIX B
SURVEY INSTRUMENTS USED IN THE STUDY

Letter to Instructors

Dear EAP Instructor,

I am a fellow EAP faculty member here at Valencia Community College. During the last ten years there have been significant changes made to the English as a second language program offered here at Valencia, including changes to the number and types of courses offered, how students are placed into these courses, and how Valencia as an institution monitors the effectiveness of this placement.

I am conducting research on the effectiveness of the placement process, and as an instructor here at Valencia you are in a position to provide valuable insight into how the system is working. Along with this informed consent form, you have been given a survey. The data gathered from this survey will provide me, the researcher, with the information necessary to evaluate current practices and make recommendations to enhance practices in the future. You do not have to answer any questions that you do not wish to answer on this survey. It is important that you know that your identity will be kept confidential to the extent provided by law. The survey you fill out will be stored in a locked file cabinet in my office and will be destroyed after the relevant data have been recorded.

There is no anticipated risk, compensation, or other direct benefit to you as a participant in this research. You do not have to answer any questions you do not wish to answer or allow me access to your records. You are free to withdraw your consent to participate and may discontinue your participation in the research at any time without consequence. If you have any questions about this research, please contact James May by email at jmay@valencia.cc.fl.us or by phone at 407-582-2047, or my supervisor, Dr. Candace Harper, at charper@coe.ufl.edu or (352) 392-9191. If you have any questions or concerns about your rights as a research participant, you can also contact the UFIRB office, University of Florida, Box 112250, Gainesville, FL 32611; ph (352) 392-0433.

Please sign the Informed Consent form before taking the survey. By signing this form you give me permission to gather your scores and grades from your records and report your responses anonymously in my research. Thank you for your help.

Sincerely,

James May, Valencia Community College


Informed Consent: Instructor (Please sign & return with survey)

I have read the letter explaining the purpose and procedures involved in this EAP placement study. I voluntarily agree to participate in the study and to have my anonymous responses, scores, and grades included in the research report.

___________________________________________    ________________
Signature of Research Participant              Date


Instructor Survey

Instructor's Name: ____________  Course Information: ____________
Campus Information: ____________  Date: ____________  Semester: ____________

Instructor Survey Section I

Instructions: Please answer the following questions.

1) Circle the term that best describes Valencia's accuracy in general at placing second language students into the EAP courses they require.
   Poor    Below Average    Average    Above Average    Excellent

2) How often do you have students in your EAP classes that you feel might have been better placed in a different level?
   Never    Rarely    Sometimes    Often    Every Semester

3) This semester, how many of your students do you feel should have been placed in a different level?
   None    A few    Several    Most    All

4) Please provide any comments that qualify or explain your responses above.
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________


Instructor Survey Section II

Instructions: Give your opinion of the current placement for each of the students in the list below. If you feel the student has been well placed, put a check in the "Well Placed" column next to the student's name. If you feel the student would have been more appropriately placed in a different level, indicate which level (for example, EAP levels 2 through 6, College Prep rather than EAP, College Composition, etc.).

     Student Name     Well Placed    Not Well Placed    Should have been placed in
1    Sample Student
2    Sample Student
3    Sample Student
4    Sample Student
5    Sample Student
etc. Sample Student


Generation 1.5 Survey

Dear fellow EAP Instructor:

Thank you very much for your assistance in the LOEP placement research. As a direct result of your assistance, we were able to survey 470 EAP students and 27 EAP instructors. Coding of the data has begun, and we hope to have some answers by August.

As I spoke with each of you, I gained valuable insight into many of the issues we are facing. One of the more obvious issues many of you expressed concern about was Generation 1.5. In the surveys, students were asked a variety of demographic and personal history questions. It is hoped that we can use some of these questions to reliably identify Generation 1.5 students early in the placement process. However, to create a valid instrument that will reliably identify this population, we need one more vital piece of evidence from you. I hope you will grant us this one last petition of your time.

Below you will find a definition of Generation 1.5 students. Attached you will find your class rosters with a check box titled "Generation 1.5?" Please read the definition below and then place a check next to the names of the students you feel are members of Generation 1.5. Because you know your students quite well by this point in the semester, this should take you no more than a few minutes of your time. After you have identified the Generation 1.5 students, please place the survey back in the envelope and mail it back to James May at Mail Code 3-20. Thank you in advance for your assistance.

Thanks again,

James May
Professor of English as a Second Language

Generation 1.5 Defined

Generation 1.5 students usually have come to the United States in their early teen or pre-teen years. They have often attended U.S. schools, and many of these students have even graduated from American high schools. While attending American schools, these students have had time to acquire informal English. Many of them use American idiomatic expressions, and some may even have American accents. Errors in their language are detectable, but they do not interfere with understanding, and these students are comfortable speaking English and do so with relative ease. Their reading, grammar, and writing skills, on the other hand, are usually behind those of their college-ready peers. They are not what you may consider true ESL students, but they are not true native English speaking students either.


Letter to Students

Dear EAP Student,

I am an EAP faculty member here at Valencia Community College. During the last ten years there have been significant changes made to the English as a second language program offered here at Valencia, including changes to the number and types of courses offered, how students are placed into these courses, and how Valencia as an institution monitors the effectiveness of this placement. Students and teachers have commented that some students are placed in classes that are too easy or too difficult for them.

As a student here at Valencia you are in a position to provide valuable insight into how the system is working. Along with this informed consent form, you have been given a survey. The data gathered from this survey will provide me, the researcher, with the information necessary to evaluate current practices and make recommendations to enhance practices in the future. You do not have to answer any questions that you do not wish to answer on this survey. In addition to the survey, I would also like to access your LOEP placement test scores and final course grades. You are not required to allow me access to these records; however, this information is necessary for the research and would be greatly appreciated. It is important that you know that your identity will be kept confidential to the extent provided by law. The survey you fill out and the scores and grades accessed in your records will be stored in a locked file cabinet in my office and will be destroyed after the relevant data have been recorded.

There is no anticipated risk, compensation, or other direct benefit to you as a participant in this research. You do not have to answer any questions you do not wish to answer or allow me access to your records. You are free to withdraw your consent to participate and may discontinue your participation in the research at any time without consequence. If you have any questions about this research, please contact James May by email at jmay@valencia.cc.fl.us or by phone at 407-582-2047, or my supervisor, Dr. Candace Harper, at charper@coe.ufl.edu or (352) 392-9191. If you have any questions or concerns about your rights as a research participant, you can also contact the UFIRB office, University of Florida, Box 112250, Gainesville, FL 32611; ph (352) 392-0433.

Please sign the Informed Consent form before taking the survey. By signing this form you give me permission to gather your scores and grades from your records and report your responses anonymously in my research. Thank you for your help.

Sincerely,

James May, Valencia Community College


Informed Consent: Student EAP Placement Study

I have read the letter explaining the purpose and procedures involved in this EAP placement study. I voluntarily agree to participate in the study and to have my anonymous responses, scores, and grades included in the research report.

___________________________________________    ________________
Signature of Research Participant              Date


Student Survey

Section I: Course and Student Identification Information

Directions: Circle the most appropriate answer, check the box, or fill in the blank with the appropriate information (please print).

1. Campus: East    West    Osceola
2. Instructor's Name: __________________________
3. Course: EAP ________
4. Today's Date: __________________________
5. Your Last Name: __________________________
6. First Name: __________________________
7. Country of Origin: __________________________
8. Native Language: __________________________
9. Age: __________________________
10. Gender: Male    Female

Section II: Student Background Information

Directions: Circle the most appropriate answer or fill in the blank with the appropriate information (please print).

11. This semester is my (1st, 2nd, 3rd, 4th, ________) semester at Valencia Community College.
12. How many years have you lived in the continental United States (or Alaska/Hawaii)? ________
13. What grade were you in when you started school in the United States? ________ (If the answer is college, circle "college.")
15. What year did you graduate from high school? ________ (or receive your GED)? ________
16. Are you the first person in your family to go to college? (Yes  No) (If no, who was?) _________________
17. Are you the first in your family to go to college in the United States? (Yes  No) (If no, who was?) _________________
18. Have you gone to college outside of the United States? (Yes  No)
19. If yes, for how many years? ________
20. How would you rate your abilities to write papers and do research in English? (Circle one)
    Poor    Below average    Average    Above average    Expert
21. How many of your good friends (people you do things with every week) are native English speakers? (Circle one)
    None    A few    Some    Most    All
22. How often do you use English when speaking with your friends? (Circle one)
    Never    Not often    Sometimes    Most of the time    Always
23. How often does your family use English in your home? (Circle one)
    Never    Not often    Sometimes    Most of the time    Always
24. Approximately how many books do you have in your home? (Circle one)
    25 or less    25-50    50-75    75-100    100 or more
25. Do you have a computer at home? (Yes  No)
26. How would you rate your abilities to use the computer?


    Poor    Below average    Average    Above average    Expert

Section III: Locus of Control

Directions: Read each statement on this page and record your answer in the space provided to the left of each item using the following answer key:
T = True, I agree with this statement.
F = False, I do not agree with this statement.

27. _____ College grades most often reflect the effort you put into classes.
28. _____ I came to college because it was expected of me.
29. _____ I have largely determined my own career goals.
30. _____ Some people have a knack for writing, while others will never write well no matter how hard they try.
31. _____ At least once, I have taken a course because it was easy to get a good grade.
32. _____ Professors sometimes make an early impression of you and then, no matter what you do, you cannot change that impression.
33. _____ There are some subjects in which I could never do well.
34. _____ Some students, such as student leaders and athletes, get free rides in college classes.
35. _____ I sometimes feel that there is nothing I can do to improve my situation.
36. _____ I never feel really hopeless; there is always something I can do to improve my situation.
37. _____ I would never allow social activities to affect my studies.
38. _____ There are many more important things for me than getting good grades.
39. _____ Studying every day is important.
40. _____ For some courses it is not important to go to class.
41. _____ I consider myself highly motivated to achieve success in life.
42. _____ I am a good writer.
43. _____ Doing work on time is always important to me.
44. _____ What I learn is more determined by college and course requirements than by what I want to learn.
45. _____ I have been known to spend a lot of time making decisions which others do not take seriously.
46. _____ I am easily distracted.
47. _____ I can be easily talked out of studying.
48. _____ I get depressed sometimes, and then there is no way I can accomplish what I know I should be doing.
49. _____ Things will probably go wrong for me some time in the near future.
50. _____ I keep changing my mind about my career goals.
51. _____ I feel I will someday make a real contribution to the world if I work hard at it.
52. _____ There has been at least one instance in school where social activity impaired my academic performance.
53. _____ I would like to graduate from college, but there are more important things in my life.
54. _____ I plan well and stick to my plans.


Section IV: Placement

Directions: Circle the most appropriate answer or fill in the blank with the appropriate information (please print).

55. Do you feel that the EAP class you are in right now is at the right level for you? (Yes  No)
56. If no, do you think you should be in a higher or lower level? (Higher  Lower)
    Please explain why you feel this way.
    ________________________________________________________________________
    ________________________________________________________________________
    ________________________________________________________________________
    ________________________________________________________________________
57. Are you currently taking any other courses? (Yes  No)
58. If you are taking other courses, please list the course(s) you are taking in the spaces provided below and indicate how you feel about being placed in the listed course(s). If you think you were well placed, check the "Well Placed" box. If you think you should have been placed in a different level, write in the level or course you think you should have been placed into.

    Course Name    Well Placed?    Should have been placed in?


Survey Script

Hello, my name is James May, and I am an EAP professor here at Valencia. During the last ten years there have been significant changes made to the English as a second language program offered here at Valencia, including changes to the number and types of courses offered, how students are placed into these courses, and how Valencia as an institution monitors the effectiveness of this placement.

As teachers and students here at Valencia you are in a position to provide valuable insight into how the system is working. In the packet you have been given, you will find a brief letter detailing the most important points of what I am telling you now, an informed consent form, and a brief survey that will help Valencia to make decisions about our EAP program. You do not have to answer any questions that you do not wish to answer on this survey. In addition to the survey, I would also like to access your LOEP placement test scores and final course grades. You are not required to allow me access to these records; however, this information is necessary for the research and would be greatly appreciated.

It is important that you know that your identity will be kept confidential to the extent provided by law. The survey you fill out and the scores and grades accessed in your records will be stored in a locked file cabinet in my office and will be destroyed after the relevant data have been recorded. You are free to withdraw your consent to participate and may discontinue your participation in the research at any time without consequence. If you have any questions about this research, please feel free to contact me. My contact information is on the front sheet of your packet. Please rip off that sheet at this time and keep it for your records.

Are there any questions before I go any further? If not, and you are willing to participate in the study, sign the informed consent form and begin the survey. When you are finished, please raise your hand, and I will come around and collect it. If you have any questions during the survey, raise your hand and I will be around to answer them. Thank you in advance for your help in this research.


APPENDIX C
COMPLETE RESULTS OF STUDENT/TEACHER SURVEYS

Table C-1. Native languages of EAP placement survey respondents listed by campus
Native Language    All respondents (n = 466)    East campus (n = 109)    Osceola campus (n = 122)    West campus (n = 210)
Spanish 51.5 63.3 74.6 31.9
Creole 14.4 1.8 4.1 25.2
Arabic 7.3 7.3 5.7 8.6
Portuguese 3.4 1.8 3.3 4.8
French 3.0 2.8 1.6 4.3
Russian 2.8 0.9 3.3 2.9
Chinese 2.4 0.9 1.6 3.8
Vietnamese 2.1 2.8 3.3
English 1.3 1.8 0.8 1.0
Korean 1.1 1.8 1.4
Tagalog 1.1 1.8 0.8 0.5
Bengali 0.9 1.9
Gujarati 0.9 1.8 1.0
Farsi 0.6 0.9 1.0
Hindi 0.6 2.5
Moldavian 0.6 0.9 1.0
Polish 0.6 0.9 1.0
Urdu 0.6 0.9 0.8 0.5
Amharic 0.4 1.0
Bulgarian 0.4 0.9 0.5
Japanese 0.4 1.0
ASL 0.2 0.8
Armenian 0.2 0.5
Burmese 0.2 0.9
Dutch 0.2 0.9
Georgian 0.2 0.9
Indonesian 0.2 0.9
Krio 0.2 0.5
Latvian 0.2 0.5
Lithuanian 0.2 0.5
Papiamento 0.2 0.9


Serbian 0.2 0.5
Somali 0.2 0.9
Swahili 0.2 0.5
Thai 0.2 0.5
Tswana 0.2 0.5
Ukrainian 0.2 0.9
Note: Numbers represent the percentage of survey respondents in each category.

Table C-2. Countries of origin for EAP placement survey respondents listed by campus
Country of Origin    All respondents (n = 464)    East campus (n = 108)    Osceola campus (n = 122)    West campus (n = 208)
Colombia 17.24 18.52 25.41 11.54
Haiti 17.03 4.63 4.92 28.85
Puerto Rico 11.21 18.52 18.85 3.85
Morocco 4.74 3.70 4.92 5.77
Peru 4.53 4.63 7.38 3.37
Dominican Republic 4.09 4.63 6.56 2.4
Brazil 3.45 1.85 3.28 4.81
Venezuela 3.02 0.93 5.74 2.4
Cuba 2.80 4.63 1.64 1.44
USA 2.37 3.70 1.64 2.4
Vietnam 2.16 2.78 3.37
Ecuador 1.94 1.85 3.28 0.96
India 1.51 1.85 2.46 0.96
Russia 1.51 0.93 2.46 0.96
China 1.08 1.64 1.44
Jordan 1.08 1.92
Korea 1.08 1.85 1.44
Nicaragua 1.08 0.93 2.46 0.48
Philippines 1.08 1.85 0.82 0.48
Bangladesh 0.86 1.92
Pakistan 0.86 0.93 1.64 0.48
Uruguay 0.86 0.93 0.82
Chile 0.65 0.93 0.82 0.48
Iran 0.65 0.93 0.96
Mexico 0.65 1.85 0.48
Panama 0.65 0.93 0.96
Poland 0.65 0.93 0.96
Argentina 0.43 0.93 0.48


Bulgaria 0.43 0.93 0.48
Egypt 0.43 0.96
Estonia 0.43 0.96
Ethiopia 0.43 0.96
Hong Kong 0.43 0.93
Japan 0.43 0.96
Lebanon 0.43 0.93 0.48
Mongolia 0.43 0.93 0.48
Paraguay 0.43 0.93 0.82
Surinam 0.43 0.93 0.48
Taiwan 0.43 0.96
Uzbekistan 0.43 0.82
Africa 0.22 0.48
Armenia 0.22 0.48
Aruba 0.22 0.93
Belarus 0.22 0.48
Botswana 0.22 0.48
Burma 0.22 0.93
Costa Rica 0.22 0.48
Denmark 0.22 0.93
El Salvador 0.22 0.48
Guatemala 0.22 0.48
Honduras 0.22 0.48
Latvia 0.22 0.48
Lithuania 0.22 0.48
Moldova 0.22 0.48
Palestine 0.22 0.82
Qatar 0.22 0.93
Russian Georgia 0.22 0.93
Serbia 0.22 0.48
Sierra Leone 0.22 0.48
Somalia 0.22 0.93
Syria 0.22 0.93
Thailand 0.22 0.48
Tunisia 0.22 0.82
Spain 0.22 0.93
Indonesia 0.22 0.93
Cameroon 0.22 0.48
Note: Numbers represent the percentage of survey respondents in each category.


Table C-3. Gender of EAP placement survey respondents listed by campus
Gender    All respondents (n = 467)    East campus (n = 109)    Osceola campus (n = 122)    West campus (n = 211)
Female 59.5 61.5 66.4 55.9
Male 40.5 38.5 33.6 44.1
Note: Numbers represent the percentage of survey respondents in each category.

Table C-4. Semester of enrollment for EAP placement survey respondents listed by campus
Semester    All respondents (n = 466)    East campus (n = 107)    Osceola campus (n = 123)    West campus (n = 210)
1st Semester 28.1 32.7 24.4 28.6
2nd Semester 34.3 34.6 35.0 34.3
3rd Semester 26.6 22.4 30.9 24.8
4th Semester 7.7 6.5 4.9 10.0
5th Semester 1.7 2.8 1.6 1.4
6th Semester 0.9 0.9 1.6 0.5
7th Semester 0.4 0.8 0.5
8th Semester 0.2 0.8
Note: Numbers represent the percentage of survey respondents in each category.

Table C-5. Ages of EAP placement survey respondents listed by campus
Age    All respondents (n = 451)    East campus (n = 105)    Osceola campus (n = 121)    West campus (n = 201)
17 0.44 0.83 0.50
18 6.65 11.43 9.09 2.99
19 11.97 13.33 15.70 9.95
20 9.98 11.43 11.57 9.45
21 11.97 8.57 13.22 12.44
22 4.43 6.67 4.13 3.98
23 5.54 5.71 1.65 7.46
24 3.99 2.86 3.31 4.98
25 3.99 3.81 4.96 3.48
26 4.21 7.62 3.31 2.49
27 1.55 1.90 2.49
28 2.44 0.95 1.65 3.98


29 2.22 0.95 1.65 3.48
30 2.44 2.48 2.49
31 2.00 1.90 2.48 1.99
32 3.10 3.81 3.31 2.99
33 2.00 1.90 3.31 1.49
34 0.89 1.90 0.83 0.50
35 2.22 0.95 0.83 2.99
36 1.77 2.86 1.65 1.49
37 0.44 1.00
38 1.77 0.95 0.83 2.99
39 2.66 0.95 1.65 3.48
40 2.22 2.86 0.83 2.49
41 0.67 0.83 1.00
42 1.11 1.65 1.49
43 0.67 0.95 1.00
44 0.44 0.83 0.50
45 1.11 0.95 0.83 1.00
46 0.67 0.95 0.83
47 0.89 0.95 0.83 1.00
48 0.22 0.50
49 0.89 3.31
50 0.44 0.95 0.50
51 0.22 0.83
52 0.44 0.95
53 0.67 0.95 0.83 0.50
55 0.44 0.50
59 0.22 0.50
Note: Numbers represent the percentage of survey respondents in each category.


Table C-6. Number of years in the U.S. for placement survey respondents listed by campus
Years in U.S.    All respondents (n = 461)    East campus (n = 107)    Osceola campus (n = 122)    West campus (n = 207)
< 1 Year 1.3 3.7 1.0
1 Year 7.2 8.4 5.7 8.2
2 Years 12.4 10.3 11.5 12.1
3 Years 15.4 15.0 16.4 14.5
4 Years 10.8 13.1 9.8 10.6
5 Years 12.8 15.0 11.5 12.1
6 Years 10.6 11.2 7.4 13.0
7 Years 6.1 2.8 5.7 8.7
8 Years 3.5 3.7 3.3 3.4
9 Years 2.0 3.3 2.4
10 Years 4.1 3.7 6.6 2.4
11 Years 1.1 2.5 1.0
12 Years 1.7 0.9 3.3 1.0
13 Years 1.1 1.9 1.6 0.5
14 Years 1.1 0.9 1.6 1.0
15 Years 2.4 1.9 3.3 2.4
16 Years 0.9 1.9 0.5
17 Years 0.9 0.9 1.6 0.5
18 Years 0.9 1.9 1.0
19 Years 0.4 0.8 0.5
20 Years 0.9 1.9
22 Years 0.4 0.9 0.8
23 Years 0.4 0.8 0.5
25 Years 0.4 0.8
27 Years 0.2 0.8
28 Years 0.2 0.5
29 Years 0.2 0.9
30 Years 0.2 0.5
33 Years 0.2 0.9
34 Years 0.2 0.8
Note: Numbers represent the percentage of survey respondents in each category.


Table C-7. Survey responses to "What year did you graduate from high school/earn GED?" listed by campus
Year    All respondents (n = 435)    East campus (n = 102)    Osceola campus (n = 114)    West campus (n = 195)
2006 0.2 0.9
2005 19.8 25.5 22.8 16.9
2004 12.2 11.8 14.9 11.8
2003 8.3 7.8 8.8 8.2
2002 5.5 2.9 5.3 6.2
2001 5.1 3.9 4.4 6.2
2000 6.0 3.9 3.5 8.2
1999 4.8 6.9 2.6 4.6
1998 2.8 3.9 3.5 1.0
1997 2.3 3.5 2.1
1996 2.1 3.9 0.9 2.1
1995 3.7 4.9 2.6 4.1
1994 3.4 1.0 3.5 4.6
1993 2.1 1.8 3.1
1992 1.8 2.9 2.6
1991 2.3 3.9 2.6 1.0
1990 1.4 1.0 1.8 1.0
1989 1.1 1.8 1.5
1988 0.7 1.0 0.9 0.5
1987 1.4 2.0 2.1
1986 1.4 0.9 2.6
1985 1.4 1.0 1.5
1984 1.4 1.0 1.8 1.5
1983 1.8 2.0 2.6
1982 1.1 1.0 2.6 0.5
1981 0.7 2.0 0.5
1980 1.4 2.0 2.6 0.5
1979 0.2 0.5
1978 0.2 1.0
1977 0.5 1.0 0.9
1975 0.7 2.6
1974 0.5 1.0
1973 0.5 1.0 0.9
1972 0.2 0.9
1971 0.5 1.0 0.9
1970 0.2
1969 0.5 1.0
Note: Numbers represent the percentage of survey respondents in each category.


Table C-8. Survey responses to "Are you the first person in your family to go to college?"
Answer    All respondents (n = 465)    East campus (n = 109)    Osceola campus (n = 122)    West campus (n = 208)
No 72.26 70.64 77.87 69.71
Yes 27.74 29.36 22.13 30.29
Note: Numbers represent the percentage of survey respondents in each category.

Table C-9. Survey responses to "If you weren't the first person in your family to go to college, who was?"

All Respondents (n = 295)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71

East Campus (n = 68)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71

Osceola Campus (n = 82)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51
My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71

West Campus (n = 129)
My child has gone to college 2.71
My siblings/cousins or spouse has gone to college 50.51


My parents or their siblings have gone to college 41.02
My grandparents have gone to college 2.71
Everyone in my family has gone to college 2.71
Note: Numbers represent the percentage of survey respondents in each category.

Table C-10. Survey responses to "Are you the first person in your family to go to college in the U.S.?"
Answer    All respondents (n = 464)    East campus (n = 108)    Osceola campus (n = 121)    West campus (n = 209)
No 37.50 40.74 37.19 35.89
Yes 62.50 59.26 62.81 64.11
Note: Numbers represent the percentage of survey respondents in each category.

Table C-11. Survey responses to "If you weren't the first person in your family to go to college in the U.S., who was?"

All Respondents (n = 147)
My child has gone to college 4.08
My siblings/cousins or spouse has gone to college 82.31
My parents or their siblings have gone to college 12.93
My grandparents have gone to college 0.68

East Campus (n = 41)
My child has gone to college 2.44
My siblings/cousins or spouse has gone to college 78.05
My parents or their siblings have gone to college 17.07
My grandparents have gone to college 2.44

Osceola Campus (n = 37)
My child has gone to college 5.41
My siblings/cousins or spouse has gone to college 91.89
My parents or their siblings have gone to college 2.70
My grandparents have gone to college


West Campus (n = 61)
My child has gone to college 3.28
My siblings/cousins or spouse has gone to college 80.33
My parents or their siblings have gone to college 16.39
My grandparents have gone to college
Note: Numbers represent the percentage of survey respondents in each category.

Table C-12. Survey responses to "Have you gone to college outside of the United States?"
Answer    All respondents (n = 464)    East campus (n = 109)    Osceola campus (n = 121)    West campus (n = 210)
No 60.78 61.47 66.94 60.00
Yes 39.22 38.53 33.06 40.00
Note: Numbers represent the percentage of survey respondents in each category.

Table C-13. Number of years spent in college outside of the U.S. by placement survey respondents listed by campus
Years    All respondents (n = 185)    East campus (n = 44)    Osceola campus (n = 42)    West campus (n = 82)
< 1 Year 1.1 2.3 2.4
1 Year 16.8 13.6 28.6 13.4
2 Years 20.5 20.5 23.8 19.5
3 Years 22.2 18.2 11.9 28.0
4 Years 17.8 25.0 14.3 17.1
5 Years 11.9 13.6 7.1 13.4
6 Years 4.9 2.3 11.9 1.2
7 Years 1.6 2.3 2.4
8 Years 1.6 2.4
9 Years 0.5 2.3
10 Years 1.1 2.4
Note: Numbers represent the percentage of survey respondents in each category.


LIST OF REFERENCES

AACC. (2007). American Association of Community Colleges: Fast facts. Retrieved May 30, 2007, from http://www.aacc.nche.edu/Content/NavigationMenu/AboutCommunityColleges/Fast_Facts1/Fast_Facts.htm and http://www2.aacc.nche.edu/pdf/factsheet2007_updated.pdf

Abedi, J. (2006). Psychometric issues in ELL assessment and special education eligibility. Teachers College Record, 108(11), 2282-2303.

Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners' test performance. Educational Measurement: Issues and Practice, 19(3), 16-26.

Abedi, J., Lord, C., & Hofstetter, C. (1998). Impact of selected background variables on students' NAEP math performance (CSE Tech. Rep. No. 478). Los Angeles: University of California, National Center for Research on Evaluation, Standards and Student Testing.

Abedi, J., Lord, C., Kim-Boscardin, C., & Miyoshi, J. (2000). The effects of accommodations on the assessment of LEP students in NAEP (CSE Tech. Rep. No. 537). Los Angeles: University of California, National Center for Research on Evaluation, Standards and Student Testing.

Abedi, J., Lord, C., & Plummer, J. (1997). Language background as a variable in NAEP. Los Angeles: University of California, National Center for Research on Evaluation, Standards and Student Testing.

Abraham, A. A., & Creech, J. D. (2002). Reducing remedial education: What progress are states making? (Educational Benchmarks 2000 Series). Retrieved November 10, 2003, from http://www.sreb.org/main/Benchmarks2000/remedial.pdf

Abramson, L. Y., Seligman, M. E. P., & Teasdale, J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology, 87, 49-74.

Abstien, J. (1998). Be it ever so humble, there's no place like... yada, yada, yada. Florida Community College Advocate, 1(4), 8-9.

Alexander, P. A., & Judy, J. E. (1988). The interaction of domain-specific and strategic knowledge in academic performance. Review of Educational Research, 58, 375-404.

American College Testing Program. (1990). ASSET technical manual for use with forms B and C. Iowa City, IA: Author.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.


Armstrong, W. B. (1994). English placement testing, multiple measures, and disproportionate impact: An analysis of the criterion- and content-related validity evidence of the reading & writing placement tests in the San Diego Community College District. San Diego, CA: San Diego Community College District, Research and Planning. (ERIC Document Reproduction Service No. ED 398 965)

Artiles, A. J., Rueda, R., Salazar, J. J., & Higareda, I. (2005). Within-group diversity in minority disproportionate representation: English language learners in urban school districts. Exceptional Children, 71(3), 1-17.

Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford: Oxford University Press.

Bachman, L., & Palmer, A. S. (1983). The construct validity of the FSI Oral Interview. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 154-169). Rowley, MA: Newbury House.

Bachman, L. F., & Palmer, A. S. (1989). The construct validation of self ratings of communicative language ability. Language Testing, 6, 14-20.

Banta, T. W., Rudolph, L. B., Van Dyke, J., & Fisher, H. S. (1996). Performance funding comes of age in Tennessee. Journal of Higher Education, 67, 23-45.

BEBR. (2005). Population projections by age, sex, race, and Hispanic origin for Florida and its counties 2005-2030. Bureau of Economic and Business Research, FPS Bulletin 145.

Belcher, M. J. (1993). Preparedness of high school graduates for college: A statewide look at basic skills tests results 1990-91 (Information Capsule No. 93-01C). Miami, FL: Office of Institutional Research at Miami-Dade Community College. (ERIC Document Reproduction Service No. ED 366 394)

Behrman, E. H., & Street, C. (2005). The validity of using a content-specific reading comprehension test for college placement. Journal of College Reading and Learning, 35(2), Spring 2005.

Blanche, P., & Merino, B. (1989). Self-assessment of foreign language skills: Implications for teachers and researchers. Language Learning, 39, 313-340.

Blumenthal, A. J. (2002). English as a second language at the community college: An exploration of context and concerns. New Directions for Community Colleges, 117, 45.

Byrnes, J. P. (1995). Domain specificity and the logic of using general ability as an independent variable or covariate. Merrill-Palmer Quarterly, 41, 1-24.

Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second-language teaching and testing. Applied Linguistics, 1(1), 1-47.


Carroll, J. B. (1961). Fundamental considerations in testing for English proficiency of foreign students. In Testing the English proficiency of foreign students (pp. 31-40). Washington, DC: Center for Applied Linguistics. Also in H. B. Allen & R. N. Campbell (Eds.), Teaching English as a second language: A book of readings (pp. 313-320). New York: McGraw Hill.

Carroll, J. B. (1983). Psychometric theory and language testing. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 80-107). Rowley, MA: Newbury House.

CCCC Committee on Second Language Writing. (2001). CCCC statement on second-language writing and writers. College Composition and Communication, 52(4), 669-674.

Chomsky, N. (1957). Syntactic structures. The Hague: Mouton & Co.

Christensen, L., Fitzpatrick, R., Murie, R., & Zhang, X. (2005). Building voice and developing academic literacy for multilingual students: The commanding English model. In J. L. Higbee, D. B. Lundell, & D. R. Arendale (Eds.), The General College vision: Integrating intellectual growth, multicultural perspectives and student development (pp. 155-184). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, General College, University of Minnesota.

Clark, J. L. D. (1981). Language. In T. S. Barrows, S. M. Ager, M. F. Bennett, H. I. Braun, J. L. D. Clark, L. G. Harris, & S. F. Klein (pp. 25-35), College students' knowledge and beliefs: A survey of global understanding. New Rochelle, NY: Change Magazine Press.

Clifford, M. M. (1976). A revised measure of locus of control. Child Study Journal, 6, 85-90.

Cohen, J. (1988). Statistical power and analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cohen, A. (2002). America's community colleges: On the ascent. Retrieved April 1, 2007, from U.S. Department of State, eJournal USA Web site: http://usinfo.state.gov/journals/itsv/0602/ijse/cohen.htm#top

College Board. (2001). ACCUPLACER coordinator's guide. New York, NY: Author.

College Board. (2003). ACCUPLACER OnLine: Technical manual. New York, NY: Author.

College Board. (2004). ACCUPLACER coordinator's guide. New York, NY: Author.

College of the Canyons. (1994). Predictive validity study of the APS writing and reading tests and validating placement rules for the APS writing test. (ERIC Document Reproduction Service No. ED 376 915)

College of the Canyons Office of Institutional Development. (1996). Disproportionate impact study. Valencia, CA: Office of Institutional Development for College of the Canyons. (ERIC Document Reproduction Service No. ED 401 982)


Coombe, C. (1992). The relationship between self-assessment ratings of functional skills and basic English skills results in adult refugee ESL learners. Unpublished doctoral dissertation, Ohio State University, Columbus.

Crandall, V. C., Katkovsky, W. A., & Crandall, V. J. (1965). Children's belief in their own control of reinforcements in intellectual-achievement situations. Child Development, 69, 91-109.

Culbertson, W. L. (1997). Improving predictive accuracy for placement in entry level college mathematics courses using available student information (Doctoral dissertation, The University of Toledo, 1997). Dissertation Abstracts International, 58, A0395.

Cummings, S. W. (1991). The English placement practices of fifteen selected Southern California community colleges (Doctoral dissertation, University of Southern California, 1991). Dissertation Abstracts International, A52/06, 1958.

Cunningham, J. M. (1983). An evaluation of English placement instruments for first term freshmen at Embry-Riddle Aeronautical University [CD-ROM]. Abstract from: UMI ProQuest Digital Dissertations: Dissertation Abstract Item 8315061.

Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum.

Dornyei, Z. (2003). Attitudes, orientations, and motivations in language learning: Advances in theory, research, and applications. Ann Arbor: Blackwell.

Estrada, L., Dupoux, E., & Wolman, C. (2005). The personal-emotional social adjustment of English-language learners to a community college. Community College Journal of Research & Practice, 29(7), 557.

Evola, J., Mamer, E., & Lentz, B. (1980). Discrete point versus global scoring for cohesive devices. In J. W. Oller, Jr., & K. Perkins (Eds.), Research in language testing (pp. 177-181). Rowley, MA: Newbury House.

Farhady, H. (1983). On the plausibility of the unitary language proficiency factor. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 11-28). Rowley, MA: Newbury House.

Feldt, R. C. (1989). Reading comprehension and critical thinking as predictors of course performance. Perceptual and Motor Skills, 68, 642.

Florida Department of Education. (2005). Developmental education in Florida community colleges: Appendix A. Retrieved February 13, 2007, from Florida Department of Education Web site: http://www.fldoe.org/cc/Vision/PDFs/PR2005_05App.pdf

Florida Statutes. (2003). Title XLVIII, K-20 code, chapter 1008. Retrieved June 15, 2004, from http://www.flsenate.gov/Statutes/index.cfm?App_mode=Display_Statute&URL=Ch1008/ch1008.htm


Ford, D. Y. (1994). An investigation of the paradox of underachievement among gifted Black students. Roeper Review, 16(2), 78-85.

Fouly, K. A., Bachman, L. F., & Cziko, G. A. (1990). The divisibility of language competence: A confirmatory approach. Language Learning, 40(1), 1-21.

Galbraith, F. L. (1986). The use of multiple choice items and holistically scored writing samples to assess student writing ability [CD-ROM]. Abstract from: UMI ProQuest Digital Dissertations: Dissertation Abstract Item 8626947.

Gardner, R. C., & Lambert, W. (1972). Attitudes and motivation in second language learning. Rowley, MA: Newbury House.

Garrow, J. R. (1989). Assessing and improving the adequacy of college composition placement [CD-ROM]. Abstract from: UMI ProQuest Digital Dissertations: Dissertation Abstract Item 8921432.

Gerald, D. (2000). Projections of education statistics to 2010 (chapter 2). Retrieved September 10, 2001, from http://nces.ed.gov/pubs2000/projections/chapter2.html

Gibian, G. (1951). College English for foreign students. College English, 13(3), 157-160.

Goen, S., Porter, P., Swanson, D., & Vandommelen, D. (2002). Generation 1.5. The CATESOL Journal, 14(1), 103-105.

Goodman, J. F., Freed, B., & McManus, W. J. (1990). Determining exemptions from foreign language requirements: Use of the Modern Language Aptitude Test. Contemporary Educational Psychology, 15(2), 131-141.

Grant, R. A., & Wong, S. D. (2003). Barriers to literacy for language-minority learners: An argument for change in the literacy education profession. Journal of Adolescent & Adult Literacy, 46(5), 386-394.

Grunder, P. G., & Hellmich, D. M. (1996). Academic persistence and achievement of remedial students in a community college's college success program. Community College Review, 24(2), 21-33.

Harklau, L. (2000). From the "good kids" to the "worst": Representations of English language learners across educational settings. TESOL Quarterly, 34(1), 35.

Harklau, L. (2003). Generation 1.5 students in college writing. Retrieved December 26, 2004, from Center for Applied Linguistics Web site: http://www.cal.org/resources/digest/0305harklau.html

Harklau, L., Losey, K. M., & Siegal, M. (Eds.). (1999). Generation 1.5 meets college composition: Issues in the teaching of writing to U.S.-educated learners of ESL. Mahwah, NJ: Erlbaum.


Harris, R. J. (1975). A primer of multivariate statistics. New York: Academic.

Hirshy, P., & Mack, Q. (2001). Comprehension of student success among Asnuntuck Community College elementary algebra students placed by ACCUPLACER, Scholastic Aptitude Test score, or prerequisite course. Paper presented at the North East Association for Institutional Research Annual Conference, Cambridge, MA. (ERIC Document Reproduction Service No. ED 465 362)

Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics. Harmondsworth, England: Penguin Books.

Ignash, J. M. (1995). Encouraging ESL student persistence: The influence of policy on curriculum design. Community College Review, 23(3), 17-34.

Isonio, S. (1994). Relationship between APS writing test scores and instructor preparedness ratings: Further evidence for validity. Huntington Beach, CA: Golden West College. (ERIC Document Reproduction Service No. ED 370 617)

Ives, S. (1953). Help for the foreign students. College Composition and Communication, 4(11), 141-144.

James, C. (2006). Validating a computerized scoring system for assessing writing and placing students in composition courses. Assessing Writing, 11(3), 167-178.

Jones, J., & Jackson, R. (1991). The impact of writing placement testing and remedial writing programs on student ethnic populations at Oxnard College (Research Report No. 91-02). Oxnard, CA: Oxnard College. (ERIC Document Reproduction Service No. ED 335 081)

Kessler, R. P. (1987). Can reading placement scores predict classroom performance? A discriminant analysis. Santa Ana, CA: Rancho Santiago Community College District. (ERIC Document Reproduction Service No. ED 291 440)

Krashen, S. (1982). Principles and practice in second language acquisition. Oxford, UK: Pergamon Press.

Lao, R. C. (1980). Differential factors affecting male and female academic performance in high school. Journal of Psychology, 104, 119-127.

Lay, N. D. S., Carro, G., Tien, S., Niemann, T. C., & Leong, S. (1999). Connections: High school to college. In L. Harklau, K. Losey, & M. Siegal (Eds.), Language minority students, ESL, and college composition (pp. 175). Mahwah, NJ: Erlbaum.

LeBlanc, R., & Painchaud, G. (1985). Self-assessment as a second language placement instrument. TESOL Quarterly, 19, 673-687.


Lee, Y. (2005). A summary of construct validation of an English for academic purposes placement test. Retrieved May 6, 2006, from Working Papers in Second or Foreign Language Assessment Web site: http://www.lsa.umich.edu/eli/spaan/papers2005/spaan_working_papers_v3_FULL.pdf

Lefcourt, H. M., VonBaeyer, C. L., Ware, E. E., & Cox, D. J. (1979). The multidimensional-multiattributional causality scale: The development of a goal-specific locus of control scale. Canadian Journal of Behavioral Science, 11, 286-304.

Lofland, J., & Lofland, L. H. (1995). Analyzing social settings: A guide to qualitative observation and analysis (3rd ed.). Belmont, CA: Wadsworth Publishing Co.

Lynch, B., Davidson, F., & Henning, G. (1988). Person dimensionality in language test validation. Language Testing, 5(2), 206-219.

Matsuda, P. K., Canagarajah, A. S., Harklau, L., Hyland, K., & Warschauer, M. (2003). Changing currents in second language writing research: A colloquium. Journal of Second Language Writing, 12(2), 151-179.

Miele, C. (2003). Bergen Community College meets Generation 1.5. Community College Journal of Research and Practice, 27, 603-612.

Moore, R., & Christiansen, L. (2005). Academic behaviors and performances of Generation 1.5 students who succeed in college science courses. The Learning Assistance Review, 10(2), 17-29.

Mortiz, C. (1995). Self-assessment of foreign language proficiency: A critical analysis of issues and a study of cognitive orientations of French learners. Unpublished doctoral dissertation, Cornell University, Ithaca, NY.

Myles, J. (2002). Second language writing and research: The writing process and error analysis in student texts. Retrieved December 13, 2004, from TESL-EJ Web site: http://cwp60.berkely.edu:16080/TESL-EJ/ej22/al.html

Nunnally, J. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Ogden, E. P., & Trice, A. D. (1986). The predictive validity of the Academic Locus of Control Scale for college students: Freshman outcomes. Journal of Social Behavior and Personality, 1, 649-652.

Oller, J. W., Jr. (1972). Scoring methods and difficulty levels for cloze tests of proficiency in English as a second language. Modern Language Journal, 56, 151-158.

Oller, J. W., Jr. (1979). Language tests at school. London: Longman.

Oller, J. W., Jr. (1983). A consensus for the 80's? In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 351-356). Rowley, MA: Newbury House.

Oller, J. W., Jr., & Perkins, K. (Eds.). (1978). Language in education: Testing the tests. Rowley, MA: Newbury House.
Oller, J. W., Jr., & Perkins, K. (Eds.). (1980). Research in language testing. Rowley, MA: Newbury House.
Oller, J. W. (1992). Language testing research: Lessons applied to LEP students and programs. In Proceedings of the Second National Research Symposium on Limited English Proficient Student Issues: Focus on Evaluation and Measurement. Washington, DC: OBEMLA.
Oltman, P. K., Stricker, L. J., & Barrows, T. S. (1990). Analyzing test structure by multidimensional scaling. Journal of Applied Psychology, 75, 21-27.
Ortiz, A. A., & Yates, J. R. (1983). Incidence of exceptionality among Hispanics: Implications for manpower planning. NABE Journal, 7, 41-54.
Padron, E. J. (1997). Entry-level placement scores for the 1996-97 academic year [Memorandum, June 1, 1997]. Miami, FL: Miami-Dade Community College. (ERIC Document Reproduction Service No. ED 409 927)
Park, Y. S. (1998). Locus of control: Attributional style and academic achievement: Comparative analysis of Korean-Chinese and Chinese students. Asian Journal of Social Psychology, 1(2).
Peirce, B. M., Swain, M., & Hart, D. (1993). Self-assessment, French immersion, and locus of control. Applied Linguistics, 14, 25-42.
Phakiti, A. (2005). An empirical investigation into the nature of and factors affecting test takers' calibration within the context of an English placement test. Retrieved May 16, 2006, from Working Papers in Second or Foreign Language Assessment Web site: http://www.lsa.umich.edu/eli/spaan/papers2005/spaan_working_papers_v3_FULL.pdf
Rand, E. (1972). Integrative and discrete-point tests at UCLA. Workpapers in TESL: UCLA, 6, 67-78.
Reid, J. M. (1997). Which nonnative speaker? Differences between international students and U.S. resident (language minority) students. New Directions for Teaching and Learning, 70, 17.
Reimanis, G. (1980). Locus of control and anomie in Western and African cultures. Journal of Social Psychology, 112(2).
Rita, E. S. (1997). The effect of computer-assisted student development programs on entering freshmen locus of control. College Student Journal, 31(1), 80.
Roberge, M. M. (2002). California's Generation 1.5 immigrants: What experiences, characteristics, and needs do they bring to our English classes? The CATESOL Journal, 14(1), 107-129.

Ross, S. (1998). Self-assessment in second language testing: A meta-analysis and analysis of experiential factors. Language Testing, 15(1), 1-20.
Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80, 1-28.
Rotter, J. B. (1975). Some problems and misconceptions related to the construct of internal versus external control of reinforcement. Journal of Consulting and Clinical Psychology, 48, 56-67.
Ruiz-de-Velasco, J., & Fix, M. (2000). Overlooked and underserved: Immigrant students in U.S. secondary schools. Washington, DC: The Urban Institute Press.
Rumbaut, R. G., & Ima, K. (1988). The adaptation of Southeast Asian refugee youth: A comparative study. Final report to the Office of Resettlement. San Diego: San Diego State University. (ERIC Document Reproduction Service No. ED 299 372)
Schmidt, F. L. (1971). The relative efficiency of regression and simple unit predictor weights in applied differential psychology. Educational and Psychological Measurement, 31, 699-714.
Smith, J. M. (1973). A quick measure of achievement motivation. British Journal of Social and Clinical Psychology, 12, 137-143.
Smith, N. B. (1983). The relationship of selected variables to persistence and achievement in a community junior college (Doctoral dissertation, Auburn University, 1983). Dissertation Abstracts International, 44, A0078.
Smittle, P. (1996). ACCUPLACER: Foundation for high school/college testing project to enhance college readiness. Presentation at the annual meeting of the Florida Developmental Education Association, Daytona Beach, FL.
Smoke, T. (2001). Mainstreaming writing: What does this mean for ESL students? In G. McNenny (Ed.), Mainstreaming basic writers: Politics and pedagogies of access (pp. 193). Mahwah, NJ: Erlbaum.
Spack, R. (2004). The acquisition of academic literacy via second language: A longitudinal case study, updated. In V. Zamel & R. Spack (Eds.), Crossing the curriculum: Multilingual learners in college classrooms (pp. 3-17). Mahwah, NJ: Lawrence Erlbaum Associates.
Spann-Kirk, E. M. (1991). A validity assessment of a mathematics placement test used with entering students at Mott Community College (Doctoral dissertation, Wayne State University, 1991). Dissertation Abstracts International, 53, A1044.
Saunders, P. I. (2000). Meeting the needs of entering students through appropriate placement in entry-level writing courses. Saint Louis, MO: Saint Louis Community College at Forest Park. (ERIC Document Reproduction Service No. ED 447 505)

Sawyer, R. (1989). Validating the use of ACT Assessment scores and high school grades for remedial course placement in college (ACT Research Report Series 89-4). Iowa City, IA: ACT. (ERIC Document Reproduction Service No. ED 322 163)
Shepard, S. (2006). Locus of control and academic achievement in high school students. Psychological Reports, 98(2), 318-322.
Smittle, P. (1995). Academic performance predictors for community college student assessment. Community College Review, 23(2), 37-43.
Swanson, C. (2004). Between old country and new: Academic advising for immigrant students. In I. M. Duranczyk, J. L. Higbee, & D. B. Lundell (Eds.), Best practices for access and retention in higher education (pp. 73-81). Minneapolis, MN: Center for Research on Developmental Education and Urban Literacy, General College, University of Minnesota.
Thonus, T. (2003). Serving Generation 1.5 learners in the university writing center. TESOL Journal, 12(1), 17-24.
Tremblay, P. F., & Gardner, R. (1995). Expanding the motivation construct in language learning. Modern Language Journal, 79, 505-518.
Trice, A. D. (1985). An academic locus of control scale for college students. Perceptual and Motor Skills, 61, 1043-1046.
Trice, A. D., Ogden, E. P., Stevens, W., & Booth, J. V. (1987). Concurrent validity of the academic locus of control scale. Educational and Psychological Measurement.
Troike, R. (1969). Receptive competence, productive competence, and performance. In Papers of the 20th Georgetown University Round Table. Washington, DC: Georgetown University Press.
Upshur, J. A. (1973). Context for language testing. In J. W. Oller & K. Perkins (Eds.), Research in language testing (pp. 200-213). Rowley, MA: Newbury House.
Upshur, J. A., & Homburg, T. J. (1983). Some relations among language tests at successive ability levels. In J. W. Oller, Jr. (Ed.), Issues in language testing research (pp. 188-202). Rowley, MA: Newbury House.
Valdes, G. (1992). Bilingual minorities and language issues in writing. Written Communication, 9, 85-136.
Valencia Community College. (2007). VCC college catalog. Retrieved April 1, 2007, from 2006-2007 Official Catalog Web site: http://valenciacc.edu/catalog/05-06/default.htm
Valencia Community College. (2006). Who are we: Fast facts & statistics. Retrieved February 13, 2007, from Valencia Community College Web site: http://valenciacc.edu/AboutUs/whoweare/fastfacts.cfm

Wang, D. (2005). Students' learning and locus of control in Web-supplemental instruction. Innovative Higher Education, 30(1), 67-83.
Weiner, B. (1992). Human motivation: Metaphors, theories, and research. Newbury Park, CA: Sage.
Wink, J. (1999). Critical pedagogy: Notes from the real world (2nd ed.). Boston: Allyn & Bacon.
White, E. M. (1990). Language and reality in writing assessment. College Composition and Communication, 41(2), 187-200.
Wolcott, W. (1996). Evaluating a basic writing program. Journal of Basic Writing, 15(1), 57-69.
Wolcott, W., & Legg, S. M. (1998). An overview of writing assessment: Theory, research, and practice. Urbana, IL: National Council of Teachers of English.
Zamel, V. (2004). Strangers in academia: The experiences of faculty and ESOL students across the curriculum. In V. Zamel & R. Spack (Eds.), Crossing the curriculum: Multilingual learners in college classrooms (pp. 3-17). Mahwah, NJ: Lawrence Erlbaum Associates.
Zinn, A. (1988). Ideas in practice: Assessing writing in the developmental classroom. Journal of Developmental Education, 22(2), 28-39.

BIOGRAPHICAL SKETCH

James May has been an instructor of English for academic purposes (EAP) at Valencia Community College in Orlando, Florida for the past six years. Before that, he worked for three years as an adjunct ESL instructor at Santa Fe Community College in Gainesville, Florida. James earned his Master of Education degree in ESOL from the University of Florida in 1999. Prior to that, he was a Korean and Spanish linguist for the U.S. Army. His undergraduate degree, a B.A. in Spanish literature, is also from the University of Florida. James has traveled the world extensively, has taught English as a foreign language in both Mexico and Korea, and has studied Spanish, Portuguese, Korean, and American Sign Language. While earning tenure at Valencia, James developed a special English as a second language program to teach English to the deaf. Although his true passions are learning and teaching language, James also writes, trains, and consults on how best to work with second language populations at the community college level.