Assessment procedures for students entering Florida community colleges


Material Information

Title:
Assessment procedures for students entering Florida community colleges: theory and practice
Physical Description:
xiv, 192 leaves ; 28 cm.
Language:
English
Creator:
Ramey, Luellen, 1950-
Publication Date:
1981
Subjects

Subjects / Keywords:
Prediction of scholastic success   ( lcsh )
Community colleges -- Florida   ( lcsh )
Counseling in higher education -- Florida   ( lcsh )
Counselor Education thesis, Ph.D.
Dissertations, Academic -- Counselor Education -- UF
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1981.
Bibliography:
Bibliography: leaves 184-190.
Statement of Responsibility:
by Luellen Ramey.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 028120486
oclc - 07809271
System ID:
AA00014216:00001


Full Text

ASSESSMENT PROCEDURES FOR STUDENTS ENTERING FLORIDA COMMUNITY
COLLEGES: THEORY AND PRACTICE

BY

LUELLEN RAMEY

A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL
OF THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

1981

Copyright 1981

by

Luellen Ramey


Dedicated to my family--both my Ramey family and my extended family of friends. Each one has shared a unique part of my journey.

ACKNOWLEDGEMENTS

This dissertation has been possible with the assistance of several people. Dr. Paul Fitzgerald, my doctoral committee chairman, has provided me with guidance and support.

My appreciation is extended to other members of my advisory committee. Dr. Harold Riker has been especially helpful with the organization and editing of the manuscript. Dr. John Nickens has been helpful through his knowledge of the community colleges and research methodology. Dr. Donald Avila's belief in my ability has encouraged me in my efforts.

Other friends and colleagues have offered valuable assistance and support. Dr. Rubye Beal has been especially helpful in instrument development. Dr. Joe Fordyce has offered helpful suggestions and kind support. Dr. Pat Korb, Dr. Paul Wheeler, Dr. Ed Blankenship, Larry Humes, Linda Hague, and Don Mott have offered valuable feedback and encouragement.

I would like to thank the Florida Community Junior College Inter-Institutional Research Council (IRC) for their funding and direction. Special thanks are extended to Al Steuart for his assistance with data analysis and to Lydia Quinn for her cooperation and secretarial services. Appreciation is given to the IRC member researchers for their assistance in instrument development.

I would also like to thank the community college personnel who participated in this study.

My gratitude is extended to my many friends and my family for their moral support, encouragement, and love. Ray and Sarah Lamont's support has been felt throughout, as has Loraine McCosker's.

TABLE OF CONTENTS

Page

ACKNOWLEDGEMENTS  iv
LIST OF TABLES  ix
ABSTRACT  xi

I. INTRODUCTION  1
   Purpose of the Study  9
   Need for the Study  10
   Importance of the Study  12
   Definition of Terms  13
   Organization of the Study  16

II. REVIEW OF THE RELATED LITERATURE  18
   Historical and Philosophical Foundations of the Community College  20
   Challenges to and Modifications of the Open Door Policy  22
   The Development of Community College Student Assessment as Part of Florida's Educational Accountability  27
   The National History of Competency-Based Education  31
   Rationale for Competency Testing  32
   Issues Regarding Competency Testing  34
   Student Assessment in the Community College  37
   Uses of Student Assessment by Colleges and by Students  39
   Community College Entering Student Assessment  43
   Summary  45

III. METHODOLOGY  50
   Overview  50
   Research Questions  52
   Procedures  52
   Population  52
   Instrumentation  54
   Data Collection  59
   Analysis of Data  61
   Part I  61
   Part II  62
   Limitations and Assumptions  62

IV. RESULTS  64
   Overview  64
   Results of Survey of Entering Student Assessment Procedures: Part A  66
   Job Title Demographics  66
   Subject Areas Assessed  67
   Assessment Instrument Data  70
   How Assessment is Administered  70
   Additional Student Assessment Factors  73
   Community College Programs That Are Not Open Admission  75
   Admissions Criteria for Programs That Are Not Open Admission  79
   Assessment Program Costs  81
   Additional Areas of Assessment  84
   Results of Survey of Entering Student Assessment Procedures: Part B  87
   Primary Assessment Instruments for Reading Comprehension  87
   Primary Assessment Instruments for English Writing Ability  88
   Primary Assessment Instruments for English Usage  93
   Primary Assessment Instruments for Mathematical Computation Skills  98
   Student Groups Assessed by Primary Assessment Instruments  107
   Primary Assessment Instrument Selection  107
   Selection Factors for Primary Assessment Instruments  107
   Factors Determining Cut-Off Scores for Placement  114
   Additional Instruments  114
   Results of Entering Student Assessment Opinionnaire  117

V. SUMMARY AND CONCLUSIONS  131
   Summary and Discussion  132
   Part I: The Current State of Student Assessment Programs  133
   Part II: Perceptions of Program Coordinators Toward Assessment Policies and Procedures  142
   Conclusions  146
   Implications  151
   Recommendations for Further Research  152

APPENDICES  156
   A. THE FLORIDA COMMUNITY JUNIOR COLLEGE INTER-INSTITUTIONAL RESEARCH COUNCIL  156
   B. SURVEY OF ENTERING STUDENT ASSESSMENT PROCEDURES  159
   C. LETTER OF TRANSMITTAL WITH SURVEY OF ENTERING STUDENT ASSESSMENT PROCEDURES  165
   D. STRUCTURED INTERVIEW GUIDE TO ENTERING STUDENT ASSESSMENT PROCEDURES  167
   E. ENTERING STUDENT ASSESSMENT OPINIONNAIRE  172
   F. LETTER OF TRANSMITTAL WITH ENTERING STUDENT ASSESSMENT OPINIONNAIRE  177
   G. COMMUNICATIONS TO INTER-INSTITUTIONAL RESEARCH COUNCIL REPRESENTATIVES  179
   H. LIST OF RESPONDENTS  182

BIBLIOGRAPHY  184

BIOGRAPHICAL SKETCH  191

LIST OF TABLES

Page

Table 1. Job Title of Person Completing Survey of Entering Student Assessment Procedures by Absolute Frequency and Percentage  68

Table 2. Subject Areas in Which First-Time-in-College Students Are Assessed by Absolute Frequency and Percentage  69

Table 3. Responses of Sample to Statements Regarding Assessment Instruments by Absolute Frequency and Percentage  71

Table 4. Responses of Sample to Statements Regarding the Administration of Assessment by Absolute Frequency and Percentage  72

Table 5. Factors Besides the Results of Assessment Instruments That Are Considered as Part of First-Time-In-College Student Assessment by Absolute Frequency and Percentage  74

Table 6. Existing Community College Programs That Are Not Open Admission by Absolute Frequency and Percentage  76

Table 7. Admissions Criteria Used by Colleges for Programs That Are Not Open Admission by Absolute Frequency and Percentage  80

Table 8. Estimated Cost Per Student of Assessment Programs  82

Table 9. Additional Areas in Which First-Time-In-College Students Are Assessed by Absolute Frequency and Percentage  85

Table 10. Primary Assessment Instruments Used by Colleges for Reading Comprehension  89

Table 11. Primary Assessment Instruments Used by Colleges for English Writing Ability  94

Table 12. Primary Assessment Instruments Used by Colleges for English Usage  99

Table 13. Primary Assessment Instruments Used by Colleges for Mathematical Computation Skills  103

Table 14. Student Groups Assessed by Primary Assessment Instruments for Each Subject Area by Absolute Frequency and Percentage  108

Table 15. Group or Individual Selecting Primary Assessment Instruments for Each Subject Area by Absolute Frequency and Percentage  110

Table 16. Selection Factors for Primary Assessment Instruments for Each Subject Area by Absolute Frequency and Percentage  112

Table 17. Absolute Frequency and Percentage of Factors Determining Cut-Off Scores for Placement of Students in Developmental Courses for Each Subject Area  115

Table 18. Response of Sample to Position With the Community College by Absolute Frequency and Percentage  118

Table 19. Response of Sample to Extent of Involvement in the Assessment Program for Entering Students by Absolute Frequency and Percentage  120

Table 20. Responses of Sample to Entering Student Assessment Opinionnaire by Absolute Frequency, Mean, Median, and Standard Deviation  121

Table 21. Responses of Sample to Entering Student Assessment Opinionnaire by Absolute Frequency and Percentage of Scale Choices  126

Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

ASSESSMENT PROCEDURES FOR STUDENTS ENTERING FLORIDA COMMUNITY
COLLEGES: THEORY AND PRACTICE

By

Luellen Ramey

March 1981

Chairman: Dr. Paul Fitzgerald
Major Department: Counselor Education

The purpose of this study was to analyze student assessment programs in Florida's community and junior colleges in terms of theory, practice, and policy. Coordinators of student assessment programs were surveyed for data relating to current programs as well as for their opinions regarding the major issues involved in assessment programs for entering students.

The need for this study originated from recent Florida legislation. As part of Florida's system of educational accountability, institutions of higher education have been mandated to address the achievement of college-level communication and computation competencies. One of the problems in measuring educational effects or gains by students in the community colleges is that one must first know the state of the students' abilities as they enter an educational program. At this time, there is no standardized entering student assessment procedure throughout the state and, until this study, no data were available as to current assessment procedures for students entering Florida community colleges. This study provides baseline data on this subject to community college administrators, student personnel workers, the State Department of Education, and the Florida Legislature.

This study consisted of two parts: Part I surveyed the current state of student assessment in Florida's community and junior colleges; Part II investigated the opinions of student assessment coordinators regarding the issues involved in assessing entering students and what they thought the assessment program should be. Data for Part I were collected from responses of student assessment program coordinators to the Survey of Entering Student Assessment Procedures and the Structured Interview Guide to Entering Student Assessment Procedures. Data for Part II were collected from the Entering Student Assessment Opinionnaire. All three instruments were developed by the researcher based on the purpose of this study, the research questions, and findings in the literature relating to current assessment issues. Data for both Part I and Part II were analyzed using subprograms of the Statistical Package for the Social Sciences.
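
The resulting tables report, for each survey item, the absolute frequency and percentage of respondents choosing each answer. A minimal sketch of that tabulation, with Python standing in for the SPSS subprograms actually used and with hypothetical item data, is:

```python
from collections import Counter

def frequency_table(responses):
    """Absolute frequency and percentage for one survey item,
    in the style of the frequency tables this study reports."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {choice: (n, round(100.0 * n / total, 1))
            for choice, n in counts.items()}

# Hypothetical responses to one item (not data from the study):
item = ["reading", "reading", "mathematics", "writing", "reading"]
print(frequency_table(item))
# -> {'reading': (3, 60.0), 'mathematics': (1, 20.0), 'writing': (1, 20.0)}
```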

The analysis of Part I indicated that assessment programs at Florida community and junior colleges vary in all dimensions, including skill areas assessed, selection of instruments, the objectives and uses of the results of assessment, cost and administration of assessment, specific student groups assessed, and requirements for admission into selective admission programs.

From the responses of coordinators of Florida community and junior college student assessment programs, the following represent major conclusions:

1. Coordinators agree that there should be an assessment program for entering students.

2. Their opinions are divided as to whether or not assessment instruments, practices, and policies should be standardized throughout the state; slightly more are against standardization than are for it.

3. Coordinators do not favor a standardized policy for the use of placement criteria and cut-off scores at all community colleges.

4. There is a tendency to agree that students should be selected for limited enrollment programs based on assessment scores, but they tend to be against the use of assessment scores to select students for all courses.

5. They strongly agree that no Florida resident with a high school diploma or a GED should be denied admission to a Florida community college as a result of assessment scores.

6. There is a tendency to agree that the "open door" does not contribute to a lowering of academic standards.

7. They have a slight tendency to believe that assessment programs are not discriminatory to minority students.

Implications of the results of the data analysis are also discussed.

CHAPTER I

INTRODUCTION

The decade of the 1980's has produced for community and junior colleges tighter budgets, public accountability demands, further declines in standardized test scores, and a change to a more conservative political climate. These factors have combined to bring about an emphasis at both the state and the national level on the attainment of minimum competencies in educational programs. A brief history of the competency-based movement in Florida provides the foundation for understanding the origin and intent of recent Florida legislative mandates for community college assessment.

Florida has been committed to educational accountability since the late sixties (Fisher, 1978). The state began developing programs and laws when various reports and special study groups revealed a lack of data about school effectiveness. Florida lawmakers, educators, and the Department of Education staff adjusted the educational accountability statutes from session to session as they struggled to create a workable system. The House of Representatives reviewed the accountability programs during late 1975 and early 1976. Under the leadership of the House Education Committee, the staff conducted a full investigation of the accountability laws, with particular emphasis on statewide assessment.

Meanwhile, key members of the Senate Education Committee were becoming deeply interested in the quality of education Florida students were receiving. They were concerned because students were being promoted and graduated from school without minimal reading, writing, and arithmetic skills. When the House and Senate interests came together in conference committees, the 1976 Educational Accountability Act resulted and was passed unanimously (Fisher, 1978).

The 1976 act mandated new minimum graduation standards for the class of 1978-79--specifically, the accumulation of a minimum number of course credits as required by the local district, mastery of the basic skills, and satisfactory performance on functional literacy tests (Laws Relating to Florida Public Education Enacted by the 1976 Legislature, 1976). It was decided that the Florida Statewide Assessment Program would become the focal point for determining student mastery of the basic skills and functional literacy. The program had previously tested all public school students in grades 3 and 5, but the new act expanded it to grades 3, 5, 8, and 11.

A decision was made to split the grade 11 assessment program into two components. The first would be a preliminary screening device, while the second would be a test of functional literacy. Students would be expected to pass "The Test" within four tries or receive a certificate of attendance rather than a diploma. Twenty-four skills were measured in the functional literacy component--13 in mathematics and 11 in communication skills (Fisher, 1978; Graves, 1978). It was hoped that minimum-competency tests would help guarantee that students would no longer automatically pass through schools simply on the basis of social promotion. Legislators and educators tried to ensure that in return for the time spent in schools, students would be guaranteed some minimum amount of learning in terms either of "school skills" or "life competencies" (Haney & Madaus, 1978). Yet this enthusiasm for competency testing poses a contradiction because it comes just at a time when various questions are being raised about, and criticisms leveled against, tests and the uses of tests, such as those addressed by Houts (1977).

This concern about the outcomes of education was extended to higher education by 1979 Florida Statute, Chapter 79-222. The statute instructs the State Board of Education

    to adopt for public universities and from time
    to time modify minimum standards of college-
    level communication and computation skills
    generally associated with successful performance
    and to approve tests and other assessment
    procedures which measure student achievement
    of those skills. (Section 107, subsection (2) of
    section 229.053)

There is particular interest in community college outcomes since the historical philosophy of the community colleges has been one of an "open door policy" of admission as well as an historical commitment to remediation (Cross, 1971; Knoell, 1966; Kaster, 1979; Medsker & Tillery, 1971; Rippey & Roueche, 1977; Roueche, Baker, & Brownell, 1971). Remediation, however, is quite costly. At a time of increasing inflation, an erratic economy, higher educational costs, and competing demands for other public services, the gap between community college philosophy and practical reality widens.

Although Florida has expanded elementary and secondary assessment with the 1976 Accountability Act, the Florida Legislature is concerned about the extensive need which still exists for the funding of developmental/remedial programs for community college students. Grant and Hoeber (1978) caution that attempting to account for the existence of developmental/remedial or basic skills programs in postsecondary education is not merely a matter of claiming that students are less well prepared. Those institutions which have gone to open admissions have found it necessary to equalize more than just access to college. Thus, changes in institutional policy have necessitated a change in curriculum, and basic skills programs have come into existence. Recognition that the open door could quickly become the revolving door for a high-risk student has caused institutions to move toward the establishment of developmental or basic skills programs.

Florida has at least four possible alternatives for dealing with academically low-skilled community college students in the future: 1) refuse admission; 2) admit these students but do nothing remedial; 3) fund limited developmental/remedial courses in basic skills; or 4) fund extensive remediation programs at the community college level. All of these options have student, faculty, and financial implications.

Some possible outcomes of the above alternatives can be predicted. If, for example, extensive remedial programs were funded at the community college level, more tax dollars would have to be legislated for this purpose. While the community colleges would be adhering to their philosophy of access to higher education for all, there could be no reduction in budgeting for higher education. This approach would also pose the problem of what to do with the person who does not improve even when developmental courses are available.

The most obvious result of refusing admission to high-risk students is that fewer students would be served by higher education; that is, some students would be denied access to higher education. This in itself is counter to the national trend that developed in the 1960's that all persons should have access to higher education, as evidenced by the Carnegie reports and greater federal aid to institutions and to students.

Rippey and Roueche (1977) point out that low-skilled students with special needs fall into the category of "non-traditional students." Where at one time these students were recruited in part due to the Full Time Equivalency (FTE) income they generated, the end to recruitment of non-traditional students could serve to alleviate the necessity of having to deny their admission. Initially these students served to maintain the growth era of the community colleges. Now, however, largely due to inflation, states are searching for ways to reduce the amount of money budgeted for higher education. The FTE dollars generated by these students are now often offset by the remedial and financial services needed by these students. Rippey and Roueche (1977) predict that easing the recruitment of non-traditional students would please some college constituents. They state that:

    the traditional teaching faculty has rarely
    accepted remedial courses even when they were
    broadened and called developmental. Therefore,
    elimination of the so-called high-risk student
    is calculated to please most of the faculty,
    provided of course, there are enough "good
    students" to ensure their own positions. (p. 57)

Perhaps for different motives some community college administrations would also be pleased to lose the recruiting problems--the need for "extra services" that increase costs, and the frequent administrative problems that usually accompany the servicing of non-traditional students. Internal forces within the community college might, at least silently, welcome the absence of non-traditional students. External forces might also appreciate that no more tax monies will be expended on persons who "should not be in college anyway." State legislatures, boards of trustees, and most other power groups within communities have shown little interest over the years in broadening access or supporting programs to compensate anyone with special needs (Rippey & Roueche, 1977). Easing recruitment would affect ethnic minorities, poor persons, and other classes of non-traditional college students. In such a situation, higher education in the United States would return to its traditional role of "select and sort." Rippey and Roueche (1977) caution us to ask ourselves if it makes sense to deny access to those who need it most. The issue goes beyond community colleges and beyond education as a social institution. They say:

    low achievers, functional illiterates, and other
    "push-outs" from an elitist system of education
    will force the long-term costs of crime, welfare,
    race relations, and related social problems higher
    than our society can afford. National economic
    vitality, education, unemployment, welfare, criminal
    justice systems and the tax laws that support them
    are all integral parts of our overall national
    social system. (Rippey and Roueche, 1977, p. 58)

The concept of accountability demands that the success of the community college be judged by the results in terms of both student retention and achievement (Roueche, Baker, & Brownell, 1971). It would follow that, if a college admits students but does nothing remedial, the college is hardly being accountable.

Purpose of the Study

The purpose of this study was to analyze entering student assessment programs in Florida's community and junior colleges in terms of theory, practice, and policy. The study addressed the following questions:

1. What are the objectives of entering student assessment programs? What should the objectives be?

2. Who decides what entering student assessment programs are composed of? Who should decide?

3. What procedures are a part of entering student assessment programs? What procedures should be a part?

4. How are the results of assessment used? How should the results be used?

5. How are entering student assessment programs administered? How should they be administered?

6. How should student assessment programs be evaluated?

The focus is therefore twofold: 1) toward the current practices, policies, and procedures for assessing students entering the community colleges; and 2) toward the ideal practices, policies, and procedures as perceived by the college personnel who administer student assessment programs.

Student assessment programs are discussed in terms of their congruence with the open door philosophy of the community college system and in relation to accountability concerns and competency testing.


Need for the Study

In light of current societal and financial changes, educational accountability will continue to receive emphasis. Since the passage of 1979 legislation, Florida institutions of higher education must now respond directly to the Florida Legislature to address the achievement of college-level communication and computation competencies. Because of this requirement, it is necessary to assess the skill level of students entering Florida's community colleges. At this time, there is no standardized entering student assessment procedure throughout the state, nor are there available data as to what the current assessment practices and policies are at each of Florida's 28 community colleges. This study makes available current information regarding policies and practices of student assessment programs. Those interested include college administrators, student personnel workers, the State Department of Education, and the Florida Legislature.

Since student assessment is typically a function of Student Affairs or Student Development, the responsibility of implementing assessment programs usually is delegated to counselors and student personnel workers. Results of a 1979 survey of student assessment practices in member colleges of the League for Innovation in the Community College, a national consortium of community colleges, point out the involvement of counselors in the assessment process. This survey indicated that in many of the community colleges, counselors assisted in selecting tests or devising campus-produced assessment instruments; counselors often administered the assessment; and most often it was counselors who reported and interpreted results and advised students on the basis of their results (Student Assessment for Academic Success, 1979). Clearly, these assessment and accountability demands involve community college counselors and student personnel workers.

The review of literature revealed no studies that surveyed community college student assessment policies and practices other than the one referred to above, and no statistics were computed with the data of that study. It was surprising not to find studies of student assessment practices, particularly considering that the review of literature does indicate that minimum competency testing now exists in some form in public education in all fifty states, and legislation is increasing in this direction.

Importance of the Study

Florida Statute Chapter 79-222 has mandated specific activities as a part of the system of educational accountability. The Articulation Coordinating Committee, a state committee which deals with concerns related to the transfer of community college students to four-year colleges, is undertaking the task of defining and maintaining a list of college-level communication and computation skills associated with successful student performance, which will be submitted to the State Board of Education for approval. This committee is also required to maintain a listing of tests and other assessment procedures which measure and diagnose student achievement of college-level skills. And, finally, the Articulation Coordinating Committee is required to supply to the State Board of Education and the State Legislature data which reflect achievement of college-level communication and computation competencies by students in state universities and community colleges. Community college student assessment data should provide vital information to these working committees.

A Standing Committee on Student Achievement of Communication and Computation Competencies has been formed by Department of Education officials. This committee will recommend to the Articulation Coordinating Committee types of tests that can be used to assess these competencies. Basic to these recommendations, of course, is knowledge of current student assessment policies and procedures, which this survey provides. Specific information describing the existing state of student assessment would enable educators to identify problems and evaluate current practices. These survey data should be of benefit to the Articulation Coordinating Committee in making future plans and decisions in regard to student assessment.

This study provides baseline data to the Florida community colleges. A number of community colleges have expressed an interest in having assessment data available to them. These data should enable the colleges to be informed so that they can provide recommendations to these state committees proactively, rather than merely reacting to policies that affect their programs but were made at the state level without the colleges' direct input.


Definition of Terms

The terms listed below are defined as follows for the purpose of this study.

Academic Achievement: increased student knowledge, understanding, and intellectual skills, including written and oral communication skills (Lenning, 1977, p. 1).

Accountability: the concept of being held responsible or liable in finance or business operations or instruction and student learning; it can be applied to the activities of an individual, a department, a division, or an institution (Wilson, 1971).

Achievement: a change in status that is positive in nature (Lenning, 1977, p. 3).

Basic skills student, high-risk student, or non-traditional student:

    one who has not acquired the verbal and mathematical,
    and full range of cognitive skills required for
    collegiate-level work. Generally, he (or she) is a
    student whose grades fall in the bottom half of his
    high school class, who has not earned a (college
    preparatory) diploma, and is assigned to a high school
    which has a poor record for student achievement, or
    who has been tracked into a general, commercial, or
    vocational high school program. Such a student will
    generally rank low in such traditional measures of
    collegiate admissions as SAT board scores, high school
    class average standing. . . . (Gordon, 1976, p. 4)

College Benefits: any attainments from college programs (Lenning, 1973).

Competency-based Education: educational programs which carefully specify their desired objectives and then assess student achievement of the specified objectives (Forrest, 1977; Knott, 1975; Trivett, 1975).

Developmental Courses: courses offered for the purpose of remediation of basic skills, such as mathematics, reading, and writing skills.

Entering Student Assessment: a student assessment (using the Lenning definition below) of incoming students to the community college.

Entering Students: first-time-in-college or beginning students in the community college.

First-Time-In-College Students: students who have not previously been enrolled in postsecondary education.

Functional Literacy: application of basic skills to problems encountered in everyday life (Minimum Student Performance Standards for Florida Schools, Grades 8 and 11, 1977, p. iv).

Minimum Competency Testing: any program of assessment, evaluation, certification, or testing that is designed to determine whether individual students have reached a minimum level of performance predetermined as satisfactory (Graves, 1978, p. 33).

Remedial Education: educational programs aimed at overcoming academic deficiencies (Cross, 1976, p. 31).

Student Assessment: the measurement, analysis, appraisal, and evaluation of the attainment or increase of some desired and intended accomplishment for individuals or groups of students (Lenning, 1977, p. 4).

Organization of the Study

The remainder of this study is presented in four chapters, plus appendices. Chapter II, the review of literature, traces the background and development of community college student assessment programs. A brief historical view of the philosophical foundations of the community college is presented, followed by a discussion of current challenges to and modifications of the open door philosophy. The following section reports on the development of community college student assessment as part of Florida's educational accountability. This discussion includes facets of the competency-based movement: its history, its rationale, and issues regarding competency testing. Types of assessment to be made and uses of student assessment by colleges and by students are considered. The review reports results from a recent survey of student assessment programs of member colleges of the League for Innovation in the Community College.

In Chapter III, the methods and procedures used in the development of the study are presented. Chapter IV reports the statistical findings of the study. Chapter V contains a summary and discussion of the results, conclusions, limitations of the study, and recommendations for further study. The appendices include information on the Florida Community Junior College Inter-Institutional Research Council, the survey instruments, and the letters of transmittal mailed with the instruments.

CHAPTER II

REVIEW OF THE RELATED LITERATURE

This review of literature is focused on the background and development of community college student assessment programs. The chapter begins with a brief historical view of the philosophical foundations of the community college and continues with current challenges to and modifications of the open door philosophy. The following section is directed toward the development of community college student assessment as part of Florida's educational accountability. A discussion of competency testing as related to accountability and academic achievement is included. Types of assessment and uses of student assessment by colleges and by students are considered.

The review concludes with findings from a 1979 survey of student assessment programs of member colleges of the League for Innovation in the Community College. The League, a consortium of 17 community colleges throughout the country, is based in Los Angeles. Those community colleges which choose to become members of the League collaborate on projects and share information of interest within their colleges. In Florida, only Santa Fe Community College belongs to the League for Innovation.

Florida Statute Chapter 79-222 designates as one of the duties of the state board:

    to adopt for public universities and community
    colleges and from time to time modify minimum
    standards of college-level communication and
    computation skills generally associated with
    successful performance and to approve
    tests and other assessment procedures which
    measure student achievement of those skills.
    (Section 107, subsection (2) of section 229.053)

Basic to assessing mastery of learning at the community college is knowledge of student skills at entrance (Lenning, 1977). When educators assess the learning of a student or the effectiveness of a program, change in learning level becomes an essential focus, as indicated by Hartnett (1971, p. 14):

    Almost all proponents of educational accountability
    tend to favor a 'value-added' concept. That is,
    institutions should be judged not by their outputs
    alone, but by their outputs relative to their inputs.
    The students' final standing with regard to various
    characteristics would not be as important as their
    changes (usually gains) during the college years.

As stated by Cooley (1974, p. 33):

    One of the best established, yet frequently ignored
    principles in the assessment of educational effects
    is that the state of students' abilities and motives
    as they enter an educational program is always the
    strongest prediction of what they will achieve in
    that program.

Cooley stresses that ambiguity is the only possible result of assessing educational effects unless a measurement is made prior to the initiation of that educational program.
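
Stated as a formula (the notation here is mine, not Hartnett's or Cooley's), the value-added view scores an institution on gains rather than on exit levels:

\[
\text{gain}_i = Y_i^{\mathrm{exit}} - Y_i^{\mathrm{entry}}
\]

where \(Y_i\) is the measured skill level of student \(i\). Cooley's principle is that \(Y_i^{\mathrm{entry}}\) must be measured before instruction begins; without it, \(\text{gain}_i\) cannot be separated from the entering ability that best predicts achievement.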


Historical and Philosophical Foundations of the Community College

The public community college in the United States has been described as the only educational institution that can be truly considered an American social invention (Gleazer, 1963). Sometimes called "democracy's college," it adopted a philosophy of equal educational opportunity for all and advocated an ideal of open admissions (Roueche, Baker, & Brownell, 1971).

Higher education first became available to more than the privileged elite with the creation of land grant colleges by the Morrill Act of 1862. This act gave substance to the concept that each individual, regardless of economic or social status, should have the opportunity to progress educationally as far as interests and abilities might permit (Roueche, 1968).

Gleazer (1970) indicates that the belief in extending educational opportunities to all people led to a philosophy of the "open door" that became the hallmark of the community college movement. The community college's "democratic style, positive philosophy, and social promise appealed to the American people and won great popularity and support. The unprecedented educational benefits accompanying the G. I. Bill of Rights after World War II further enhanced and expanded the community college movement" (Roueche et al., 1971, p. 10).

This movement was also founded on the conviction that colleges exist to serve the society that supports them. "Education helps to equalize opportunity by stressing the concept of individual worth and serving as a vehicle for personal and social advancement" (Roueche et al., 1971, p. 10).

Postsecondary education in the United States today is a vital national need to which community colleges are in a unique position to respond (Roueche et al., 1971). The role of unskilled workers is becoming less important as technological society grows more complex. There are now few jobs available for high school graduates who have no other training (Cohen, 1969). Conversely, there is a national demand for individuals trained in highly technical skills. Since the nation cannot afford to waste human resources, it becomes imperative that educational institutions provide essential skills for all students (Bloom, 1968).

Two-year college students are more likely than university students to come from the lower two-thirds of the socio-economic spectrum (Roueche et al., 1971). The community college "open door" performs a vital service in removing barriers to education. Geographic location of academic institutions is a crucial factor in education. Colleges constructed within commuting distance of potential students extend educational accessibility to the total population (Roueche, 1968). Modest community college fees remove financial barriers and provide an economical avenue to higher education. However, Cross (1969) states that even if all geographical and financial barriers could be eliminated, racial minorities, women, and those from low socio-economic classes would still be underrepresented. The concept of accountability demands active efforts to seek, recruit, enroll, and retain every possible student in the community. Wattenbarger and Goodwin (1962) emphasize that the community college must make readily available programs of education that meet a wide spectrum of community needs and relate economically to the total patterns of educational opportunity in the area.


Challenges to and Modifications of the Open Door Policy

The number of public community colleges increased from 656 in 1961 to 1,100 in 1970 (Medsker and Tillery, 1971). Between 1948 and 1968 community college student enrollments rose by over one million (Department of Health, Education and Welfare, 1970). In the 1970's, 50 percent of those completing high school engaged in some type of postsecondary education. Nationally, one-third of those entering higher education started in a community college (Karabel, 1972). In Florida, almost two-thirds of enrolled college freshmen were in community colleges (Sawyer & Nickens, 1980).

These statistics and a substantial body of knowledge (Cross, 1968; Medsker & Trent, 1965; Willingham, 1970) reflect favorably on the community college. However, now that the community college is widely recognized as a community-based, open door college providing an upward extension of educational opportunities (Kaster, 1979), it is faced with the critical challenge of becoming accountable for its unfulfilled potential by translating ideals into reality.

The open door policy implies acceptance of the concept of universal higher education. According to Roueche et al. (1971), community colleges have become the primary means of economic and social advancement for the lower segment of the population. The typical student body is an extremely diverse population that is often drawn from backgrounds characterized by lower social and economic status, lower educational achievement, marginal employment, and limited participation in community affairs. Students from these environments are disadvantaged to the extent that their culture has failed to provide them with experiences typical of the students that traditional colleges customarily serve. The community college must recognize that a considerable number of disadvantaged, low-aptitude students in its student body create unique problems that require changes in traditional curriculum and instructional techniques. Accountability demands that the success of the community college be judged by results. Student success, both retention and achievement in college, is an accurate measure of the open door policy (Roueche et al., 1971).

Dropout rates at community colleges generally are astonishing. The typical city community college reports annual student attrition rates of more than 50 percent (Cohen, 1969). As many as 75 percent of the lower-achieving students withdraw during their first year (Schenz, 1963). In the typical California junior college, 80 percent of the entering students enrolled in remedial English, but only 20 percent later enrolled in regular college English classes (Bossone, 1966). Remedial courses are generally poorly designed, poorly taught, and seldom evaluated adequately (Roueche et al., 1971). A national survey of community colleges revealed that although 91 percent of the institutions espoused the concept of the open door, only 55 percent provided programs appropriate for non-traditional students (Schenz, 1963).

While growth in community college enrollment in the last decade symbolizes the success of the community college movement, it also provides the basis for a threat to its existence (Kaster, 1979). Kaster explains that as community colleges attempt to provide more services, taxpayers are beginning to be alarmed at what appears to them to be an unlimited expansion of the cost of this educational system. This situation has been complicated by an erratic economy, rising inflation, and competing demands for other public services. Legislative reactions in Florida are evidenced by enrollment caps, funding reductions, challenges of curriculum relevance, and accountability demands favoring centralized control (Kaster, 1979).

Rippey and Roueche (1977) indicate that there are crucial implications of reduced funding for the open door commitment of the community college. In the early 1970's, when hit by both the inflation spiral and an unprecedented recession, several states began to search for ways to reduce the amount of money budgeted for higher education. One easy and obvious solution used by some colleges was to place a lid on student enrollments. So the community colleges' very success in locating new sources of students has hastened the advent of enrollment ceilings. What will be the consequences in community college enrollments from these externally imposed enrollment limits? The most obvious result is that fewer students will be served by higher education; that is, some students will be denied access to higher education (Rippey & Roueche, 1977). This in itself is counter to the national policy that developed in the 1960's that all persons should have access to higher education, as evidenced by greater federal aid to institutions and to students. The crucial question is: who will be denied access to higher education? Rippey and Roueche (1977) propose that the nation's answer to this dilemma is to deny access to none but rather to stop recruiting non-traditional students. "With enrollment limits, it is likely that the community colleges will begin to ease recruitment of ethnic minorities, poor persons, and other classes of non-traditional college students" (p. 58). Community colleges are only now beginning to identify and serve those constituents in their communities who need educational opportunity most desperately. Rippey and Roueche (1977) point out the absurdity of providing access to only those who need it least while doors are closed off to those who have no foreseeable options.

In 1960, the California state college system implemented one of the first major open admissions programs in the United States. By the late 1960's, 68 percent of California high school graduates were entering college. No evidence exists that its system of higher education has suffered as a result of open admissions (Harrison & Rayburn, 1979).

Cross (1971) points out that the success or failure of an open admissions program depends upon remediation and advising. A study of open admissions-type students at the University of Detroit (Harrison & Rayburn, 1979) resulted in findings that demonstrated that as students were provided academic assistance, adequate counseling, and ample financial aid, open admissions-type students significantly achieved in the basic skill areas of reading, math, and language. Karabel (1972) states that "both evidence and common sense indicate that nothing inherent in open admissions will bring about a lowering of academic standards" (p. 39).


The Development of Community College Student Assessment as Part of Florida's Educational Accountability

In order to understand the development of community college entering student assessment in Florida, it is necessary to provide a background of the actions of the Legislature and the State Department of Education in the 1970's. Florida's accountability movement began with a statewide attack on functional illiteracy among high school graduates. A competency testing approach to determine student mastery of basic skills was implemented (Fisher, 1978). This movement towards "competency-based education" in the elementary and secondary schools has resulted in many implications and potential consequences for college admissions, curricula, graduation requirements, and student learning (Spady, 1978).

Florida has been committed to educational accountability since the late sixties (Fisher, 1978). The state began developing programs and laws when various citizen reports and special study groups revealed a lack of commitment to goals and a lack of data about school effectiveness. Florida's accountability laws were passed in a period when many states developed accountability policies. But in Florida the legislature did more than pass a law or two and then forget the issue. The lawmakers, with input from Florida educators and the Department of Education staff, adjusted the educational accountability statutes from session to session as they struggled to create a workable system (Fisher, 1978).

The House of Representatives reviewed the accountability programs during late 1975 and early 1976. Under the leadership of the House Education Committee, a staff conducted a full investigation of the accountability laws, with particular emphasis on statewide assessment. This staff was critical of the way in which certain aspects of the laws had been implemented (House of Representatives, 1976). At the same time, key members of the Senate Education Committee were becoming interested in the quality of education public school students were receiving. They were concerned because students were being promoted and graduated from school without minimal reading, writing, and arithmetic skills. When the House and the Senate interests merged in conference committees, the 1976 Educational Accountability Act was written and passed unanimously. The act was quite comprehensive and went far beyond merely referring to minimum requirements and functional literacy (Fisher, 1976).

The 1976 act mandated new minimum graduation standards for the class of 1978-79. These minimum standards not only included a minimum number of course credits, but also mastery of the basic skills and satisfactory performance on functional literacy tests (Department of Education, 1976). It was decided that the Florida Statewide Assessment Program would become the focus for determining student mastery of the basic skills and functional literacy. The program had previously tested all public school students in grades 3 and 5, but the new act expanded it to grades 3, 5, 8, and 11 (Fisher, 1978; Graves, 1978; Tyler, 1978).

The Florida State Department of Education (SDE) decided to split the grade 11 assessment program into two components. The first would measure student mastery of the basic skills only as a preliminary screening device. This component focused on practical problems and tasks. The second component, the test of functional literacy, would be the true hurdle for students. It would measure 24 skills--13 in math and 11 in communication skills (Department of Education, 1977). Students would be expected to pass "the test" within four tries or receive only a certificate of attendance rather than a diploma (Fisher, 1978; Glass, 1978; Brandt, 1978; Graves, 1978; Tyler, 1978). Florida State Assessment Program officials enlisted the assistance of the Educational Testing Service of Princeton, New Jersey, in drafting test items. After revision, pretest, review, and changes, the test was ready by May 1977 (Fisher, 1978). The first administration of the Functional Literacy Test stirred controversy due to the high failure rate of students. The mathematics and communications failure rates eventually hit 36 percent and 8 percent respectively, causing Time magazine to say, "Florida Flunks" (Time, 1977). While some applauded the test, others made serious criticisms (Graves, 1978). After examining the manual for standard-setting for the test, Glass, a leading educational researcher, concluded that the test items had "never been validated as measures of 'survival skills', and the pass-fail standards were set mindlessly and capriciously" (Glass, 1978, p. 605).

The 1976 Accountability Act also mandated remedial instruction for all students who needed assistance. The state provided a compensatory education program that distributed several million dollars to local districts to provide remedial instruction (Fisher, 1978; Glass, 1978).

The National History of Competency-Based Education

In 1972 the Oregon State Board of Education took an important first step in redefining the basis of high school graduation requirements (Spady, 1978). The board established a framework for students to meet locally determined "competency" standards in three broad areas. Starting in 1978, graduation from Oregon's high schools was to be contingent on satisfying attendance, course-credit, and the new minimum competency requirements in basic math and communication skills. This action became a catalyst for similar policy debates and changes across the country. Although the Oregon policy was criticized by a variety of constituencies for quite diverse reasons, it symbolized the beginning of the "competency-based education" movement in United States high schools. By April 1978, at least thirty-three states had adopted some kind of policy requiring that students achieve minimum competencies in basic skills in addition to passing a sufficient number of courses as conditions for receiving a diploma (Spady, 1978; Pipho, 1978).

There are fairly uniform themes that characterize the rationale behind these actions (Spady, 1978). One is that both school grading standards and the diploma have lost their credibility. A second is that the knowledge, skills, and competencies of recent high school graduates, particularly those who have gone on to college, are far lower than they should be. A third is that some mechanism must be implemented that will improve minimum levels of student achievement and document those gains in objective terms. A fourth is that more attention must be given to developing capacities that are essential to young people as they face the realities of life outside the school (Spady, 1978).

Minimum competency testing now looks different than it did in 1975-1977. What began as a startling idea in Oregon, California, and Florida has now arrived in some form in each state. In general, the minimum competency testing movement has been mellowing with age. Much of the action has now switched from the state legislatures to state departments of education, state and national education groups, and school districts (Pipho, 1978).


Rationale for Competency Testing


According to Haney and Madaus (1979, p. 463) the

"enthusiasm for competency tests stems from a belief

that the testing of essential skills and competencies

will help raise academic standards and increase edu-

cational achievement." On the surface, the idea of

minimum-competency testing is immensely attractive.

It is hard for anyone to argue against competence. In

a time of wide-spread concern over deteriorating












educational standards, validated by declining SAT scores,

systematic assessment of students' competence certainly

seems to make sense. Students who are certified

minimally competent would avoid the suspicion that they

are products of a faltering educational system, and

students who fail competency tests can theoretically

receive remedial help in order to gain the competencies

and skills they need to enter the world of work. However,

many questions arise regarding implementation. The

primary unresolved problems concerning minimum-competency-

testing programs include the following three: 1) the

definition of competencies; 2) the specifications of minimal

competencies, and 3) the testing of minimal competencies.

Another explanation for the support of minimum-com-

petency-testing is the shift in political climate from

liberal to conservative thinking on education (Haney and

Madaus, 1978). It could be called a shift from concern

over equality to concern over excellence or a shift from

educational equity to educational achievement. In part

the minimum-competency movement is one aspect of the

back to basics movement, part of a backlash against the

"open education" philosophy of the 1960's (Kilpatrick, 1977).

The focus on testing perhaps stems in part from the

fact that it is reform that "is clearly being led or

pushed, by non-educators" (Pipho, 1978, p. 586). As

non-educators, enthusiasts of competency testing are free













to focus on the results and to pay little attention to

the processes by which they might be achieved.

Another perspective on the phenomenon of minimum-

competency-testing is rising concern over the costs of

education (Haney and Madaus, 1978). Public education

is by far the largest and most expensive undertaking of

state and local governments, accounting for more than

one-third of their direct expenditures. Local school

districts receive about half of their total revenues

from local taxes, and per pupil expenditures have more

than doubled in the past decade (Golladay, 1977). Not

only is the public apparently reluctant to pay more, but

it is increasingly demanding proof of the return on the

expenditures it is already making. Since test scores

are one of the most convenient of educational measures,

such demands more often than not get translated into

calls for more testing (Haney and Madaus, 1978).

The rapid rise of the minimum-competency-testing

movement is due largely to a merging of interests in the

idea. Conservatives support

it because of concerns over costs, and liberals favor

it because it promotes higher-quality education (Haney and Madaus, 1978).


Issues Regarding Competency Testing


Pipho (1978) discusses some of the contradictions

and controversies of the minimum-competency-testing













movement. Some of the most apparent controversies would

include the following: 1) With implementation deadlines

scheduled all the way up to 1985, it may be years before

anyone knows whether the mandates for statewide minimum

competency standards have really helped to improve student

achievement or instruction. 2) The minimum-competency-

testing movement is clearly being led by non-educators.

3) There appears to be an assumption that school skills

will make an automatic transfer to on-the-job skills.

4) Minimum-competency-testing programs have in some cases

been mandated for local districts and state departments

of education with little or no financial support and little

understanding of the cost of expensive remediation.

5) National standards and national testing would violate

the principle of a locally controlled American educational

system.

Brickell (1978) points out that adopting a policy

on minimum-competency-testing requires answering a number

of major questions that would include the following:

1. What competencies will be measured--school skills

or life skills or both?

2. How will these competencies be measured? The

possibilities range from testing through experience

in actual performance situations to testing with

paper and pencil. Brickell indicates that as the

individual moves away from actual performance situations













in life and moves toward paper and pencil, testing

becomes easier and cheaper, but the test results

become less likely to predict later success. Thus

a student can fail a minimum competency paper-and-

pencil test but succeed in the actual performance

situations of real life. Another decision is whether

to develop one's own test or use what is available.

3. Will competencies be measured during school or at

the end of school?

4. Will one minimum standard apply to all students or

will ability, special talents, family background

and other factors known to affect the learning of

students be considered?

5. How high will the minimum be? This question brings

up several other related questions. Suppose remedi-

ation doesn't work; then what? How many students

can a state afford both economically and politically

to remediate, or not promote, or not graduate if

remediation fails? If students cannot achieve the

minimum will the minimum be lowered to meet the

students?

6. Are the minimum competencies for students or for

schools?

Thus, there are important issues to consider

regarding a minimum-competency-testing program.











Student Assessment in the Community College


Community college student assessment can be divided

into two categories: 1) entering student assessment

which describes students when they enter the college, and

2) assessment which reflects what happens to students

after entry. This second type of assessment is important

in that it is a measure of what learning has been mastered

and it serves as a measure of readiness for further edu-

cation (Lunneborg, 1977).

Lunneborg (1977) suggests that there are five areas

in which information should be gathered about students

at entry. The first area would be prior educational

record--coursework taken and grades earned. The next

two areas would include achievement testing and assess-

ment of both general and specific aptitudes. Measure-

ment should also include nonintellective or noncognitive

personal characteristics that affect readiness for learning.

This last area is of particular importance since only

about one-third of the very high attrition among community

college students is explained on "intellective" grounds (Monroe,

1972). These personal characteristics should not be over-

looked; Cross (1972) points out that the lack of self-

confidence is so great that fewer than one-third of

community college students are confident of their ability

to handle coursework. Caughren (1973) studied motivation

of community college students and emphasized the importance












of the measurement profession's concerning itself more

with constructing tests that assess creativity,

persistence, interests, values, attitudes, and

manual and artistic skills. He points out that these

nonintellective, personal characteristics have relevance

not only in a student's choice of a particular educational

or vocational goal, but also in the degree of success

the student might expect in the particular chosen venture.

The area given the most attention by community college

assessment programs, however, is the measurement of edu-

cational proficiencies and academic skills (Lunneborg,

1977). Monroe (1972) reports that at least 75 percent

of community colleges do some diagnostic testing of

skill in communication and computation. A number of

standardized testing programs are available. Examples

of these programs would include the Comparative Guid-

ance and Placement Program of the College Entrance

Examination Board (CEEB), the American College Testing

(ACT) Assessment, and the Sequential Tests of Educa-

tional Progress and Cooperative Achievement Tests

developed by the Educational Testing Service. Where

standard placement tests do not match educational

objectives, local instruments are constructed for the

same purposes. Lunneborg (1977, p. 28) states that

"local characteristics should have a major influence

on the development of any student assessment system.











The description of learning goals, instructional resources,

and student clientele, unique in some detail to the insti-

tution, will all interact to shape a testing program as

it matures."


Uses of Student Assessment by Colleges and by Students


"Assessment" is used to mean the measurement, analysis,

appraisal, and evaluation of the attainment or increase

of some desired and intended accomplishment for individuals

or groups of students (Lenning, 1977). Student assessment

is generally considered to be a continual process and is not

a one-time affair which takes place either near or at the

start of an academic term (McCrary, 1979). However, one of

the problems in measuring educational effects or gains in

the community college is that one must first know the state

of students' abilities and motives as they enter an educational

program. That knowledge is the strongest predictor of what

students will achieve in that program (Cooley, 1974). There-

fore, basic to any continued assessment must be the initial

assessment of entering students.

Student assessment can be used in a number of specific

practical ways by community colleges and their students

(Miller and Prince, 1976). Lenning (1977) discusses several

major uses of this information that will be briefly

summarized in the following sections.

Student placement in courses. One of the most common

uses of assessment data in many community colleges is for












course placement purposes. Assessment scores are used to

separate entering students into those who need general

remedial work, those who should enter the regular sequence

of the program, and those who will be allowed to exempt

certain first-level courses in the sequence if they desire

to do so. Entering student assessment data, such as

American College Testing Program Assessment or Comparative

Guidance and Placement scores, along with high school grades

and successful completion of particular high school courses,

are all academic achievement data that are used by many

community colleges for initial course placement of students

(Lenning, 1977). The community colleges are confronted with

an extremely wide variety of educational backgrounds. It has

been reported, for example, that within a typical college's

transfer program, reading levels among enrollees may range from

Grade Four to Grade Fourteen (Lunneborg, 1977).

Evaluating institutional effectiveness. Increasingly,

institutions of postsecondary education are being called upon to be accountable

for the educational benefits they provide for students

(Lenning, 1974). "Evaluation of the effectiveness of colleges

and their programs, and relating that effectiveness to program

costs, is the acknowledged way to gather the evidence being

demanded to show that colleges are doing worthwhile jobs"

(p. 7). Overall assessment of the academic achievements of

students as a group is an important part of any evaluation.

Lenning (1977) indicates that these evaluations should include












the effect of student academic achievement on such specific

areas as curriculum and instructional methods and techniques.

Student advisement and counseling. By providing feed-

back to students about their skill level, instructors,

counselors, and academic advisors can help students explore

the realism of their plans, identify areas of weakness that

need particular attention, and determine the need for par-

ticular courses of action. This feedback to students can

assist them in their self-understanding, which in turn leads

to greater individual responsibility and self-direction that

enables students to plan realistically for their futures.

This idea is well summed up by Miller and Prince (1976, p. 52),

"Growing is a cumulative business. Knowledge of an in-

dividual's, group's, or organization's present status is a

prerequisite to a planned change."

Planning learning experiences. In order to plan effective

future learning experiences, institutional personnel must

understand the level of development of groups of students,

and of individual students within the groups. This know-

ledge can assist institutional and program staff in planning

future learning experiences, not only for these particular

students, but also for students entering the program in

the future.

Identifying student problems. Some assessment instruments

may serve a diagnostic function in that they test specific

skill areas which indicate proficiency in a particular area.












Identification of low skill areas for particular students

or a group as a whole can indicate whether remedial or

developmental courses are necessary. Diagnostic instruments

can identify students' strong and weak areas and can thereby

help students improve in their areas of weakness and build

on their areas of strength.

Additional uses. On-going student assessment, of

course, is used for the purpose of assigning grades. And

lastly, on-going assessment is used as a means of evaluating

innovative instructional styles and materials (Lenning, 1977).

These two uses of student assessment will not be discussed

in any detail since these assessments are on-going rather

than entering assessment procedures.

Although counselors and other student personnel workers

have traditionally used assessment results to help them

better understand and assist the students with whom they are

working, over the past few years there has been a strong

move to help students use such data themselves without the

assistance of the counselor or academic advisor (Lenning,

1977). Miller and Prince's (1976) theory of student

development indicated that the goal of assessment for student

development:


is to help students understand their current
patterns of behavior, emphasizing positively
the specific skills they have instead of the
ones they lack. From this base, all students
can move toward increased self-direction. . . .
Assessment programs must be designed with











students rather than for or about them; there-
fore, only information that can directly in-
crease students' self-understanding or improve
their self-direction need be collected. The
primary focus of many student assessment
efforts has been to help student affairs workers
better understand their clients. . . . Although
this objective is desirable, it has tended to
create volumes of information about students
that is rarely used directly by them. (pp. 48-49)



In summary, for assessment programs to have impact

it seems necessary to consider thoroughly the purposes

for gathering assessment information when selecting or

developing instruments (Airasian and Madaus, 1972; Lenning,

1977). Student assessment can improve the decisions of

several groups. Students can more realistically plan

their educational and occupational careers with an

appreciation of their chances of success in their chosen

area. Colleges can best plan their programs, organize

their curricula, attempt to meet the needs of specific

groups, and evaluate the success of their efforts when

they have comprehensive information about the skill levels

of their students. And the success of the college in meeting

the needs of its students can be partially evaluated on the

basis of student progress (Baird, 1977).


Community College Entering Student Assessment


The most recent and useful data on entering student

assessment in community colleges were compiled by the

League for Innovation in the Community College (1979).











Sixty-three League college representatives met in Dallas,

September 25-28, 1979, to discuss the state of the art of

assessing students for academic success. This conference

marked the culmination of a project undertaken by a League

Fellow to study assessment policies in League colleges.

The diversity of community college student assessment programs

was brought out in Richard McCrary's summary of this con-

ference. His comment was that, in his opinion, the subtitle

for the conference should have been "Trends Toward Diversity."

His conference summary reports that the number of goals of

student assessment "appear to be as varied as the number of

institutions represented" (McCrary, 1979, p. 1). This

diversity of goals may, of course, be related to the lack

of standardized instruments available. In many cases, campus-

produced instruments are utilized because there is not an

appropriate standardized instrument available.

There was some quasi-agreement about what areas should

be assessed: English and math. However, beyond that, the

agreement breaks down. Some colleges assess for reading,

others for music, others for chemistry, and the list can be

extensive. Little if any agreement exists about who should

be assessed. Campuses themselves have difficulty in agreeing

on this issue.

The area most closely agreed upon was that concerning

the parameters of testing. Testing should be 1) quick,













2) easily administered by paraprofessional or clerical staff,

and 3) easily scored in order that results are known as soon

as possible.

No agreement was reached on how assessment results should

be used regarding course selection or placement of students

in courses. Some administrators felt that if one of the

college's goals is to assist students in making realistic

decisions, then assessment results should be used to place

students in courses. Other administrators felt that the

student should have the final choice regarding course selection.

Since so much diversity existed in the areas assessed,

it is no surprise that the instruments used were also

extremely diverse. No statistics were calculated from the

League survey. However, reading appeared to be the area in

which standardized instruments are most often used. Among

standardized instruments, the Nelson-Denny was reported most

often. Beyond that, it

would be difficult to identify "common" instruments for

assessment since the diversity is so great.


Summary


In this chapter the researcher reviewed the literature

relevant to the background and development of community

college student assessment programs as they now exist in

Florida. Florida Statute has mandated assessment of student













achievement of communication and computation skills.

Measurement of student skills at entrance is essential to

assessing gain in learning.

A brief historical view of the philosophical foundations

of the community college was presented. Central to the

philosophical foundation of the community college is the

"open door policy" which advocates the ideal of open

admissions and educational opportunity for all. The com-

munity college movement was founded on the conviction that

public colleges exist to serve the society that supports them.

However, now that the community college has become widely

recognized as a community-based, open door college providing

an upward extension of educational opportunities, it is

faced with the critical challenge of becoming accountable

for its potential by translating ideals into reality. As

community colleges attempt to provide more services, dis-

enchanted taxpayers are beginning to wonder what they are

getting for their tax money. Legislative reactions in

Florida may be found in such actions as enrollment caps,

funding reductions, challenges of curriculum relevance, and

accountability demands favoring centralized control. One of

the serious consequences of reduced funding for the community

colleges is that fewer students can be served; that is, some

students will be denied access to higher education. This,

of course, contradicts the open door policy.












The review of literature revealed how community college

student assessment programs have developed as a part of

Florida's educational accountability. The movement began

with a statewide attack on functional illiteracy among high

school graduates. A competency testing approach to determine

student mastery of basic skills was implemented. This in-

terest in competency-based education then extended to the

elementary and postsecondary levels. The thrust toward

mastery of minimum competence in basic skills has resulted

in many implications and potential consequences for college

admissions, curricula, graduation requirements and student

learning.

It was shown that the enthusiasm for competency tests

stems from a belief that the testing of essential skills

and competencies will help raise academic achievement. Thus,

liberals are favoring the minimum-competency-testing move-

ment because it promotes higher-quality education, and con-

servatives support it because of concerns over costs.

The review made apparent important issues to consider

regarding a minimum-competency-testing program. Contro-

versies include issues such as what competencies should be

measured; how and when these competencies will be measured;

if one minimum standard can be applied to all students;

how high the minimum will be; and whether minimum competencies

are for students or for schools. Other controversies include

the assumption that school skills make an automatic transfer












to on-the-job skills; financial support of minimum-competency-

testing programs; and adherence to the principle of a locally

controlled American educational system.

Community college student assessment was shown to be

divided into two categories: entering student assessment

which describes students when they enter the college and

assessment which reflects what happens to students after

entry. There are five major areas in which information should

be gathered about students at entry. These areas would

include: prior educational record--courses taken and

grades earned; achievement testing and assessment of both

general and specific aptitudes; measurement of nonintellective

or noncognitive personal characteristics that affect readi-

ness for learning; measurement of educational proficiencies

and academic skills. This last area is given the most

attention by community college assessment programs, par-

ticularly in communication and computation skills.

A number of specific uses of student assessment by

colleges and by students were discussed in the review of

literature. One of the more common uses of assessment

data in many community colleges is for course placement

purposes. Assessment scores are used to separate entering

students into those who need general remedial work, those

who should enter the regular sequence of the program, and

those who will be allowed to exempt certain first-level

courses in the sequence. Evaluating institutional












effectiveness and relating that effectiveness to program

costs has become a common use of assessment data. Assess-

ment results are also used in student advisement and counsel-

ing. These data can also be used to assist institutional

staff in planning future learning experiences. Some assess-

ment instruments may serve a diagnostic function in that

they test specific skill areas which indicate proficiency

in a particular area. Identification of low skill areas for

particular students or a group as a whole can indicate

whether remedial or developmental courses are necessary.

Finally, on-going student assessment is used to assign

grades and as a means of evaluating innovative instructional

styles.

The review of literature concluded with a summary of

the state-of-the-art workshop on student assessment for

academic success held in Dallas, Texas, in September 1979.

Participants in this workshop were personnel from member

colleges of the League for Innovation in the Community

College. The results of this conference revealed such a

diversity in assessment practices in League colleges that

few generalizations could be made in regard to the goals

of assessment programs, instruments used in the assessment

process, the administration of assessment, use of the results

of assessment, and evaluation of assessment programs.

















CHAPTER III


METHODOLOGY


Overview


The purpose of this study was to investigate and analyze

entering student assessment programs in Florida's 28 com-

munity and junior colleges in terms of theory, practice,

and policy.

The preceding review of literature points out how such

factors as tighter budgets, public accountability demands,

declines in standardized test scores, and a change to a more

conservative political climate have combined to bring about

an emphasis at a state and national level on the attainment

of minimum competencies in educational programs. The back-

ground of community college assessment in Florida indicates

that initial concern was focused on competencies for high

school graduates, although broadened to include periodic

assessments through the elementary and secondary levels.

This concern has recently expanded to the community college

and university level with the 1979 legislative mandates

for assessment.

Although assessment has been mandated at both entry

and exit levels, there exist no comprehensive data to














indicate current community college student assess-

ment program policies and procedures. This study provides

these data on current entering student assessment programs.

This research is a descriptive study consisting of two

parts. Part I addresses the current state of student assess-

ment in Florida's community and junior colleges. Assessment

programs are investigated and described in regard to who is

assessed, what skill areas are assessed with what instruments,

at what cost to whom, and how assessment results are used in

decision-making for students and institutions. The data

required for these analyses were obtained through the use of

the Survey of Entering Student Assessment Procedures (Appendix

B) distributed to all student assessment program coordinators

at community colleges in the state of Florida. Nonrespondents

were interviewed by the researcher by telephone using the

Structured Interview Guide to Entering Student Assessment

Procedures (Appendix D).

Part II addresses the opinions of student assessment

coordinators in regard to the issues involved in assessing

entering students. Data for the analysis of Part II were

obtained from the Entering Student Assessment Opinionnaire

(Appendix E) which was also distributed to all student

assessment program coordinators at Florida community and

junior colleges.

This chapter describes the research questions addressed

by this study, the population and sampling procedures, the












instruments used, the methodological procedures and the data

analysis. A statement of assumptions and limitations regarding

this study will conclude the chapter.


Research Questions


Since student assessment is an area of research that has

been little examined before, there is no basis for predictions

concerning the results. For this reason, research questions

rather than hypotheses are posed. The following are pertinent

to this study.

1. What are the objectives of entering student assess-

ment programs? What should the objectives be?

2. Who decides the composition of entering student

assessment programs? Who should decide?

3. What procedures are a part of entering student

assessment programs? What procedures should be a part?

4. How are the results of assessment used? How should

the results be used?

5. How are entering student assessment programs

administered? How should they be administered?

6. How should student assessment programs be evaluated?


Procedures


Population

Since this study is a case study of Florida community

colleges, the population for this study included all 28













community and junior colleges in the state. Respondents to

the Survey of Entering Student Assessment Procedures and the

Entering Student Assessment Opinionnaire were coordinators of

student assessment or the coordinator's designee at each of

these colleges.

Florida community and junior colleges include the

following: Brevard Community College, Broward Community

College, Central Florida Community College, Chipola Junior

College, Daytona Beach Community College, Edison Community

College, Florida Junior College at Jacksonville, Florida Keys

Community College, Gulf Coast Community College, Hillsborough

Community College, Indian River Community College, Lake City

Community College, Lake-Sumter Community College, Manatee

Junior College, Miami-Dade Community College, North Florida

Junior College, Okaloosa-Walton Junior College, Palm

Beach Junior College, Pasco-Hernando Community College,

Pensacola Junior College, Polk Community College, St. Johns

River Community College, St. Petersburg Junior College, Santa

Fe Community College, Seminole Community College, South

Florida Junior College, Tallahassee Community College and

Valencia Community College.

Only St. Johns River Community College did not par-

ticipate in Part I of the study. One hundred percent

return was obtained on the Entering Student Assessment

Opinionnaire of Part II of this study.











Instrumentation

The instruments utilized in Part I of this study are

the Survey of Entering Student Assessment Procedures (Appendix

B) and the Structured Interview Guide to Entering Student

Assessment Procedures (Appendix D). These instruments

were designed to collect data on current policies and practices

regarding assessment of entering students at community and

junior colleges in Florida. Part II utilizes the Entering

Student Assessment Opinionnaire to collect data on the per-

ceptions of student assessment coordinators in regard to the

issues involved in assessing entering students.

All three instruments were developed by the researcher

for the purposes of this study. In order to obtain content

and face validation, the researcher proceeded in the following

manner when developing the instruments for Part I.

1. A preliminary version of the instrument was

developed from the research questions that were posed.

Each question appearing on the Survey of Entering Student

Assessment Procedures was designed to provide data that

answered these pertinent questions. The questions appearing

on the Structured Interview Guide of Entering Student Assess-

ment Procedures paralleled those of the original instrument.

Since the second form was for interview purposes, it differed

only in that open-ended questions were asked rather than

presenting a statement with possible responses to choose from.

Each question on this second form had a one-to-one correspon-

dence with items on the Survey of Entering Student Assessment

Procedures.












2. Drafts of the two instruments were presented at

a regularly scheduled meeting of the Inter-Institutional

Research Council, composed of institutional researchers

of member colleges of the consortium of community colleges.

These 12 researchers were given a purpose statement of this

study and copies of the instruments and asked to clarify

and revise these survey instruments in writing. The re-

searchers were requested to indicate the particular term-

inology that was most consistent with that used in the

community colleges in order to avoid unnecessary confusion.

3. The suggestions offered by the panel of experts

were incorporated into a revision of the instruments.

4. The revised instrument was mailed to the 12

researchers for their final input and corrections.

5. These comments were used to make final revisions

to the instruments.

A preliminary version of the Entering Student Assess-

ment Opinionnaire for Part II was developed from both issues

arising in the review of literature and the research

questions that were posed. Copies of the draft of this

instrument were mailed to the same 12 Inter-Institutional

Research Council members for their clarification and revision

in writing. Their suggestions were again incorporated into

the development of the final instrument.

The instruments for the two parts of the study are

described below.












Part I. The Survey of Entering Student Assessment

Procedures was developed by the researcher for the purposes

of this study to collect pertinent data on current policies

and practices regarding assessment of entering students at

community and junior colleges in Florida.

The survey requested the following information of

each college:

1. Demographic data: Name of institution, name of

person completing form, his/her title, and date.

2. Part A focused on identifying the subject areas

for which assessment instruments are administered; how

assessment instruments are currently administered in that

college; what other factors, besides the results of assess-

ment instruments, are considered as part of entering student

assessment; whether colleges have programs that are not

open admission, and if so, what the criteria are for

admission to these programs; what the cost of assessment

per student is; who assumes the cost of assessment; and

whether or not assessment of study skills, self-concept, or

career interest is part of the assessment program.

3. Part B was completed for each area assessed. This

part focused on what specific student groups are given

assessment instruments, what instruments are currently

being used to assess each particular subject area; whether

the instrument is standardized or campus-produced; how and













by whom this instrument was selected as an appropriate

assessment tool; what the reasons were for selecting

particular instruments; whether or not the results of

these assessment instruments are used to make decisions

about placing students in developmental/remedial courses;

whether or not placement of students in developmental/

remedial courses is mandatory or voluntary; what cut-off

scores are used for placement of students in developmental/

remedial courses; what research basis there is for devel-

oping cut-off scores; and whether or not the results of

assessment instruments are used for exemptions, honors

courses or credit by examination. A copy of this form is

found in Appendix B.

The Survey of Entering Student Assessment Procedures

was mailed with a cover letter and a stamped return envelope

to the coordinators of student assessment programs at all

community and junior colleges in Florida. The survey forms

were returned by mail to the Inter-Institutional Research

Council (IRC) at the University of Florida.

The Structured Interview Guide to Entering Student

Assessment Procedures was an interview guide that paralleled

the format of the Survey of Entering Student Assessment

Procedures. It was used with only Manatee Junior College and

South Florida Community College, since these colleges did not

respond to the mailed survey. A copy of this form is found

in Appendix D.











Part II. The Entering Student Assessment Opinionnaire

was developed by the researcher for the purposes of this

study to measure the attitudes and opinions of student

assessment coordinators in regard to the issues involved

in assessing entering students.

The Entering Student Assessment Opinionnaire consisted

of 33 items. Item 1 requested the respondent's position

with the community college and item 2 requested the re-

spondent's degree of involvement with the assessment program

for entering students. Item 3 addressed the sample's per-

ceptions as to whether or not Florida community colleges should

have an assessment program for entering students. Items 4 and

5 addressed the issue of standardization of policy and practice

throughout the state. Items 6 through 12 addressed the

perceptions of the sample regarding the goals and objectives

of assessing entering students. Item 13 addressed the involve-

ment of faculty members in the development of colleges'

assessment programs. Items 14 and 15 addressed the perceptions

of the sample regarding the use of assessment scores for

student selection for programs. Items 16 through 22

addressed perceptions of issues related to what assessment

programs should be. Items 23 through 25 addressed perceptions

regarding the use of the results of assessment. Items 26

through 29 addressed the perceptions of the sample regarding

the administration of student assessment. Item 30 addressed

the evaluation of assessment programs. Items 31 through 33












related attitudes about assessment policies to the philosophy

of the community college. All items were developed from issues

arising in Chapter II of this study and the research

questions that were posed.

Respondents were instructed to respond to the Entering

Student Assessment Opinionnaire using a Likert rating scale.

The scale consisted of five choices: Strongly agree -- 5;

Agree -- 4; No opinion -- 3; Disagree -- 2; Strongly disagree

-- 1. Choices were made by darkening the selected choice

on a computer-readable answer sheet.

The Entering Student Assessment Opinionnaire was mailed

with a cover letter (Appendix F), answer sheet, and a stamped

return envelope to the same individuals (coordinators of

student assessment programs or their designees) who responded

to the Survey of Entering Student Assessment Procedures

utilized in Part I. The opinionnaires were returned to the

IRC office at the University of Florida.

Data Collection

Part I. Data concerning the current practice and

policies of assessing entering students in Florida community

colleges were collected by using the Survey of Entering

Student Assessment Procedures. The instrument in its final

form was mailed to each of Florida's 28 community and junior

colleges named previously. Multi-campus institutions were

treated as separate institutions where varying assessment

procedures exist at the different campuses.












The 12 researchers of the Inter-Institutional Research

Council, who represent member colleges of the consortium,

were asked to supply the name of the coordinator of student

assessment for their respective colleges. Other student

assessment coordinators were identified through the Florida

Education Directory. If not located in the directory, a

telephone call was made to that campus in order to identify

the coordinator of student assessment.

The mailing included a letter of transmittal and the

survey instrument. The letter of transmittal appears in

Appendix C. The mailing also included a target return date

and a stamped envelope for return to the IRC office at the

University of Florida.

Beginning two weeks after the mailing, follow-up

telephone calls were made to non-respondents. If a response

was not then received, another telephone call was made to

nonrespondents. At this time, questions from the Structured

Interview Guide to Entering Student Assessment Procedures

were read to the coordinator of the assessment program and

data were collected via a structured telephone interview.

The questions on the Structured Interview Guide paralleled

the questions on the original survey instrument. Data were

collected via the Survey of Entering Student Assessment

Procedures from all colleges except Manatee

Junior College and South Florida Community College, which

responded to the Structured Interview Guide.










The responses to these instruments were recorded and

analyzed for presentation in this study. Tables developed

from the responses of these surveys are presented in Chapter IV.

Part II. Perceptions regarding issues involved in

assessing entering students were obtained from the Entering

Student Assessment Opinionnaire administered to the coordinators

of student assessment programs at all Florida community and

junior colleges. This instrument was mailed with a cover

letter (see Appendix F), answer sheet, and a stamped return

envelope. The opinionnaires were returned to the IRC office

at the University of Florida. The responses were recorded

for statistical analysis.


Analysis of Data

Part I.

Data from the Survey of Entering Student Assessment

Procedures and the Structured Interview Guide to Entering

Student Assessment Procedures were collected and coded.

Frequencies and percentages were computed on each variable

using subprograms of the Statistical Package for Social

Sciences (Nie, Hull, Jenkins, Steinbrenner, & Bent, 1975).

All data except cost data were treated with subprogram

CROSSTABS which computed two-way joint frequency tables of

all variables by college. Cost data were treated with sub-

program CONDESCRIPTIVE which computed cost by college.
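
To make these two computations concrete, the following is a

minimal sketch in modern Python (pandas) rather than the SPSS

subprograms actually used; the column names and data values are

hypothetical illustrations, not data from this study.

     # Sketch paralleling subprograms CROSSTABS and CONDESCRIPTIVE.
     # All column names and values below are hypothetical.
     import pandas as pd

     survey = pd.DataFrame({
         "college":          ["A", "B", "C", "D", "E"],
         "assesses_reading": ["yes", "yes", "no", "yes", "no"],
         "cost_per_student": [0.50, 0.96, 1.00, 0.07, 5.00],
     })

     # Two-way joint frequency table of a variable by college
     # (paralleling CROSSTABS).
     counts = pd.crosstab(survey["college"], survey["assesses_reading"])
     print(counts)

     # Percentages computed on the total number of responses.
     print(counts / len(survey) * 100)

     # Cost figures summarized by college (paralleling CONDESCRIPTIVE).
     print(survey.groupby("college")["cost_per_student"].describe())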

Tables 1 through 17 in Chapter IV present the results of

the data analysis of this survey. These tables include the












following: absolute frequencies and percentages on each of

the variables; student assessment program cost data; and

primary assessment instruments used by colleges for each

subject area.

Part II.

The data from the Entering Student Assessment Opinionnaire

were recorded using an optical character reader for statistical

analysis by computer. Responses were analyzed using sub-

programs of the Statistical Package for the Social Sciences

(SPSS) on the Amdahl 470 computer at Northeast Regional Data

Center at the State University System located on the University

of Florida campus.

Frequencies, means, medians, and standard deviations

were computed for each item. Descriptive statistics are

presented in Tables 18 through 21 in Chapter IV.
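
As an illustration of these item statistics, the following minimal

sketch (again modern Python rather than SPSS, with hypothetical

item names and responses) codes answers on the five-point Likert

scale described earlier and computes the per-item statistics.

     # Sketch of the per-item analysis; item names and responses
     # below are hypothetical.
     import pandas as pd

     # Likert coding used by the Opinionnaire: 5 = Strongly agree
     # through 1 = Strongly disagree.
     scale = {"Strongly agree": 5, "Agree": 4, "No opinion": 3,
              "Disagree": 2, "Strongly disagree": 1}

     raw = pd.DataFrame({
         "item_3": ["Strongly agree", "Agree", "Agree", "Disagree"],
         "item_4": ["No opinion", "Agree", "Strongly disagree", "Agree"],
     })
     coded = raw.apply(lambda col: col.map(scale))

     # Frequencies of each scale point for every item.
     for item in coded.columns:
         print(coded[item].value_counts().sort_index())

     # Mean, median, and standard deviation for each item.
     print(coded.agg(["mean", "median", "std"]))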


Limitations and Assumptions


The data used in this study were based on the reports

of coordinators of student assessment programs or the

coordinator's designee. The accuracy of these data is

limited by the accuracy of the respondents' reports and by

whether the respondents understood the questions. In cases where

data were supplied by any person other than that one most

closely associated with the student assessment program, it

is possible that some inaccuracy in reporting resulted.












Therefore, for the purposes of this study, the method-

ological assumptions and limitations are that:

1. The information provided by the institutions on

the Survey of Entering Student Assessment Procedures and

the Structured Interview Guide to Entering Student Assess-

ment Procedures is current and accurate.

2. The responses to the Entering Student Assessment

Opinionnaire accurately reflect the opinions of the sample.

3. Content and face validity of the survey and opin-

ionnaire instruments as determined by the panel of research

experts was sufficient for the purposes of this descriptive

study.

4. Data received via structured telephone interview

using the Structured Interview Guide to Entering Student

Assessment Procedures were of the same accuracy and quality

as if the original Survey of Entering Student Assessment

Procedures had been answered and returned by mail.

















CHAPTER IV


RESULTS



Overview


This study was designed to collect and analyze information

from Florida community and junior colleges on the current

state of policies and procedures for assessing

entering students. Coordinators of student assessment

programs were surveyed for data relating to current programs

as well as their opinions in regard to the major issues

involved in assessment programs for entering students.

This chapter presents and discusses the results of the

survey of current practices as well as the perceptions

of program coordinators as to what student assessment

programs should be.

The results of the study were determined by analyses

of data obtained from the Survey of Entering Student Assess-

ment Procedures (Appendix B) and the Entering Student

Assessment Opinionnaire (Appendix E). The analyses of both

instruments were determined by the application of the

Statistical Package for the Social Sciences (SPSS) computer

program. This program was used to compute frequencies,











cross-tabulations, and percentages on the responses to the

Survey of Entering Student Assessment Procedures. Means,

medians, standard deviations, frequencies, and cross-

tabulations were computed from the Entering Student Assess-

ment Opinionnaire using the SPSS program.

The Survey of Entering Student Assessment Procedures,

used to collect information on current practices and policies

regarding assessment of entering students at the community

colleges, was responded to by all community and junior colleges

in the state with the exception of one college; St. Johns

River Community College did not respond to the survey.

Responding colleges include the following: Brevard Community

College, Broward Community College, Central Florida Community

College, Chipola Junior College, Daytona Beach Community

College, Edison Community College, Florida Junior College at

Jacksonville, Florida Keys Community College, Gulf Coast

Community College, Hillsborough Community College, Indian

River Community College, Lake City Community College, Lake-

Sumter Community College, Manatee Junior College, Miami-Dade

Community College, North Florida Junior College, Okaloosa-

Walton Junior College, Palm Beach Junior College, Pasco-

Hernando Community College, Pensacola Junior College, Polk

Community College, St. Petersburg Junior College, Santa Fe

Community College, Seminole Community College, South Florida












Junior College, Tallahassee Community College and Valencia

Community College.

The Entering Student Assessment Opinionnaire, used to

collect the responses of assessment program coordinators

regarding what they considered entering student assessment

should be, was completed by all Florida community and

junior colleges.


Results of Survey of Entering Student Assessment Procedures:

Part A

Job Title Demographics

Respondents from 26 of the 28 Florida community and

junior colleges completed the Survey of Entering Student

Assessment Procedures. Only St. Johns River Community

College did not participate in this part of the study.

Polk Community College responded with a letter of explanation

that their student assessment program was in transition

at the time of the survey. Therefore, that college did

not complete the survey form and was not included in data

analyses. A survey form was mailed to each campus of multi-

campus institutions. However, all multi-campus institutions

except for Florida Junior College at Jacksonville chose to

report a uniform policy across campuses. Since Florida

Junior College's assessment program differed between its

four campuses, four separate survey forms were returned.

Therefore, the sample comprising the first part of this study

is a total N = 29.
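
All percentages reported in the tables that follow are computed

on this base of 29 responses; for example, a frequency of 24

campuses corresponds to 24/29, or approximately 82.8 percent.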












Of the 29 community and junior college personnel

responding to the Survey of Entering Student Assessment

Procedures, 8 (27.6%) were Deans of Students or Student

Development or Student Services. Table 1 indicates that

four respondents (13.8%) indicated their job title was

Director of Counseling and/or Guidance and three (10.3%)

listed their job title as Counselor for Assessment or

Testing. Two respondents (6.9%) were Vice-Presidents.

The remaining twelve survey forms were each completed by an

individual reporting one of the following job titles:

Coordinator of Assessment, Registrar,

Director of Student Personnel Services, Chairman of the

Testing and Research Department, Director of Testing,

Chairperson of Counseling/Admissions, Coordinator of

Counseling, Test Technician/Testing Agent, Director of

Educational Research and Planning, Provost, Psychometric

Analyst, and Director of Placement and Follow-up.

Subject Areas Assessed

Table 2 provides information (absolute frequency and

percentage) about what subject areas are currently being

assessed at Florida community colleges. Data indicate

that 24 campuses (82.8%) assess mathematical computation

skills, 21 campuses (72.4%) assess reading comprehension,

19 campuses (65.5%) assess English usage, and 15 campuses

(51.7%) assess English writing ability of incoming students.

Only two respondents (6.9%) reported that their programs

















TABLE 1

Job Title of Person Completing Survey of Entering Student
Assessment Procedures by Absolute Frequency and Percentage


Absolute
Frequency
Job Title (N = 29) Percentage

Director of Counseling and/or
Guidance 4 13.8

Coordinator of Assessment 1 3.4

Registrar 1 3.4

Dean of Students or Student
Development or Student Services 8 27.6

Counselor for Assessment or Testing 3 10.3

Vice President 2 6.9

Director of Student Personnel Services 1 3.4

Chairman of Testing and Research
Department 1 3.4

Director of Testing 1 3.4

Chairperson of Counseling/Admissions 1 3.4

Coordinator of Counseling 1 3.4

Test Technician/Testing Agent 1 3.4

Director of Educational Research
and Planning 1 3.4

Provost 1 3.4

Psychometric Analyst 1 3.4

Director of Placement and Follow-Up 1 3.4
















TABLE 2

Subject Areas in Which First-Time-In-College Students
Are Assessed by Absolute Frequency and Percentage


Absolute
Frequency
Subject Area (N = 29) Percentage

Reading Comprehension 21 72.4

English Writing Ability 15 51.7

English Usage 19 65.5

Mathematical Computation Skills 24 82.8

Science 2 6.9

Social Science 1 3.4













assess science and one campus (3.4%) reported that its

assessment program assesses in the area of social science.

Table 2 provides information about frequencies by subject area.

Assessment Instrument Data

Table 3 provides information on the responses of the

sample to statements regarding assessment instruments.

Twenty-two campuses (75.9%) indicated that all entering

students take the same assessment instruments. Two campuses

(6.9%) indicated that all entering students take a core

instruments) plus additional instruments as determined by

course selection. Two campuses (6.9%) indicated that a

standardized test (such as the American College Test or

the Scholastic Aptitude Test) is recommended but not

required for assessment prior to admission to the college.

One campus (3.4%) indicated that all assessment is voluntary.

Another campus indicated that students with no standardized

test scores are referred to the subject area departments

for assessment. And finally, one campus reported that all

students take a core reading and English assessment instrument

and various math instruments according to course background.

How Assessment Is Administered

Table 4 provides information regarding the administration

of assessment instruments. Twelve campuses (41.4%) indicated

that assessment instruments are administered in groups at

orientation. Another 7 campuses (24.1%) reported that

















TABLE 3

Responses of Sample to Statements Regarding Assessment
Instruments by Absolute Frequency and Percentage


Absolute
Frequency
Statement (N = 29) Percentage

All students take the same
assessment instruments 22 75.9

All students take a core instrument(s)
plus additional instruments as
determined by course selection 2 6.9

Students take separate instruments
as determined by course selection 0 0

All assessment is voluntary 1 3.4

A standardized test is recommended
but not required prior to admission 2 6.9

Students with no standardized test
scores are referred to the subject
area departments for assessment 1 3.4

All students take a core reading and
English assessment instrument and
various math instruments according
to course background 1 3.4

















TABLE 4

Responses of Sample to Statements Regarding the Administration
of Assessment by Absolute Frequency and Percentage


Absolute
Frequency
How Assessment is Administered (N = 29) Percentage

In groups, at orientation 12 41.4

Individually, during pre-registration 4 13.8

In groups, during pre-registration 7 24.1

Individually, during registration 3 10.3

In groups, during registration 2 6.9

As needed on a walk-in basis 1 3.4

Assessed by course instructors 1 3.4

In groups, once each week during
summer session 1 3.4

Individually, at other designated
times 4 13.8

Individually, prior to registration
with take-home assessment 2 6.9

Assessed during national or local
testing dates 4 13.8

Mail-home self-scoring assessment used 1 3.4

Assessed during first class periods 2 6.9












assessment instruments are administered in groups during

pre-registration. Four campuses (13.8%) assess individually

during pre-registration. Four more campuses administer

their assessment instruments individually at other designated

times. An additional four colleges assess by the use of

national or local testing dates. Three campuses (10.3%)

administer their assessment instruments individually

during registration. Two campuses (6.9%) indicated that they

administer their assessment instruments in groups during

registration. Two campuses (6.9%) indicated that their

assessment was done individually prior to registration with

a take-home assessment. Two campuses assess during the first

class period. One campus (3.4%) indicated that assessment

instruments are administered as needed on a walk-in basis.

One campus reported that course instructors assess their

students and another campus assessed in groups once each

week during summer session. One college uses a mail-home

self-scoring assessment procedure.

Additional Student Assessment Factors

Table 5 provides information on additional factors

besides the results of assessment instruments that are

considered as part of first-time-in-college student

assessment. High school grades are considered as part of

assessment by 22 of the responding campuses (75.9%).

Twenty-one respondents (72.4%) reported that previous

college course work is considered as part of assessment.

















TABLE 5

Factors Besides the Results of Assessment Instruments That
Are Considered as Part of First-Time-In-College Student
Assessment by Absolute Frequency and Percentage


Absolute
Frequency
Factor (N = 29) Percentage

High School Grades 22 75.9

Previous College Coursework 21 72.4

Student Self-Evaluation 14 48.3

Individual Interview 1 3.4

Assessment Instruments Only 1 3.4

Results of ACT, SAT Only 3 10.3

GED Scores When Applicable 1 3.4

Conference With Counselor 1 3.4












Student self-evaluation is an assessment consideration at

14 campuses (48.3%). Three colleges (10.3%) reported that

the American College Test (ACT) and the Scholastic Aptitude

Test (SAT) results are the only assessment consideration.

Four other factors reported by one college each (3.4%)

included: individual interview, conference with a counselor,

General Educational Development (GED) scores when applicable and

available, and no factors considered other than the scores

of assessment instruments. It is common for a college to

use a combination of as many as three factors in assessing

entering students.

Community College Programs That Are Not
Open Admission

Table 6 lists community college programs which are not

open admission by frequency and percentage. The program most

frequently reported by campuses as not open admission

was nursing. Nineteen campuses (65.5%)

reported some form of selection criteria for nursing programs.

Ten campuses (34.5%) reported selection criteria for para-

medic and for emergency medical technician programs, 9

campuses (31%) reported dental hygiene programs, 8 more

campuses reported respiratory therapy programs, 7 campuses

(24.1%) reported medical laboratory technician programs, and

6 campuses (20.7%) reported radiology programs. Selection

criteria also applied to cosmetology programs on 5 campuses

(17.2%), physical therapist technician programs on 4 campuses

















TABLE 6

Existing Community College Programs That Are Not Open Admission
By Absolute Frequency and Percentage


Absolute
Frequency
Program (N = 29) Percentage

Nursing 19 65.5

Radiology 6 20.7

Nuclear Medicine 3 10.3

Opticianary Science and/or Vision
Care Technician 2 6.9

Human Services 3 10.3

Dental Hygiene 9 31.0

Medical Lab Technician 7 24.1

Respiratory Therapy 8 27.6

Paramedic and/or Emergency
Medical Technician 10 34.5

Professional Police Training
and/or Police Science 2 6.9

Legal Assisting 1 3.4

Cardio-Pulmonary Technician 1 3.4

Nuclear Medicine Technician 1 3.4

Dental Assistant and/or Dental
Technician 8 27.6

Physical Therapist Technician 4 13.8

Forest Technician 1 3.4











TABLE 6 continued


Golf Course Operations 1 3.4

Landscape Design and Sales and/or
Ornamental Horticulture and
Landscaping 2 6.9

Park Technology 1 3.4

Auto Body Repair and Repainting 1 3.4

Auto Mechanic, Auto Mechanic Specialist,
Auto Performance Mechanic 2 6.9

Brick and Block Masonry 1 3.4

Cosmetology 5 17.2

Welding 1 3.4

Biomedical Equipment Technology 1 3.4

Veterinary Technician 1 3.4

Electroencephalographic Technician 1 3.4

Medical Record Technician 1 3.4

Small Gas Engine and Motorcycle Repair 2 6.9

Clerical Science 1 3.4

Data Processing 1 3.4

Retailing 1 3.4

Secretarial Science 2 6.9

Real Estate 1 3.4

Business 2 6.9

Junior Executive Marketing Management 1 3.4

Computer Programming 1 3.4

Operating Room Technology 2 6.9

Occupational Therapy Assistant 1 3.4











TABLE 6 continued


Professional Photography 1 3.4

Air Conditioning, Heating, and
Refrigeration Technician 1 3.4

Architectural Woodworking 1 3.4

Electronics 1 3.4












Nuclear medicine programs were reported by 3 campuses (10.3%)

and human services programs by 3 campuses. The following

programs were each reported by 2 campuses (6.9%) as not open

admission: opticianary science and/or vision care technician;

professional police training and/or police science; landscape

design and sales and/or ornamental horticulture and landscaping;

auto mechanic, auto mechanic specialist, and auto performance

mechanic; small gas engine and motorcycle repair; secretarial

science; business; and operating room technology. Many programs

were each reported by one college (3.4%) as not open admission.

They included: legal assisting; cardio-pulmonary technician;

nuclear medicine technician; forest technician; golf course

operations; park technology; auto body repair and repainting;

brick and block masonry; welding; biomedical equipment

technology; veterinary technician; electroencephalographic

technician; medical record technician; clerical science;

data processing; retailing; real estate; junior executive

marketing management; computer programming; occupational

therapy assistant; professional photography; air conditioning,

heating, and refrigeration technician; architectural wood-

working; and electronics.

Admissions Criteria for Programs That Are Not
Open Admission

Table 7 gives information on the admissions criteria

used by colleges for programs that are not open admission.

















TABLE 7

Admissions Criteria Used by Colleges for Programs That Are
Not Open Admission by Absolute Frequency and Percentage


Admissions Criteria Used by Colleges          Absolute Frequency (N = 29)    Percentage

Score on standardized test                               16                      55.2

Individual interview                                      9                      31.0

Score on campus-produced test                             1                       3.4

Specific prerequisite requirements                       12                      41.4

Previous related work experience                          2                       6.9

Selection committee                                       9                      31.0

Academic achievement criterion                           12                      41.4











Sixteen campuses (55.2%) required that applicants score above

a specified minimum on a standardized test for admission to

particular programs. Twelve campuses (41.4%) indicated academic

achievement criteria as prerequisites for admission to particular

programs. Another 12 campuses (41.4%) reported requiring other

specific prerequisites (for example, an English language

examination or a medical examination report at the applicant's

expense). Nine campuses (31.0%) used an individual interview

as part of the admissions criteria, and another 9 campuses

reported the use of a selection committee. Two campuses (6.9%)

reported requiring previous related work experience, and one

campus (3.4%) required a specific score on a campus-produced

instrument.

Assessment Program Costs

Table 8 presents the assessment program cost data

for each respondent. Estimated cost per student to the

institution was reported to be $.50 or less by 19 campuses

(65.5%). An additional 4 respondents reported the estimated

cost to be between $.50 and $1.00. Therefore, 23 campuses

(79.3%) estimated the cost per student to the institution

to be $1.00 or less. Three colleges (Edison Community

College, Hillsborough Community College, and South Florida

Community College) reported costs to the institution of

around $5.00 for standardized tests. Figures on cost to

the institution include the purchase and scoring of instruments;

they do not include initial hardware or personnel costs.
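
For illustration only, using hypothetical figures: a campus

that spent $145.00 on the purchase and scoring of instruments

while assessing 290 entering students would report an estimated

cost of $145.00/290, or $.50 per student.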
















TABLE 8

Estimated Cost Per Student of Assessment Programs


                                          Estimated Cost Per Student
                                          to the Institution (Purchase      Costs to
College                                   and Scoring of Instruments)       Students

Brevard Community College                 $.96                              0

Broward Community College                 negligible                        0

Central Florida Community College         minimal                           $7.50 (ACT)

Chipola Junior College                    $1.00                             0

Daytona Beach Community College           $.07                              0

Edison Community College                  $5.00 + (CGPT)                    0

Florida Junior College at
Jacksonville-Downtown                     $.50                              0

Florida Junior College at
Jacksonville-Kent                         MD                                MD

Florida Junior College at
Jacksonville-North                        $.50                              0

Florida Junior College at
Jacksonville-South                        $.15 (+ cost of reusable tests)   MD

Florida Keys Community College            $.50                              MD

Gulf Coast Community College              0                                 $7.50 (ACT)

Hillsborough Community College            $4.75 (CGPT)                      0

Indian River Community College            Unknown                           specific programs
                                                                            $5.00 - $12.00

Lake City Community College               $.25                              0

Lake-Sumter Community College             $.23                              0

Manatee Junior College                    0                                 $7.50 (ACT)

Miami-Dade Community College              $.25                              0

North Florida Junior College              MD                                $5.00 (ACT)

Okaloosa-Walton Junior College            0                                 $7.50 (ACT)

Palm Beach Junior College                 $.22                              $7.50 (ACT)

Pasco-Hernando Community College          $.10                              0

Pensacola Junior College                  $.51                              Nursing
                                                                            $5.00 - $7.50

Santa Fe Community College                $.21                              0

Seminole Community College                $.38                              0

St. Petersburg Junior College             $.50 - $.60                       0

South Florida Community College           $5.00 (SCAT)                      0

Tallahassee Community College             $.25                              Medical
                                                                            $12.00 - $15.00

Valencia Community College                $.50                              0


MD = Missing Data











The only costs to the students were reported to be

examination fees for national standardized tests. Six

colleges reported requiring all students to pay ACT fees.

Three colleges reported additional examination fees paid

by students for admission to specific programs. These

examination fees ranged from $5.00 to $15.00. Seventeen

respondents (58.6%) reported assessment at no cost to

the student.

Additional Areas of Assessment

Table 9 gives information on additional areas in which

first-time-in-college students are assessed.

Study skills are assessed on 10 campuses (34.5%); however,

9 of these campuses make the assessment optional for entering

students. Only Valencia Community College requires study

skills assessment of entering students, using the college-

produced Study Skills Assessment and Course Selection Guide.

Career interest is assessed more often. Seventeen colleges

(58.6%) indicated that career interest assessment is offered

to entering students. Only 2 campuses (6.9%) require career

interest assessment: Brevard Community College uses its

college-produced Brevard Community College Student Goals, and

Gulf Coast Community College uses the ACT Interest Inventory.

The instrument most often used (though optional) is the

Strong-Campbell Interest Inventory, used by 7 campuses (24.1%).
















TABLE 9

Additional Areas in Which First-Time-In-College Students
Are Assessed by Absolute Frequency and Percentage


                                                Absolute
                                                Frequency
Areas Assessed                                  (N = 29)      Percentage

Study Skills Assessment                            10            34.5

  Mandatory Assessment                              1             3.4
    Instrument: Study Skills Assessment
    and Course Selection Guide (campus-
    produced, Valencia CC)

  Non-mandatory Assessment                          9            31.0

Career Interest Assessment                         17            58.6

  Mandatory Assessment                              2             6.9
    Instruments: Brevard Community College
    Student Goals (Brevard CC) and ACT
    Interest Inventory (Gulf Coast CC)

  Non-mandatory Assessment                         15            51.7

Self-Concept Assessment                             5            17.2

  Mandatory Assessment                              0             0

  Non-mandatory Assessment                          5            17.2










Three campuses (10.3%) utilize the ACT Interest Inventory.

Other instruments, each reported by only one college, include

the Hall Occupational Orientation Inventory, the Oliver Career

and Educational Interest, and the Kuder.

Self-concept assessment was reported by 5 campuses (17.2%);

at none of these campuses is the assessment mandatory for

entering students.



