EMPIRICAL AND CLINICAL MODELS FOR STUDENT PLACEMENT IN
THE COMMUNITY COLLEGE MATHEMATICS CURRICULUM

By

ROBERT NORMAN McLEOD

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

1985



























For Barbara, Charles, and Travis















ACKNOWLEDGEMENTS

Several individuals have contributed greatly to this work and deserve recognition. First, the members of the doctoral committee, especially the chairman and the cochairman, have been extremely helpful and are due special thanks. Dr. James Wattenbarger, chairman, has provided insight, guidance, and understanding from the very beginning of the doctoral program. Dr. John Nickens, cochairman, has added expertise, humor, and encouragement. Their assistance has been invaluable, and my genuine appreciation is extended to them.

I am also grateful to Dr. Ernie St. Jacques for his congenial and helpful attitude and to Dr. Jim Pitts for his cooperation and assistance. Their suggestions have contributed to the completion of this project.

Individuals from two local community colleges have been very helpful by sharing information and ideas. Dr. Tom Delaino of Santa Fe Community College has offered suggestions and furnished data that have greatly influenced this study. Associate Dean Betty Towry of Central Florida Community College has provided background information and material which were also beneficial. Many thanks are due both of them.

Finally, my family and friends have been an encouragement to me throughout my work in the doctoral program. My wife, Barbara, has typed numerous drafts of the dissertation as well as other papers. Her support has been steadfast, and my love and appreciation are, as always, due her. My father and his wife, my grandparents, my brothers, and my sister have also shown much interest and have been an encouragement. I am grateful to them all for their help and support.














TABLE OF CONTENTS

ACKNOWLEDGEMENTS

ABSTRACT

CHAPTER

I INTRODUCTION
    The Problem
    Methodology
    Assumptions
    Limitations
    Justification for the Study
    Organization of the Research Report

II REVIEW OF RELATED LITERATURE
    Community College Students and Rationale for Placement
    Hierarchical Nature of Mathematics
    Effectiveness of Standardized Tests as Placement Instruments
    Alternative Means of Assessment for Placement
    Summary

III METHODS AND PROCEDURES
    Sample Selection and Data Collection
    Description of Data
    Data Analysis
    Student Interviews
    Development of the Clinical Model

IV RESULTS
    Discussion of Results to Question One
    Results With Respect to Question Two
    Discussion of Results With Respect to Question Two
    Results With Respect to Question Three
    Discussion of Results to Question Three

V SUMMARY, CONCLUSIONS, RECOMMENDATIONS, AND IMPLICATIONS
    Summary
    Conclusions
    Recommendations
    Implications

APPENDIX

A INTERVIEW GUIDE

B SUBJECT BY LEVEL MATRIX

C SUGGESTED STEPS FOR CLINICAL EVALUATION

REFERENCES

BIOGRAPHICAL SKETCH












Abstract of Dissertation Presented to the Graduate School of
the University of Florida in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy

EMPIRICAL AND CLINICAL MODELS FOR STUDENT PLACEMENT IN
THE COMMUNITY COLLEGE MATHEMATICS CURRICULUM

by

Robert Norman McLeod

December 1985

Chairman: Dr. James L. Wattenbarger
Cochairman: Dr. John M. Nickens
Major Department: Educational Leadership

The problem of this study was to assess the appropriateness of currently used placement tests in predicting student success in mathematics courses and to identify additional factors which would lead to appropriate placement of students into community college mathematics courses. Placement test scores from the American College Testing Program (ACT) and the Scholastic Aptitude Test (SAT) were correlated with grades in entry-level mathematics courses for 605 first-time college students who had graduated from high school the previous school year. Successful "high-risk" students, identified as those with test scores below the cut-off points, were interviewed in order to identify factors considered important to their successful completion of college-level mathematics courses. These and other factors reported in the literature were incorporated into a clinical evaluation model, which was reviewed for content and feasibility by a panel of experts composed of community college instructors, counselors, administrators, and university professors.








Correlations of test scores and grades in mathematics courses were found to be low. Almost half of the correlations were negative, and only two of the positive correlations were greater than .50. Three main factors emerged from the interviews: high school preparation in mathematics, quality of student effort, and clinical evaluation techniques were all considered important by the students interviewed. The panel of experts agreed that the concept of clinical evaluation had merit. However, the panel disagreed over the assumption that a hierarchy of subjects and levels in mathematics could be effectively used as a screening device in the model.

The study lends support to criticisms of the use of standardized tests as the sole means of evaluation for mathematics placement. Other factors that could be used in a placement model were identified, but they were not analyzed statistically. Recommendations were made for further research on the validity of these factors in placement models.













CHAPTER I

INTRODUCTION



The concept of the "open door" as an admissions policy has been viewed as fundamental to the realization of the mission and purpose of the community college. Thornton (1966) noted that, historically, the community college had attempted to provide educational opportunity to the average student. In describing the open door philosophy of community colleges, Thornton described any high school graduate, or any person who seemed capable of profiting by the instruction offered, as being eligible for admission. Cross (1971), however, maintained that without proper consideration of the diversity of students that enter colleges, the open door can turn into a "revolving door," wherein students exit the college unfulfilled.

Florida's community colleges require a high school diploma or its equivalent for admission. However, many students possessing high school diplomas are not properly prepared for college-level work. For example, Florida's high school graduation requirements currently include a minimum of three credits in mathematics; however, no credit in high school algebra is required. Because of this, many entering students need a significant amount of additional preparation in math to master college-level courses. As a result, college preparatory courses have been included in the curriculum of Florida's community colleges.









Consequently, appropriate placement of students into these college preparatory courses becomes imperative. Roueche (1980) commented that in order to give hope to students with learning deficiencies who enroll, colleges were going to have to divorce themselves from the "students have a right to fail" mind-set and design instructional programs accordingly. Florida law requires colleges to administer placement tests to all entering degree-seeking students in order to assign them to courses commensurate with their abilities. Because of the open door policy, entrance testing in community colleges has been used as a means of placement rather than a means of selection.

Four specific tests have been approved for placement purposes in Florida's community colleges. The Florida legislature has mandated in Rule 6A-10.315, FAC, that scores on one or more of the four approved tests be used to place students in college preparatory communication and computation courses, beginning with the 1985 fall term. The four tests are the American College Testing Program (ACT), the Scholastic Aptitude Test (SAT), the Multiple Assessment Programs and Services (MAPS), and the Assessment Skills for Successful Entry and Transfer (ASSET). The cut-off score has been set at 13 for the ACT, 400 for the SAT, 206 for MAPS, and 12 for ASSET. Students who score within one standard deviation of the cut-off scores may be exempted from or included in college preparatory instruction on the basis of supplemental testing or assessment documented by the institution. Identification of factors in addition to standardized test scores that would provide information concerning students' probability of success is therefore needed.
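To make the mandated rule concrete, the following is a minimal sketch, in Python, of the placement logic described above. It is not part of the original study; the cut-off scores come from the rule as quoted, while the one-standard-deviation window widths are placeholder assumptions, since the rule as described here does not state them.

    # Sketch of the Rule 6A-10.315 placement logic described above.
    # Cut-off scores are from the text; the standard deviations are
    # assumed placeholder values, not figures from the rule itself.
    CUTOFFS = {"ACT": 13, "SAT": 400, "MAPS": 206, "ASSET": 12}
    STDEVS = {"ACT": 5, "SAT": 100, "MAPS": 20, "ASSET": 5}  # assumed

    def placement(test: str, score: float) -> str:
        """Classify a score relative to the mandated cut-off."""
        cutoff, sd = CUTOFFS[test], STDEVS[test]
        if abs(score - cutoff) <= sd:
            # Within one standard deviation of the cut-off: supplemental
            # testing or documented assessment may exempt the student
            # from, or include the student in, preparatory instruction.
            return "borderline: supplemental assessment permitted"
        if score < cutoff:
            return "college preparatory instruction required"
        return "college-level placement"

    print(placement("SAT", 380))  # borderline under the assumed SD
    print(placement("ACT", 20))   # college-level placement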










Students who score below the state-mandated cut-off scores on the tests, and who are thus required to enroll in college preparatory courses, might legitimately ask how effective the tests and cut-off scores are in predicting success in college-level courses. Enrollment in college preparatory courses will most likely increase the amount of time and money spent on a college education and may lead to frustration and lowered self-esteem. However, allowing students to enroll in courses for which they are not properly prepared often diminishes the probability of successfully passing the course. Since the goal of effective placement is to assign students to courses in which they have a reasonable probability of success, it is imperative to obtain information as to what factors contribute to such success.

The necessity for some means of placement is widely recognized. While some may support a "right-to-fail" concept that would allow students to register for courses at any level, current practice supports the use of placement methods to assure students' right to succeed. This has been accomplished not by restricting entry to community colleges, but by identifying students' needs and then providing the resources necessary to meet those specific needs.

The means of accomplishing effective placement, however, have been elusive. Use of standardized testing has become pervasive in admissions, placement, and curriculum decisions. A review of the related literature revealed mixed results and only limited success when using standardized tests as predictors of academic success. Other factors appear to be of equal or greater importance than test scores when predicting academic success. Standardized tests persist, however, as the most widely used type of evaluation for placement. Hills (1971) stated, "for some reason, though, there seems to be a halo about test scores for placement that causes some faculty members to believe that no other kind of data deserve consideration" (p. 707).

Evans (1975) cited the following statement by Winston Churchill, which helps explain some of the arguments against using test scores as a sole means of evaluation:

    Examinations were a great trial to me. The subjects which were
    dearest to the examiners were almost invariably those I fancied
    least. . . . I should have liked to be asked what I knew. They
    always tried to ask what I did not know. When I would willingly
    have displayed my knowledge, they sought to expose my ignorance.
    This sort of treatment had only one result: I did not do well
    in examinations. (p. 270)

By virtually any other evaluative standard, Winston Churchill would be considered successful. Yet, by his own admission, his performance on examinations was poor. How many students entering the community college system in this state and in this nation would similarly do poorly on examinations but nevertheless have capabilities and qualities that could lead to successful completion of college-level courses?

McClelland (1973) and others have questioned the use of standardized tests as a sole means of evaluation. Depending on certain factors, such as the type of test (aptitude or achievement) and the method of scoring (norm-referenced or criterion-referenced), interpretation of test scores has been subject to question. Testing experts disagree on the amount of emphasis that should be placed on test scores. A continuing controversy rests on the implicit assumption that standardized tests do indeed measure what they purport to measure (Haney, 1980). Until it can be demonstrated with certainty that test scores provide information sufficiently accurate to justify the decision under consideration, the controversy is likely to remain.

Clearly, there is no definitive answer to the question of how much emphasis to place on test scores for various decisions. Bersoff (1973) aptly noted that the validity of a measuring device must be evaluated in terms of the purpose for which it is intended. The goal of assessment has been seen as the acquisition of relevant information that will contribute to decisions about desired changes in behavior. With regard to placement, then, other means of evaluating student competence would add to our understanding of assessment techniques and help in making important educational decisions.

The task facing educators is one of improving the processes of evaluating students. A desirable combination would, ideally, incorporate the impartial, objective nature of tests with other, more holistic means. Alternative methods of assessment are seen by this writer as useful for complementing and cross-validating standardized tests. While objective tests can be of use in identifying which students do not possess certain competencies, they add little (if any) insight into the specific deficiencies causing the low test results. A more comprehensive model for placement, one taking factors other than test scores into consideration, is called for.

The strongest arguments in favor of using standardized tests for placement purposes appear to be convenience and the lack of standardization of grades and other measures. Many criteria other than tests have been explored as predictors; however, problems with quantification, objectivity, and standardization limit their usage. Student opinion, teacher opinion, and clinical evaluation, among others, have seemed to hold promise for placement, only to be rejected for various reasons.

Experts in mathematics have long known that use of standardized tests as a sole means of evaluation is inadequate. The National Council of Teachers of Mathematics (NCTM) Handbook on Evaluation in Math (1961) concluded that many possible combinations of evaluative techniques should be used. Studies comparing mathematics tests and subsequent grades have demonstrated only moderate to limited predictability. However, a search of the literature revealed no definitive methods that have been widely used to supplant standardized tests, which persist as one of the best single methods of evaluation in mathematics. Some combination of previous grades in math and test scores appears to be the best method of prediction (Larson & Scontrino, 1976).

Test scores are commonly intended for use as part of a placement program. Since test scores reflect past opportunities to learn, the scores should not be used as the sole criterion in a placement decision. Test scores provide a means of comparing students to a common standard and represent a valuable criterion in placement decisions. These decisions should be reviewed periodically, however, and should be adjusted if classroom performance indicates students have not been placed correctly (Florida MAPS: Technical Manual, 1984).

Placement is most systematic when based on the results of a validity study. Hills (1971) stated that utility should be the primary consideration when deciding on placement methods and procedures. If the standardized tests approved for the placement process are appropriate, research would demonstrate the validity of the tests for placement purposes. If not, perhaps other, more utilitarian practices could be explored in order to minimize the expense and maximize the effectiveness of placement procedures.

Research in the area of learning has indicated that the probability of success is an important factor in subsequent learning. Students who experience success are likely to demonstrate continued success (Ferster & Perrott, 1968). The converse of this "success breeds success" premise has also been demonstrated. Individuals who are thwarted in initial attempts at accomplishing certain tasks have developed the phenomenon referred to as "learned helplessness" (Levine, 1977). Thus, students not having a reasonable chance of success should be discouraged (or even prevented) from enrolling in courses and programs for which the probability of failure is high.

For these reasons, community college administrators recognize the necessity of assessment following admission in order to place students in courses where they have a good chance to succeed. Students having deficiencies have recently been required to take necessary developmental work before proceeding to programs where the lack of skill could cause failure (McCabe, as cited in Schinoff, 1982). The success of this combination of assessment, placement, and instruction has been documented (McCabe & Skidmore, 1983). The structured approach resulted in increased completion rates and improved student performance.

Since current practice is to treat assessment as a means of sorting and screening students, and since assessment is seen as a key to learning (Justiz, 1985), effective placement techniques are needed. Because of objections to the use of standardized test scores as the sole means of placement, the identification of factors that could be used in addition to test scores is one of the purposes of this study.

Learning theorists conclude that individuals process information differently (Cross, 1976; Ferguson, 1980; Roueche, 1980; Dunn, Dunn, & Price, 1981). Admonitions to instructors to vary their teaching techniques abound (Cross, 1976; Dunn & Dunn, 1979; Cohen & Brawer, 1982; Easton, Barshis, & Ginsberg, 1983-1984). Why, then, should educators be locked into a one-dimensional method of assessment? If teaching strategies should be flexible, it seems self-evident that evaluation strategies should be equally free of stifling methodologies that lock educators into a single type of evaluation for placement. A study that analyzes the effectiveness of mathematics placement tests used in Florida's community colleges and attempts to identify other potential factors for use in placement strategies would be useful.


The Problem


The necessity of appropriate placement in order to enhance student success in community college mathematics courses has been recognized. The factors currently used for placement are certain state-mandated test scores, which have been questioned when used as the sole means of evaluation. The problem of this study was to analyze the appropriateness of currently used placement tests in predicting student success in mathematics courses and to identify additional factors which would lead to appropriate placement of students into community college mathematics courses.









Specifically, the following questions were addressed:

1. What is the relationship between selected placement test scores and success in initial mathematics courses?

2. To what factors did high-risk students attribute their successful completion of college-level mathematics courses?

3. What factors identified by the results of question number two and in the literature may be used as components of a clinical evaluation model which could augment the use of standardized test scores for placement in mathematics courses?



Methodology


In order to investigate the appropriateness of the presently used placement strategy, performance on the tests and actual performance in the classroom were analyzed. Thus, to answer question number one, test scores on the math portion of the ACT and SAT were correlated with grades in entry-level mathematics courses from a community college. The sample consisted of data from 605 students who had enrolled at Santa Fe Community College in Gainesville, Florida. All of the students were high school graduates from the 1983-84 school year and were entering the college for the first time. The students had entrance test scores on record (ACT or SAT were the only test scores available) and had completed their first college math course during either the summer or the fall term, 1984.

The scores and grades were correlated using the Pearson product-moment correlation (Pearson's r). This technique enabled a comparison of student test scores to student grades in order to examine the strength of the relationship between the variables. This was done for each of the eight math courses and each of the two tests, for a total of 16 comparisons. A correlation with grades pooled across all eight courses was also computed.
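As an illustration of this analysis, the sketch below computes Pearson's r for one hypothetical test-course pairing. The scores and grade points are invented, and letter grades are assumed to be coded on the conventional 4-point scale; this is not the study's actual code or data.

    # Illustrative computation of Pearson's r for one test-course pairing.
    # Records are invented (ACT math score, grade point in first course).
    from statistics import correlation  # Pearson's r, Python 3.10+

    records = [(18, 3.0), (12, 2.0), (22, 4.0), (15, 2.0), (10, 3.0), (20, 1.0)]
    scores = [s for s, g in records]
    grades = [g for s, g in records]

    print(f"Pearson's r = {correlation(scores, grades):.3f}")
    # The study computed one such coefficient per course per test
    # (8 courses x 2 tests = 16 comparisons), plus one pooled
    # coefficient across all eight courses.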

In order to answer question number two, students were identified who might have been considered "high-risk" because of low test scores, but who had successfully completed a college-level math course on their initial attempt. Specifically, students who scored 15 or below on the ACT or 430 or below on the SAT and who made a C or better in a college mathematics course other than MAT 1000 (Introductory Math Skills) or MAT 1002 (Basic Mathematical Skills) were interviewed. Twenty students were interviewed by telephone to determine the factors to which they attributed their success in college-level math courses.
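The selection criteria just described amount to a simple filter. The sketch below applies them to hypothetical records; the field names, the course code "MAC 1102," and the records themselves are invented, with only the score thresholds and the two excluded courses taken from the text.

    # Sketch of the "successful high-risk" selection rule described above.
    REMEDIAL = {"MAT 1000", "MAT 1002"}  # excluded skills courses
    PASSING = {"A", "B", "C"}            # grade of C or better

    def successful_high_risk(student: dict) -> bool:
        low_score = ((student.get("act") is not None and student["act"] <= 15)
                     or (student.get("sat") is not None and student["sat"] <= 430))
        college_level = student["course"] not in REMEDIAL
        return low_score and college_level and student["grade"] in PASSING

    roster = [
        {"act": 14, "sat": None, "course": "MAC 1102", "grade": "B"},   # selected
        {"act": None, "sat": 410, "course": "MAT 1002", "grade": "A"},  # remedial course
        {"act": 19, "sat": None, "course": "MAC 1102", "grade": "C"},   # not high-risk
    ]
    print([s for s in roster if successful_high_risk(s)])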

Question number three was answered by the construction of a clinical evaluation model, which was examined by a panel of community college math faculty members, counselors, administrators, and university professors. The panel responded to questions concerning the content, feasibility, and utility of the model and made recommendations concerning its use.


Assumptions


In selecting the sample of test scores, courses, and grades, it was assumed that the students and instructors in the particular college which generated the data were representative of students and instructors in community colleges throughout the state.

It was also assumed that the members of the panel of experts who evaluated the model were representative of math instructors and administrators in community colleges throughout the state.









Limitations


Of the four tests that have been approved for placement purposes in Florida's higher education system, only the ACT and SAT were considered in this study. Furthermore, only the mathematics sections of the tests and only mathematics courses were analyzed. Therefore, only mathematics placement was studied.

The data for courses and grades earned were limited to the summer and fall semesters of 1984. Students who did not have a test score on record, did not take a math course in those terms, or did not complete their course (grade of "W" or "I") were not included in the sample.

The sample was also limited to high school graduates from the 1983-84 school year. Since community college populations are composed of large percentages of students who are not recent high school graduates, limiting the sample to recent graduates eliminated other elements of the community college population. However, this did serve as a control for age of students.

This study was also limited to initial math courses; no data were collected on subsequent math courses to determine student performance in the overall mathematics curriculum.


Justification for the Study


The community college student has been viewed as somewhat different from the traditional college or university student (Koos, 1970). Primarily designed to alleviate financial, geographic, and motivational barriers, the community college was established for students who may have been hindered from attending college by these barriers (Wattenbarger, 1971).

As community colleges developed and grew in numbers across the nation, this uniquely American educational institution began to attract new and different types of students. Non-traditional students--those with lower entrance scores, from different socioeconomic backgrounds, part-time, older, having a variety of goals and interests--made up larger and larger percentages of the community college student population (Cross, 1976). The concept of the open door was seen as fundamental to the mission and purpose of the community college. Open access, allowing virtually any student holding a high school diploma or its equivalent to enroll, was largely responsible for altering the course of American higher education (Medsker & Tillery, 1971).

This emerging wave of students brought challenges and problems never before encountered. Accountability and standards of excellence became issues of high priority. Quality became a key word in higher education. Many felt that the large number of new, non-traditional students had somehow limited the quality of higher education in the United States (Thornton, 1966). Recently, certain trends have indicated a gradual closing of the open door (Henderson, 1982).

Because of these concerns for excellence and quality, additional means of evaluation have become necessary. Florida's mandatory student placement procedure and the College Level Academic Skills Test (CLAST) provide current examples. Merely having an open door policy, then, has not been sufficient to accomplish the stated goals of the community college. For various reasons, students not able to complete course requirements (and, more recently, standardized test requirements) turn the open door into a revolving door (Cross, 1976). For these reasons, effective placement procedures are extremely important.

Open access with a "right-to-fail" philosophy is not justifiable given the mission and goals of the community college, which include development of students' abilities to the fullest. Rather than allowing students to enroll in college-level courses for which they are not prepared, appropriate placement would ensure enrollment into courses compatible with their abilities. Students would then have a reasonable probability of success in the course. Placement and remediation, then, should be viewed as affirmation of the mission and goals of the community college.

Research to provide information concerning the appropriateness of current placement tests would enable decision makers to evaluate the usefulness of the tests. Identification of factors in addition to standardized tests that could be used in placement strategies would, it is hoped, lead to their implementation in placement models. Further research on these subsequent models could validate the effectiveness of the additional factors. Therefore, the results of this study could be used in a comprehensive evaluation of the placement process. Ultimately, the students in Florida's community colleges should benefit from improved placement procedures.


Organization of the Research Report


The report consists of four additional chapters following this introduction. Chapter II contains a review of the related literature. Chapter III is a description of the methods and procedures, including selection of the sample, the data collected, and a description of the statistical analysis. Chapter IV presents the results of the research. Finally, Chapter V contains the summary, conclusions, recommendations, and implications of the study.













CHAPTER II

REVIEW OF RELATED LITERATURE



Community College Students and Rationale for Placement


The population of students attending this nation's community colleges is markedly different from the population of students attending four-year institutions. Community college students range widely in ability, age, and other characteristics (Koos, 1970). Roueche (1980) referred to this population as diverse. He noted that more and more students with serious learning deficiencies had enrolled in community colleges and were expected to continue to do so.

Cross (1976) reported that the historical trend in college access from aristocratic to meritocratic to egalitarian had brought increasing numbers of low-ability students into programs of post-secondary education. Calling these low-ability students "new students," Cross operationally defined them as those scoring in the lowest third among national samples of young people on traditional tests of academic ability.

Thornton (1966) noted that, historically, the community college had attempted to provide educational opportunity for the average student. The "open door" philosophy was described by Thornton as allowing any high school graduate, or any person who seemed capable of profiting by the instruction offered, to be eligible for admission.

In a statement summarizing entrance requirements, Thornton stated that the responsibility for the choice--success or failure--should rest with the student, not with a standardized test nor with the decision of an admissions counselor. Entrance testing in community colleges has not been seen as a selection process, as is common in four-year institutions, but rather as a means of placement.

This concept of an "open door" admissions policy has been seen as fundamental to the realization of the mission and purpose of the community college. Cross (1971), however, maintained that without proper consideration of this diverse population of students, the open door can turn into a "revolving door," wherein students exit the college unfulfilled. She commented that community colleges encouraged diversity, yet seemed unable to move away from an unproductive preoccupation with wanting all students to learn the same thing at the same rate.

Roueche (1980) commented that in order to give hope to students who enroll having learning deficiencies, colleges were going to have to divorce themselves from the "students have a right to fail" mind-set, and design instructional programs accordingly. Placement into proper courses was seen as essential to success in completion of programs.

Wiener (1985) reported that between 60 and 70 percent of all community college students must take remedial courses. The drop-out rate among such students was reported at over half. Students who had successfully completed remedial courses, however, demonstrated a much greater probability of completing their college programs. Wiener called for mandatory placement testing and subsequent assignment into remedial programs when necessary.

Linthicum (1980) studied the procedures and instruments used to place students in developmental programs at a community college. The evaluation system was designed to identify levels of skills and subsequently guide students into the appropriate program. Grades, nationally normed tests, and certain institutionally developed tests were used to assess levels of skills. Linthicum also looked at measures of the affective domain as reported by means of an Affective Measurement checklist and a Self-Assessment checklist.

In evaluating the general success of the program, Linthicum found that student choice was not an effective method of placement and that a mandatory placement program was essential for student success. However, placement based solely on test scores was not effective. Qualitative factors such as motivation and self-concept were suggested as important factors in the learning process. Linthicum emphasized the necessity of flexibility in placement. She reported that reading tests were better predictors of academic success than were math tests and even recommended that math courses not be taken during the first term by low-ability students.

A study by Cordrey (1984) examined the effectiveness of a "skills prerequisite program" used at a community college. The program included mandatory placement testing and a curriculum of remedial courses prescribed as a result of placement. The study examined placement patterns, drop-out rates, the effects of remediation on subsequent grade point average (GPA), and academic persistence. Results of the study indicated that withdrawal from courses was reduced as a result of the placement program. An institutionally developed test was used for placement in math. The study also showed that remediation did have a positive effect on future success in academic courses. Other writers, however, have questioned the effectiveness of remediation.

Haase and Caffrey (1983) reported information concerning the assessment and placement process that had recently been instituted at a community college. The Stanford Test of Academic Skills (TASK) was used in addition to institutionally developed diagnostic assessment techniques. An increase in the retention of students as a result of the placement program was reported. Haase and Caffrey concluded by commenting on the necessity of continuously monitoring placement procedures and changing assessment techniques when necessary. They recommended that methods beyond mere testing were needed and that flexibility was imperative.

Reap (1979) reviewed the American College Testing (ACT) Assessment Program in terms of its use at a community college. The purpose of the evaluation was to determine how effective the ACT was in helping the college reach its educational goals. Specifically, the study sought to answer two questions: (1) Did the ACT provide an accurate description of the entering freshmen? (2) Did the ACT operate as an effective predictor of student success? Reap concluded that the first question could be answered in the affirmative, but the latter in the negative.

The review in Buros (as cited in Reap, 1979) reported that the predictive validity of the ACT was as satisfactory as the state of the measurement art then permitted. Since the ACT was used as a placement instrument, Reap's study examined its effectiveness at that institution. The correlation of math grades with ACT scores was reported as .19. Reap further reported that when high school grade averages were included, the effectiveness of prediction increased, and he suggested that high school transcripts should possibly be considered for predictive purposes. While it did appear that the effectiveness of the ACT in predicting grades increased as scores increased, Reap concluded that the ACT did not appear to be successful as an effective predictor of student success.

Clark (1980) explored six factors in an attempt to determine which variables were significantly related to student success in four different math courses. These were

    placement test scores,
    high school GPA,
    prior college units taken,
    prior college GPA,
    high school math grades, and
    prior college math grades.

Grades in high school and college math courses were found to be significantly related to success (defined as a grade of "C" or better) using the chi-square technique at the .05 level of significance.
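As a sketch of the kind of analysis Clark used, the code below runs a chi-square test of independence between one prior-grade factor and course success. The 2x2 counts are invented for illustration and are not Clark's data.

    # Chi-square test of independence, as in Clark's (1980) analysis.
    # Rows: high school math grade (C or better vs. below C).
    # Columns: outcome in the college course (success vs. non-success).
    from scipy.stats import chi2_contingency

    table = [[40, 15],   # HS grade C or better: 40 succeeded, 15 did not
             [12, 23]]   # HS grade below C:     12 succeeded, 23 did not

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
    if p < 0.05:
        print("factor is significantly related to success at the .05 level")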

Allen (1981), in describing instructional techniques in a Fundamental Algebra course at a community college, emphasized the necessity of proper placement. A 40-question "Co-op Test" was administered to all entering students at the college. Initially, students had the option of taking their desired course in mathematics; however, the college agreed that waivers of the suggested placement were counter-productive. Students were then placed into the Fundamentals of Algebra course on the basis of test scores, with no waivers. This decision was based on a prior study by Allen which found that when a student was given the appropriate placement test and placed into the recommended course according to math department guidelines, there was a positive correlation between placement scores and grades in initial math courses. Allen recommended correct placement as the first step toward success in the initial math course.

Palow (1979) advocated that assessment and placement were an "integral part of a comprehensive program of instruction in mathematics" (p. 1). Citing the SAT scores from the previous year, which indicated that the freshman class was the "least academically prepared" group in the history of the examination, Palow emphasized the necessity of a system of assessment and placement to "match and funnel" students into a compatible system of instruction.

The assessment and placement system described by Palow combined the results of two paper-and-pencil inventories. The first inventory was a multiple-choice mathematics test geared to the course which the students had indicated they wanted to pursue. The second was the Canfield-Lafferty Learning Styles Inventory, designed to determine an individual's preferred way of learning. Through a set of decision rules programmed into a computer, the student was assigned to one of four modes of individualized instruction. Results of the placement program were not available; however, the rationale for the necessity of placement was consistent with other writers' recommendations. The program was also significant in that it represented one of the only programs that included students' preferred learning style as a factor in placement.

Wood (1980) noted that the assumption that algebra should be the first course in mathematics for an entering college student was unwarranted. Since open-door colleges have been faced with ever-larger numbers of entering freshmen who have studied very little or no mathematics, Wood made a strong case for effective placement programs in mathematics at the community college level.

Wood suggested three probable causes of failure among entering freshmen: (1) students who had not completed two years of high school algebra; (2) students who had completed two years of high school algebra but had been out of school for three or more years; and (3) students who had completed two years of high school algebra but with minimal grades of low "C" or "D."

A testing program recommended by Wood included two placement testing instruments--the ACT and institutionally developed mathematics tests composed by the faculty. Six different entry-level courses, ranging from a math review course (practical arithmetic) through the first course in calculus, were offered.

Wood's recommendations for an appropriate placement program in math were summarized as follows:

1. A majority of junior college freshmen have deficiencies in mathematics that range from partial to total.

2. Records show that these deficiencies do not necessarily imply a lack of ability. They frequently spring from insufficient high school training and/or a time lapse between high school and college.

3. For students of normal or above-normal ability, these deficiencies can be effectively removed by a review course of one or two semesters. Our experience leads us to believe that the two-semester plan is the better one for any college with an open-door policy.

4. Presidents and academic deans of colleges need to be aware that, short of returning to high school, students with serious mathematical deficiencies have no way to improve without such a review course or courses. This is especially true of mathematics because of its cumulative nature.

5. Placement tests for entering freshmen as well as advanced-standing examinations (in college algebra and trigonometry) for well-prepared students provide an efficient way to achieve accurate student placement.

6. The results of our investigation support the philosophy that any junior college that maintains an open-door policy to all high school graduates accepts responsibility for providing students with courses in which they have a reasonable chance to succeed. (Wood, 1980, p. 64)


Hierarchical Nature of Mathematics


Wilson (1971) discussed and illustrated testing for evaluation purposes in secondary school mathematics. In the process of this discussion, a framework or model of the secondary school mathematics curriculum was constructed. Based on a taxonomy of educational objectives developed by Bloom (1956), the model described the mathematics curriculum in terms of content and behaviors. The content consisted of number systems, algebra, and geometry, while the behaviors included the cognitive and affective domains.

Mathematics content was described as progressive or sequential in nature. For instance, the concepts involved in number systems were arranged in order from simple to complex. Many of the concepts described required proficiency in prior content. Number systems preceded algebra and geometry. However, it was emphasized that much of the content was incorporated throughout the curriculum rather than in a specific course. In describing the sequential aspects of the mathematics curriculum, Wilson noted that although the curriculum was sequential, a given topic might be presented at increasing levels of sophistication. While some topics logically precede others, the dividing line between content areas was described as unimportant.

The levels of behavior were described as being both hierarchical and ordered. The levels were subdivided into the cognitive and the affective domain. The cognitive domain included computation, comprehension, application, and analysis. The affective domain included interests and attitudes, and appreciation.

Computation items were designed to require recall of basic facts and terminology. Emphasis was upon knowing and performing operations, not upon deciding which operations were appropriate. Comprehension items related to recall of concepts and generalizations; the emphasis was upon demonstrating understanding of concepts and their relationships, not upon using concepts to produce a solution. Application items required recall of relevant knowledge, selection of appropriate operations, and performance of the operations. Analysis items required a nonroutine application of concepts, such as the detection of relationships, the finding of patterns, and the organization and use of concepts and operations in a nonpracticed context.

Wilson commented on the hierarchical and ordered nature of the levels of behavior:

    It is ordered in the sense that analysis is more cognitively
    complex than application, which is in turn more cognitively
    complex than comprehension, and the computation level
    includes those items which are the least cognitively
    complex. It is hierarchical in that, for example, an item
    at the application level may require both comprehension
    level skills (selection of appropriate operations) and
    computation level skills (performance of an operation).
    (p. 649)








The affective domain was not described as hierarchical. It was emphasized, however, that the affective domain must not be discounted when considering instruction and evaluation of mathematics.



Effectiveness of Standardized Tests as Placement Instruments


While several sources in the literature demonstrated the necessity for placement in community colleges, the means of achieving effective placement have been somewhat elusive. As cited, most methods of placement involve the use of a standardized testing instrument, either as a sole means of placement or used in conjunction with other criteria. One of the purposes of placement is the prediction of success in the course or courses into which the students are placed. Standardized tests have demonstrated only limited predictive value. Apparently other factors are involved which have been difficult to detect.

McClelland (1973) questioned the use of standardized tests both as predictors of academic success and as predictors of "success in life" as defined by certain accomplishments such as job status, earnings, and satisfaction. McClelland noted that researchers have had difficulty in demonstrating that grades in school are related to any other behaviors of importance--other than doing well on aptitude tests. Making a case that standardized tests and grades correlate highly only with one another, McClelland urged that a wider array of talents be assessed for college entrance. While the argument applies mainly to the use of tests as selection instruments, their value as placement instruments has also been subject to question.

Haney (1980) examined the use of standardized tests in a broad context and related considerable controversy over their use. Sharp disagreement was reported among testing experts on issues of test bias and validity. Haney maintained that while a wide variety of inferences may be drawn from any test score, one acid test of what inferences were drawn was how the scores were used by social institutions.

Haney commented on the differences between aptitude tests, which were reported as being used for screening or selection purposes, and minimum competency tests. The competency testing movement generally represented a government or institutional effort to regulate and improve schooling. Haney cited Jensen, who commented that the only justification for competency testing for placement purposes was evidence that the alternative treatments were more beneficial to the individuals assigned to them than would be the case if everyone got the same treatment. Jensen concluded that minimum competency testing would not contribute to the solution of the problems of test bias and validity, since it appeared to be an unnecessary stigmatizing practice with no redeeming benefits to individual pupils or to society.

Jensen apparently felt that alternative treatments, i.e., remediation or low-level classes, were not effective. This view is consistent with information reported by Hills (1971), who cited research indicating that remedial courses had not been shown to be effective. Popham (1975), however, felt that competency testing for placement was useful because it allowed isolation and remediation of instructional deficiencies. Apparently, Popham assumed that remediation was possible and effective.

A distinction has been made in the literature on testing between two types of tests--norm-referenced and criterion-referenced. Fundamentally, the difference is based on whether performance is compared to that of other individuals (norm-referenced) or to certain predetermined standards (criterion-referenced).

Glaser (1963) and Popham and Husek (1969) were the first to introduce and popularize the field of criterion-referenced tests (CRTs), which promised to be a significant breakthrough in education. The CRTs were seen as a means of maximizing the potential of each student. Cross (1976) discussed individualized, competency-based, or mastery learning and pointed out similar advantages of CRTs.

Hambleton, Swaminathan, Algina, and Coulson (1978) noted that the introduction of CRTs was intended to meet the testing and measurement requirements of objectives-based instructional programs. Problems arose regarding a precise, acceptable definition of the criterion-referenced test, the central issue being the use of the word "criterion." "Criterion" is best defined for these purposes as a domain of behaviors, not a performance standard, minimum proficiency, or cut-off score. Popham (1975) provided the best workable definition of a CRT: "A criterion referenced test is used to ascertain an individual's status (referred to as a domain score) with respect to a well-defined behavior domain" (p. 2).

Controversy existed regarding terminology for tests of this sort, with criterion-referenced, domain-referenced, and objectives-referenced being the three terms discussed most. Popham advocated the term "criterion-referenced" because of considerable public support and the ill-advisedness of beginning a new campaign for "domain-referenced" tests, even though the latter term is probably the most descriptive.

According to McClelland (1973), competency-based testing should

1. be criterion-referenced (not norm-referenced);

2. be designed to reflect changes in what the individual has learned (not measure "native intelligence");

3. provide public and explicit information on how to improve on the characteristic(s) tested (not keep answers secreted away from the public);

4. assess competencies involved in clusters of life outcomes (not test esoteric qualities that are of little use in the real world), such as
   a. communication skills,
   b. patience,
   c. moderate goal setting, and
   d. ego development;

5. involve operant as well as respondent behavior (not require only pre-determined responses that may be unfairly limited, i.e., all true-false or multiple choice); and

6. sample operant thought processes to get maximum generalizability to various action outcomes (so that students can see the relevance of the skill, its application, and its ramifications in life situations).

The rapid acceptance of criterion-referenced testing in general was not without problems, however. An urgent need for the establishment of standards, both for the development of CRTs and for demonstrating their validity, was noted by Evans (1975).

Novick and Lewis (1974) dealt with these problems, as well as with a problem concerning the length of CRTs for a specified objective, commenting that

    The minimum acceptable length depends on the
    manner in which test information is used to make
    decisions about individual students, the level of
    functioning required for defining mastery of an
    objective, the relative losses incurred in making
    false positive and false negative decisions, the
    background information available on the student
    and on the instruction process, and the premium on
    testing time within the instructional process.
    (p. 139)

Novick and Lewis adopted guidelines which effectively said that test lengths of 12 items or fewer for a specified objective were very desirable, lengths above this and up to 20 were tolerable, and tests longer than this for a single objective were discomforting. Tables of test lengths, taking the above-mentioned factors into account, were published. They concluded that "mastery must be confirmed by a test that permits demonstration of non-mastery" (1974, p. 158).
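Novick and Lewis's length guideline is essentially a three-band rule. The sketch below restates it; the band labels are theirs as summarized above, while the encoding as a function is this editor's illustration, not theirs.

    # The Novick and Lewis (1974) test-length guideline for a single
    # objective, restated as a simple classification.
    def length_verdict(n_items: int) -> str:
        if n_items <= 12:
            return "very desirable"
        if n_items <= 20:
            return "tolerable"
        return "discomforting"

    print(length_verdict(10))  # very desirable
    print(length_verdict(25))  # discomforting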

Special categories of CRTs were identified by Keesling (1974), in particular the specific type of mastery learning in which the order of presentation was crucial. Objectives that were subject to a priority ordering based upon task analysis or theories of instruction were distinguished from objectives that required no ordering of presentation or transfer of training. Keesling concluded that the validity of the proposed structure of relationships among objectives was very important. This hierarchy is not always detectable, however. Hambleton et al. (1978) commented that when it is possible for a set of learning objectives to be arranged into a learning hierarchy, the strategy of branch-testing would seem to offer considerable potential for decreasing the amount of testing while improving its quality.

The problems associated with criterion-referenced tests, particularly such issues as test score validity, determination of cut-off scores, and complicated legal actions in the courts, have not been totally resolved. However, Hambleton et al. (1978) reported that sufficient theory and practical guidelines were available for the construction of at least adequate CRTs and criterion-referenced testing programs.

Numerous studies have examined specific standardized tests, most

notably the SAT and the ACT and their correlation with grades. The

results have been inconclusive. Schade (1977) reported a study

carried out at a community college to determine the predictive

validity of various parts of two standardized tests toward academic

achievement as measured by the first semester GPA. The tests used in

the study were the ACT and the Missouri College Placement Test (MCPT).

Each of the ACT segments showed a significant correlation with first

semester GPA. The ACT mathematics segment had the lowest correlation,

r² = .125. Schade described the predictive power of the tests as

"poor to moderate" and indicated that the values of the correlation

coefficients were comparable to results that had been reported from a

variety of institutions. None of the MCPT segments showed a

significant correlation.

Schade theorized as to why his study and several others which he

cited failed to demonstrate high correlations between grades and test

scores. One possibility mentioned was a change in student motivation,









either positive or negative. College-level work could possibly

stimulate previously low achievers with lower test scores into better

performance or could stifle earlier motivation. A second possibility

noted was that the tests could be suspect. A low score was actually

impossible to interpret, possibly indicating insufficient native

capacity or inadequate training in the skills and abilities tested.

Test bias may have also been a factor. Thirdly, grades and a lack of

standardization of grading systems could have contributed to the poor

correlation.

Nolan (1976) also conducted a correlational study using ACT sub-

test scores and grades earned in corresponding subject areas to

determine the predictive value of ACT. The correlation analysis

yielded coefficients of such low magnitude as to make him conclude

that there was no significant relationship between ACT scores and

academic performance. The r² for mathematics grades and the ACT math

sub-test was reported at .07. Nolan stated that "it appears that high

school grades alone are the best predictors considering the negligible

amount of variance accounted for by ACT scores" (p. 4).

Larson and Scontrino (1976) examined the validity of high school

grade point average and of the verbal and mathematical portions of the

SAT as predictors of college performance over an eight year period.

They reported multiple-correlation coefficients that were

"consistently high." Interestingly, the mean proportion of variance

accounted for in the eight combined samples by using all three

predictor variables in combination was only 4.7% greater than the mean

proportion of variance accounted for by using the high school GPA as a

single predictor.







Fincher (1974) examined SAT scores in a state university system

over a 13-year period. He noted that the value of SAT scores lay in

their use in conjunction with high school grades. The zero-order

correlation of SAT scores alone with college performance was not

considered a sufficient indicator. However, used in addition to the

high school record, SAT was judged to improve predictive efficiency.

Fincher's comment regarding the mathematics portion of SAT was

noteworthy. He commented that the mathematics scale contributed with

less consistency than the verbal section of the test and would not

appear to be highly useful in one-half of the situations where it was

applied.

Other findings of interest by Fincher were confirmation of

previous findings that females were more predictable in academic

performance than males (when comparing SAT scores, high school grades,

and college grades) and the loss of information from combining SAT

math and verbal scores. Each portion of the test taken separately was

a better predictor than the combined score. These results were

consistent with previous findings and Fincher recommended continued

use of SAT as a selection instrument.

Price and Kim (1976) compared high school grades and entrance

test scores with performance in college. The ACT scores were used as

the standardized test scores. College GPA was mostly determined (75%)

by both high school grades and ACT scores. However, Price and Kim

concluded that it appeared reasonable to believe that ACT scores were

more significant and important in predicting a person's ability to

perform in college than were high school grades because the beta









coefficients of four specific fields of the ACT program were

relatively larger than those of high school grades.


Alternative Means of Assessment for Placement


Grades


Grades have traditionally been a basic means of evaluating,

recording, and reporting students' progress. Several sources in the

literature have expressed concern with using grades for objective

measurement purposes since grades are often determined subjectively.

Haase and Caffrey (1983) stated that grades were not good measures of

performance because of grade inflation and lack of standardization.

Schade (1977) commented similarly about the lack of

standardization of grades:

Among the many teachers, areas of study, and
institutions, there are a plethora of grading
standards. A student who achieves at a certain
level in one class should be expected to achieve
at approximately the same level in other classes.
Yet this does not always happen. If students were
to take the same course from different teachers,
whether at the same or at a different institution,
they would not necessarily make the same grades.
Different academic areas will tend to have
different standards. Some disciplines are
notoriously stringent and demanding while others
are the opposite. Compounding this difficulty is
the fact that lower ability students will tend to
gravitate towards those areas that are less
taxing. (p. 19)

Goldman and Slaughter (1976) questioned the use of the grade

point average (GPA) as a validation criterion. Most studies

attempting to validate standardized tests compare test scores to GPA.

However, because grades appear to be more explainable by unmeasured

traits than by test scores or previous grades, their use as a









validation criterion becomes suspect. Goldman and Slaughter further

maintained that composite GPA was a poorer predictor than single class

grades. Since composite GPA is made up of decidedly nonequivalent

components it is less reliable and hence less predictable than grades

from a single class.

Longstreet (1975) questioned the use of grades as fair and

objective measures of performance, maintaining that grades are used

for convenience in administration and for tradition. Longstreet noted

the important difference between grading based upon knowledge of

subject matter and grading based on comparative scores. Longstreet

called for alternatives to traditional grading such as mastery

learning, contract grading, self-assigned grading and conferences with

students, commenting that "criteria truly significant to the

development of an intellectually independent and creative individual

cannot be reduced to . . . a few letters or percentage points, however

convenient these may be bureaucratically" (p. 246).

McClelland (1973) questioned the validity of grades as

predictors, asserting that while grades and test scores correlate

highly with one another, neither can accurately predict future

measures of success in life. He noted that researchers have had

difficulty demonstrating that grades in school are related to any

other behaviors of importance other than doing well on aptitude tests.

He stated that while grade level attained seemed related to future

measures of success in life, performance within grade was related only

slightly. Results of several studies indicate that superior on-the-

job performance is in no way related to better grades in college.









Clinical Evaluation


Though grades have been the subject of criticism, their use is

widespread, practically universal. As previously noted, standardized

test scores have also been subject to criticism for many of the same

reasons as grades. Alternative means of assessment have been reported

in the literature. However, there seems to be no agreement on what,

if any, measures should be used as alternatives. Grades and

standardized tests persist as the most common means of measurement.

Clinical and holistic methods of evaluation have been suggested as

alternatives. The concept of clinical evaluation, an analytical

assessment of competencies and deficiencies, and the prescription of

treatment based on direct communication and observation by a

practicing professional, was referred to by several writers.

Holistic evaluation, based on the theory that reality is made up

of unified wholes that are different from the simple sum of their

parts, has also been proposed as a viable addition to measurement and

evaluation methods dominated by traditional grades and standardized

tests. Oral exams, performance tests, situational tests,

observations, and checklists were suggested as complements and

alternatives (Roueche, 1980). A holistic approach emphasizes the

importance of the whole and the interdependence of its parts

(Ferguson, 1980).

Clinical and holistic evaluation have been referenced numerous

times pertaining to mathematics; however, the specific terms have not

been used. Many writers who proposed alternatives to standardized

testing made reference to clinical and holistic approaches to

evaluation.








Wilson (1971) concluded that standardized testing alone was not

sufficient for proper evaluation in mathematics:

Mathematics learning is a many-component task. It
should be measured or evaluated over a broad range
of criteria. The evaluation of mathematics
learning in terms of a single measure leads to
incomplete or even erroneous information. . . .
The use of standardized tests in the evaluation of
classroom learning is of limited value. They are
inappropriate for formative evaluation. For
summative evaluation, standardized tests tend to
concentrate on one level of behavior (and hence
limit the range of outcomes to be considered) or
combine scores, levels of behavior or content (and
hence limit the information that may be available
on the test). (p. 264)

Sueltz (1961) in the National Council of Teachers of Mathematics

(NCTM) Handbook on Evaluation in Mathematics commented that to determine the

level of sophistication of a student's work and the depth of

understanding of a major topic requires a much more refined procedure

than mere standardized testing. The writers of the Handbook further

concluded that evaluation of the thinking and procedures employed by

students usually is better done by careful observation and interview

than by objective testing.

Various writers have called for holistic and clinical approaches

to evaluation as alternatives to standardized tests and traditional

grading. Quinto and McKenna (1977), in an NEA published monograph,

suggested several alternatives to standardized testing as means of

evaluation. The alternatives included contract grading, conferences

with students, and checklists based on observations. Kopfstein (1980)

commented on community college reading and study skills programs,

proposing certain methods of evaluation that would be considered

clinical, although the term is not specifically used. Student

interviews were one of the primary methods suggested. O'Reilly,








Vogler, and Asche (1980) suggested several options that could be used

as alternatives to standardized tests, all of which would be

considered clinical or part of a holistic evaluation, although again

the terms are not used. Ginsburg (1975) recommended evaluating

student progress through "direct oral communication." Interviews and

conversations with students were essential to this type of evaluation.

In fields other than education, especially the medical field,

writers have called for additional methods of assessment. Clinical

observation procedures have been proposed. Kuliecke, Lloyd, and

Mathis (1982) identified problems in evaluation of medical students

and emphasized the necessity of process evaluation and not mere

product evaluation. Factors were cited that were responsible for poor

performance on medical examinations by medical students. Lack of

complexity in data analysis procedures, quality of training, mental

set at time of examination, experience with taking exams, effort put

into training, and barriers to assimilating training (especially

foreign language barriers) were cited. Kuliecke et al. called for

models of evaluation that could take these factors into account.

Shugars, May, and Vann (1981) discussed evaluation of dental

students. They maintained that cumulative data from several faculty

members should be used as a basis for grades. In addition, means for

assessing professionality in students was stressed. A clinical

judgment examination, wherein faculty members carefully observed

students, was integral to the dental evaluation model.

Halpin, Halpin, and Schaer (1981) compared objective measures of

writing to holistically scored essays. The holistic method of scoring

was based upon a generalized impression or global quality of the








essay. Halpin et al. found that 26% of the variance in the essay

scores was explained by the objective measures. Clemson (1978)

reported a somewhat higher correlation, r² = .49, in a similarly

conducted study. Objective measures were not sufficient in explaining

the total range of variation in holistically scored samples. The

holistic method of evaluation appeared to be at least as useful as

objective measures in assessing competence.



Attitude and the Affective Domain


Corcoran and Gibb (1961) discussed attitude appraisal in the

learning of mathematics, noting that suitable instruments were not

widely available. Attitude towards math involves both cognitive and

non-cognitive aspects. They noted that a student's attitude toward

mathematics is a composite of intellectual appreciation of the subject

and emotional reactions to it. Corcoran and Gibb examined attitude

according to direction (attraction or repulsion) and intensity (strong

or weak). Other important aspects of attitude toward mathematics were

noted as consistency, salience, reaction to difficulty, interest, and

value.

Various methods of assessing attitudes toward math were reported,

basically separated into self-reports wherein students reported their

own attitudes, and observer reports, based on interviews with the

students. Thurstone scaling methods, Likert scales, Guttman-type

scales, and Hoyt-MacEachern scales were used in self-reporting

attitude appraisal. The Minnesota National Laboratory instrument was

suggested for observer reporting. Fouche (1961) summarized Corcoran

and Gibb's chapter by stating that ignoring attitude evaluations








completely would be almost to treat students as mere learning

machines, devoid of feelings and emotions, and such higher, complex

behavior as creativity and discovery. Fouche indirectly referred to

holistic evaluation in math:

A little thought will show that what is generally
meant by "testing" is really uniform testing of a
number of students. The conscientious, skillful,
perceptive mathematics teacher is constantly
making evaluations of a student's verbal answers,
of his blackboard work, of his facial expressions
and other non-verbal behavior, but tests are
required nonetheless in order to have uniform and
easily comparable information about the behavior
of all students in the same situation. (p. 172)



Learning Style


Since no two students have the same learning style, Roueche

(1980) maintained that no single methodology of evaluation would fit

all students. Some students are "right-hemisphere preferenced"; that

is, they excel at holistic and spatial functions. Traditional methods

of instruction and evaluation, designed around left-hemisphere

strengths (verbal and analytic) of the middle class, will not reach

them according to Roueche.

Within the last decade research having far-reaching implications

for evaluation and placement has been done regarding learning styles.

Dunn, Dunn, and Price (1977) stated that how a student learns is

perhaps the most important factor in his academic achievement. Dunn

and Dunn (1979) developed a "Learning Style Inventory" (LSI) based on

research data that yielded 18 categories which suggest that learners

are affected by the following elements:








1. immediate environment (sound, temperature, light, and

design),

2. emotionality (motivation, responsibility, persistence, and

structure),

3. sociological needs (self, pairs, peers, teams, adult, and/or

varied), and

4. physical needs (perceptual strengths and/or weaknesses, time

of day, intake of foods and fluids, and mobility).

Dunn and Dunn (1979) reported that teachers were able to

recognize some learning style elements with considerable accuracy.

Certain other elements were admittedly more difficult to assess. They

further commented that people "learn in ways that differ dramatically,

but certain students achieve only through selected methods--methods

that frequently fail to produce academic results for others" (p. 238).

The Productivity Environmental Preference Survey (PEPS), an adult

version of the LSI, has also been developed by Dunn and Dunn. The

PEPS would be suitable for use in community college curriculum,

instruction, and evaluation.

Farr (1971) confirmed in a study of learning styles that

individuals could accurately predict the modality in which they could

demonstrate superior learning performance. The data revealed that "it

is advantageous to learn and be tested in the same modality and that

such an advantage is reduced when learning and testing are both

conducted in an individual's non-preferred modality" (p. 242). The

most desirable conditions existed when learning and testing were both

in the student's preferred modality.








Domino (1970) reported that students who had been exposed to a

teaching style consonant with the ways they believed they learned

scored higher on tests, fact knowledge, attitude, and efficiency of

work than those who had been taught in a manner dissonant with their

orientation. Hunt (1981) asserted that learning style described

students in terms of those educational conditions under which they are

most likely to learn. To say that a student differs in learning style

means that certain educational approaches are more effective than

others, said Hunt. This differed somewhat from Dunn and Dunn in that

Hunt viewed learning style as a malleable trait.

Hunt (1981) further stressed the reciprocal relation between

psychological theory and educational practice, concluding that

reciprocity is a central feature in matching of styles. Taking such a

reciprocal view of matching, Hunt declared, more reasonably accounts

for the continuously changing nature of the teaching/learning

transaction.

Davidman (1981) was critical of some of the assertions made

regarding learning styles. In particular, the validity of the LSI was

questioned. Davidman argued that, although the LSI provided

interesting information, it should not be taken as a clear and

irrefutable indication of a student's pattern of learning. He

concluded by stating his belief that brief, teacher-made instruments

would initiate a more useful diagnostic process.

Dunn, Dunn, and Price (1981) countered Davidman's criticism by

maintaining that the LSI offered a reliable and practical alternative

to "soft evaluation." They stated that "in conjunction with teachers'








insights and experiences, it could provide the foundation for building

learning environments designed to meet the needs of individuals"

(p. 646).

Palow (1979), as previously cited, listed learning-style

preference as a major consideration for placement of students.

However, no other sources were found that used learning style

preference as a placement strategy.


Summary


Placement has been a continuing problem for community colleges in

particular, because of the great diversity that characterizes their

clientele. Since entering students vary greatly in academic skills,

some means of assigning students to courses commensurate with their

skills has been considered imperative. Standardized tests have been

the most commonly used means of assessment for student placement.

High school grades have also been used, with slightly better success

than tests. Alternative means of assessment, including clinical and

holistic evaluation, assessment of the affective domain, and learning-

style preference have been proposed. Because of the difficulty in

reliably measuring these alternatives, the use of traditional grades

and standardized tests persist as the most widely used placement

techniques.

Numerous studies have considered the relationship of standardized

tests and grades. Most of these, however, examine the predictive

validity of the tests for use as selection instruments. Since

mandatory testing for placement is relatively recent in community

colleges, information regarding the effectiveness of certain commonly







used tests as placement instruments was limited. Only a small amount

of information was found in the literature regarding appropriate

placement strategies in community colleges. Most of the studies had

a similar rationale and underscored the necessity for placement.

However, it appears that an effective, agreed-upon system of placement

has not been reported.













CHAPTER III

METHODS AND PROCEDURES


Research Design and Selection of Variables


In order to determine the relationship between placement tests

and student performance in mathematics, and thus address the first

question of this study, several decisions were required in order to

design a suitable research model. As previously noted, four tests

have been approved by the Florida legislature for placement purposes

in higher education. While information on all four tests would have

been desirable, data on two of the tests, MAPS and ASSET, were not

available from the college used in this study. Both ACT and SAT have

long been used as admissions tests and data were available. The MAPS

and ASSET tests are relatively new and data concerning these two

tests, when available, should be used in subsequent studies.

High school grades have been commonly reported in the literature

as effective predictors of college grades. Several writers

recommended using high school grades in conjunction with placement

tests for the most effective prediction of college grades. Prior

grades typically have proven to be the best predictors of subsequent

grades. Therefore, the possibility of including high school grades as

a second independent variable was considered. However, since previous

research has been adequate in demonstrating the usefulness of high

school grades, they were not used as a second independent variable.







Since test scores were the only state-mandated measures for math

placement at the time of this study, standardized test scores were

used as the sole independent variable.

Several alternatives were available for selection as measures of

student performance to use as the dependent variable. Various

measures of performance and combinations thereof have been reported in

the literature in the assessment of student performance. Student

opinion, teacher opinion, clinical evaluation methods, other

standardized tests, grade point average, student retention, and other

factors have been used to measure relative success. Convincing

arguments have been made for the use of measures other than grades in

evaluating student performance. Grades have been criticized as

lacking objectivity and standardization. These same criticisms apply,

however, to the previously mentioned alternative means of assessment.

Grades have continued to be accepted as the most commonly used measure

of student performance. Therefore, grades in math courses were chosen

as the measure of student performance for this study.

Relationships between test score and grade would be most

meaningful when considered for each course individually. Grades in

different mathematics courses would obviously have different meanings

since math courses are sequentially arranged in the curriculum. For

instance, a grade of "A" in a college preparatory course would have a

much different meaning than a grade of "A" in calculus, since calculus

is near the end of the sequential mathematics curriculum, whereas

college preparatory math is a simpler course.

For these reasons, ACT and SAT scores and grades were correlated

for each of the eight selected math courses. A composite correlation

of test score and grades was also performed for each test.









Factors found in the literature and thought to be of potential

use in the placement process were studied by interviewing students who

had enrolled in college mathematics courses and were successful, i.e.,

achieved grades of C or better in courses other than MAT 1000

(Introductory Math Skills) or MAT 1002 (Basic Mathematical Skills) but

had test scores below or slightly above the cut-off scores.


Sample Selection and Data Collection


Given the variables which were to be used in the study--placement

test scores and grades--a source of data containing these variables

was located. Data from all students that had enrolled for the first

time in the summer or fall semester of 1984 who had entrance test

scores on record as well as a math course and grade were obtained

through the Management Information Services division of Santa Fe

Community College, one of the 28 community colleges in Florida's

higher educational system. The college is located in Gainesville, in

north-central Florida, and draws students mainly from its local

district, as well as from throughout the state and from other states

and nations.

The demographics of the selected cases obtained from the college

were found to be similar to the statewide population of community

college students. Table 3.1 shows the similarity.

The breakdowns by race and sex for recent statewide high school

graduates were quite similar as well. Statewide data represent

students in the 15-19 age group, while data from the college in the

study represent 1984 high school graduates (Table 3.2).










Table 3.1

Demographics of First-Time Community College Students


Santa Fe Community College              Students Statewide


Sex   Male    44%                       Sex   Male    41.3%
      Female  56%                             Female  58.7%

Race  Black   11%                       Race  Black   10.8%
      White   79%                             White   77.3%
      Other   10%                             Other   11.8%






Table 3.2

Demographics of Recent High School Graduates Entering the
Community College System


Age 15-19
Santa Fe Community College              Students Statewide


Sex   Male    43.4%                     Sex   Male    43.7%
      Female  56.5%                           Female  56.3%

Race  Black   13.6%                     Race  Black   11.4%
      White   79.3%                           White   72.4%
      Other   7.0%                            Other   16.0%


The data for students actually used in the sample, i.e., those

who had a test score and passed a course, were also similar. Table

3.3 depicts this.










Table 3.3

Demographics of Sample of 1984 High School Graduates


Number %


Sex Male 280 46.2
Female 325 53.8

Total 605 100.0

Race Black 67 11.0
White 484 80.0
Other 54 9.0

Total 605 100.0




The data were collected from a computer print-out from the

college which contained information on all of the entering students

for the 1984-85 academic year (including both summer and fall

semesters) who had graduated from high school the previous academic

year, 1983-84. Only data from those students who had a test score

on record and had enrolled in and completed a math course were

collected. A total of 605 complete sets of data were used in the

study.


Description of Data


Tests


The two placement tests used in this study were the ACT and

SAT examinations. As noted previously, these tests were designed

for predictive purposes and have been commonly used as selection









instruments. However, these tests were widely used for placement

purposes in most of the community colleges in Florida at the time of

this study.

Kline (1972), in Buros' Seventh Mental Measurement Yearbook,

explained that the ACT Mathematics Placement Examination was developed

to assist colleges in placing entering students in the mathematics

classes most appropriate for their ability and preparation. The test

contains four categories: intermediate algebra, college algebra,

trigonometry, and "special topics." Kline suggested that the test may

be most useful for predicting success in highest level freshman

mathematics, analytic geometry, and calculus. The reliability, KR-21,

was reported at .81 for the total score.

Dubois (1972) and Wallace (1972) also reported in the Buros

yearbook. The purpose of SAT was described as an aid in assessing

students' competence for satisfactory achievement in college. The SAT

was designed to be effective over the full range of abilities of

students. The test had been found to have "reasonably good"

validities for predicting college achievement. Internal consistency

reliability was reported at .90 for SAT-mathematics. Alternate-form

coefficients of correlation were reported at .88. It was also noted

that multiple correlation studies with high-school grades yielded

better predictive results for college grades than SAT alone.

Scores on the ACT in this study ranged from 1 to 32. The mean

score on ACT was 14.262, the median score was 14.5, and the mode was 10.

The standard deviation was 6.288. The mean SAT score was 433.029,

with standard deviation of 92.529. Median score was 429.333 and the

mode was 450. Of the 605 student scores used in the study, 421 were









ACT scores and 184 of the scores were SAT. Scores on the SAT ranged

from 210 to 710.



Courses


Eight math courses were chosen for use in this study, ranging in

content from fundamental skills in arithmetic to the first course in

calculus. These courses were by far the ones selected most often. A

small percentage of students chose other courses, such as business

math or the second course in calculus; these students were not

included in the study. Twenty-three (23) students were enrolled in MAT 1000,

Introductory Math Skills. This course was designed for students who

needed to develop basic computational skills and improve accuracy with

basic arithmetic facts. Included in the course was a math lab, where

students worked individually to improve skills.

One hundred ninety-eight (198) students enrolled in MAT 1002,

Basic Mathematical Skills, which included the arithmetic of whole

numbers, fractions, decimals, and percent. This course also had a

lab component where students worked individually to develop and

improve skills.

One hundred ten (110) students enrolled in MAT 1024, Elementary

Algebra. This course represents the first of three algebra courses

offered by the college and included the study of algebraic notation

and terminology and the addition, subtraction, multiplication, and

division of general algebraic expressions, among other topics.

One hundred fifty-one (151) students enrolled in MAC 1102,

Intermediate Algebra. This course was the second in the algebra









series and included the study of complex rational expressions,

exponents, roots, radicals, and other algebraic topics.

One hundred twenty-one (121) students enrolled in MAC 1104,

College Algebra, the third course in the algebra series. This course

included the study of relations, functions and conic sections, theory

of equations, systems of equations, exponential and logarithmic

functions, and other related topics.

The sixth course of the eight courses used in this study was MGF

1113, Principles of Mathematics. Only 13 students enrolled in this

course which presented an overview of the various branches of

mathematics and their development. Sets, logic, introduction to

algebra, statistics, probability, and geometry were among the topics

covered.

Trigonometry, MAC 1114, was the choice of 22 students. This

course covered the study of the six trigonometric functions, their

interrelationships, and their application to both right and oblique

triangles.

Finally, 29 students enrolled in MAC 2311, Calculus I with

Analytic Geometry. This course was the first course offered in the

calculus series and included the study of limits, the derivative and

its geometric interpretation, continuity, integration of algebraic and

trigonometric functions, and applications of integration to area.



Grades


Students who received grades of "W" (withdrawal) or "I"

(incomplete) were not included in the study. Sixty-two (62) students,

approximately 9%, had grades of "W" or "I" and were omitted.








Data Analysis


Test scores, course, and grade were entered and analyzed using

the Statistical Package for the Social Sciences (SPSS).

Initially, the data were separated by test score, ACT or SAT, yielding

two groups. Since no student had scores on both the ACT and SAT, no

direct comparison of tests was made. The relationship of test score

to grade was determined separately for each test.

The data were then categorized by course. The eight courses were

analyzed for each test, yielding 16 tables and graphs (summarized in

Chapter IV). The Pearson product-moment correlation (Pearson's r) was

computed for each course, correlating test score with grade. Finally,

composite correlations were done for each test across all courses,

comparing test score to grade.
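
The SPSS runs themselves are not reproduced in this report. A minimal
re-expression of the analysis in Python (with an assumed numeric grade
coding of A = 4 through F = 0, which the text does not specify, and
illustrative records in place of the actual data file) would proceed
as follows:

    from collections import defaultdict
    from math import sqrt
    from statistics import mean

    def pearson_r(xs, ys):
        # Pearson product-moment correlation of paired observations.
        mx, my = mean(xs), mean(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / sqrt(sxx * syy)

    # Each record: (test, course, score, grade points); values illustrative.
    records = [("ACT", "MAT 1002", 10, 2.0), ("ACT", "MAT 1002", 18, 4.0),
               ("SAT", "MAC 1102", 430, 3.0), ("SAT", "MAC 1102", 560, 2.0)]

    by_group = defaultdict(lambda: ([], []))
    for test, course, score, grade in records:
        xs, ys = by_group[(test, course)]    # separate ACT from SAT, then
        xs.append(score)                     # categorize by course
        ys.append(grade)

    for (test, course), (xs, ys) in sorted(by_group.items()):
        if len(xs) > 1:                      # r is undefined for n < 2
            print(test, course, round(pearson_r(xs, ys), 2))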


Student Interviews


The literature suggested that other factors should be used in

addition to test scores when evaluating students for placement or

other purposes. However, identifying, defining, quantifying, and

using these other factors has been rather difficult and, as yet, no

strategies have been implemented on a statewide basis that make use of

any factors other than test scores.

In an attempt to identify additional factors that may be of use

in placement of students, interviews were conducted. Students who

scored below or slightly above the state-mandated cut-off scores

(effective August, 1985) were identified. Those students who

successfully completed a college-level mathematics course ("C" or








better) were asked what factors they considered important to their

success. The purpose of these interviews was not to draw any

conclusions or generate any statistical evidence on other factors for

placement. Rather, the identification of potential factors for use in

a placement model was the intent of the interviews.

The cut-off scores on ACT and SAT (math) have been mandated as 13

and 400, respectively. Students who scored below the cut-off or

slightly above were interviewed. Specifically, students whose score

on ACT was 15 or less, or 440 or less on SAT, were interviewed, but

only if they achieved a "C" or better in a college-level math course

(MAT 1024 or above).
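
Stated as a simple screen (a sketch only; the course-code set is
inferred from the course descriptions in Chapter III, and the field
layout is illustrative rather than taken from the study's data file),
the interview-eligibility rule was:

    COLLEGE_LEVEL = {"MAT 1024", "MAC 1102", "MAC 1104", "MGF 1113",
                     "MAC 1114", "MAC 2311"}    # "MAT 1024 or above"
    PASSING = {"A", "B", "C"}                   # "C" or better

    def interview_eligible(test, score, course, grade):
        # Below or slightly above the mandated cut-offs (ACT 13, SAT 400):
        # interview students with ACT <= 15 or SAT <= 440 who nonetheless
        # earned a C or better in a college-level mathematics course.
        low_score = score <= 15 if test == "ACT" else score <= 440
        return low_score and course in COLLEGE_LEVEL and grade in PASSING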

Travers (1969) discussed the effects of an interviewer on the

results. As much uniformity as possible was recommended in order to

maintain consistency in the conditions under which the information was

collected. Interviews were suggested that were not too highly

structured, allowing the interviewee to respond freely but still in

accord with the subject at hand. The same introductory and concluding

remarks were recommended. Therefore, each interview began with an

introduction of the caller, a brief explanation of the purpose of the

interview, and a question as to whether the student would be willing

to participate. Following the interview, the interviewer thanked the

student for cooperating. The interview process was very similar,

then, for each student contacted.

Difficulties in quantification of results of interviews were also

reported by Travers. However, the purpose of the interviews in this

study was not to generate empirical data, but rather to identify

potential factors for use in placement strategies. Therefore,








quantification of responses was not part of the interview process,

except to report in a general way what the students said.

An interview guide is located in the appendix of this report.

The guide represents a framework from which the interviewer (this

writer) conducted the interviews. Twenty students were interviewed,

all by telephone. Results of the interviews are reported in Chapter

IV.



Development of the Clinical Model


As a result of the literature search in developing a rationale

for this study, several alternatives to standardized testing emerged.

Using the term "clinical evaluation," the writer attempted to combine

several of these aspects into a model which could be used to evaluate

student competence in mathematics. The intended use of the model was

to provide additional information beyond test scores, that could be

useful in decisions regarding placement, remediation, and curriculum.

The model was grounded in theoretical concepts that seemingly

would be of value when applied to practical problems in community

colleges, such as placement, design of college-preparatory courses,

and evaluation of student competence. In an attempt to validate the

effectiveness of the model, a "panel of experts" was consulted and

asked to respond to the model with respect to content and feasibility.

The panel consisted of knowledgeable persons in the field of

community college mathematics. Specifically, eight persons were

contacted. Two of the individuals were university professors familiar

with the problems of mathematics placement in the community colleges.

One of the panel members was a community college administrator,








likewise familiar with mathematics and related problems. One was a

counselor at a community college who was experienced in problems

of mathematics evaluation and placement. Four of the panel members

were mathematics instructors at community colleges, one of whom was

the department head.

The panel members were in agreement with the concept of such a

model. All of the members supported the idea that information in

addition to test scores would be useful. At various stages of the

development of the model various panel members made suggestions to

enhance the development of the model. The model was revised several

times and eventually a final version was submitted to the panel for

their comments, criticism, and suggestions as to the feasibility and

content of the model. A description of the model and a summary of the

panel's reaction to the model are included in Chapter IV.













CHAPTER IV

RESULTS



Results are presented in this chapter in response to the three

questions outlined in Chapter I. Results of the data analysis to

determine the relationship between placement test scores and success

in initial math courses (Question one) are presented in Table 4.1.

This table summarizes the correlations of ACT and SAT scores and

grades earned in the eight entry level mathematics courses described

in Chapter III. The correlations are presented separately for ACT and

SAT. The Pearson product-moment correlation (r) is given for each

course. The final two rows represent the correlation for a composite

of grades across all courses, for ACT and for SAT.

A further analysis of the comparison of test scores and grades is

provided in Tables 4.2 and 4.3. The correlations reported in Table

4.1 which were significant at the .95 level are included in these

tables. Test scores are grouped into four categories which are

indicated in a horizontal arrangement. Grades are listed vertically

from A down through F. A cross-tabular effect is thus presented,

showing the number (n) and percentage of students who earned a

specific grade (A through F) and whose test score was in a certain

category. Table 4.2 presents ACT scores and grades, cross-tabulated

for the three courses which had significant r levels, and the

composite of all ACT scores and grades. Table 4.3 presents the same

information for SAT scores.










Table 4.1

Correlation of Test Score and Grade for ACT and SAT
for Entry Level Mathematics Courses

STATISTICS

Course Test n r


Introductory Math ACT 20 -.01
SAT 3 -.50

Basic Math ACT 160 .31*
SAT 27 -.04

Elementary Algebra ACT 71 .04
SAT 32 .19

Intermediate Algebra ACT 77 -.07
SAT 54 .22*

College Algebra ACT 62 .21*
SAT 42 .47*

Principles of Mathematics ACT 7 -.42
SAT 5 -.24

Trigonometry ACT 11 .78*
SAT 10 .69*

Calculus ACT 13 .25
SAT 12 -.16

Composite ACT 423 .16

Composite SAT 188 .12


Note: Composite score included 4 scores on SAT and 2 scores on ACT
which were subsequently dropped because the courses were not
used in the study.

*Significant at the 95% level.
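
The significance flags can be verified with the standard t test for a
correlation coefficient (a textbook formula, not one stated in this
report):

    t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}, \qquad df = n - 2

For Trigonometry and ACT, for instance, t = .78\sqrt{9}/\sqrt{1-.78^{2}}
is approximately 3.7 on 9 degrees of freedom, well beyond the two-tailed
5% critical value of about 2.26.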











Table 4.2

Cross-Tabulation of ACT Scores and Grades


ACT Scores


0-7 8-15 16-23 24-32


Course Grades N % N % N % N %


Basic Math A 6 4% 29 18% 0 0% 0 0%
B 15 9% 26 16% 0 0% 0 0%
C 9 6% 19 12% 2 1% 0 0%
D 0 0% 7 4% 1 0% 0 0%
F 27 17% 19 12% 0 0% 0 0%

College A 0 0% 2 3% 5 8% 8 13%
Algebra B 0 0% 2 3% 10 16% 5 8%
C 0 0% 1 2% 10 16% 0 0%
D 0 0% 2 3% 2 3% 1 2%
F 0 0% 1 2% 11 18% 2 3%

Trigonometry A 0 0% 0 0% 2 18% 3 27%
B 0 0% 0 0% 1 9% 0 0%
C 0 0% 0 0% 3 27% 0 0%
D 0 0% 1 9% 1 9% 0 0%
F 0 0% 0 0% 0 0% 0 0%

Composite A 8 2% 42 10% 20 5% 18 4%
B 19 5% 37 9% 29 7% 5 1%
C 14 3% 34 8% 37 9% 3 1%
D 1 0% 19 5% 17 4% 2 1%
F 36 9% 33 8% 40 10% 3 1%











Table 4.3

Cross-Tabulation of SAT Scores and Grades


SAT Scores


210-330 340-450 460-570 580-710


Course Grades N % N % N % N %


Intermediate A 0 0% 2 4% 2 4% 1 2%
Algebra B 0 0% 7 13% 7 13% 1 2%
C 0 0% 15 26% 1 2% 1 2%
D 1 2% 5 9% 1 2% 0 0%
F 1 2% 6 11% 2 4% 1 2%

College A 0 0% 2 5% 5 12% 4 10%
Algebra B 0 0% 3 6% 3 6% 1 2%
C 0 0% 5 12% 2 5% 0 0%
D 0 0% 5 12% 2 5% 0 0%
F 0 0% 5 12% 5 12% 0 0%

Trigonometry A 0 0% 0 0% 2 20% 2 20%
B 0 0% 0 0% 1 10% 0 0%
C 0 0% 3 30% 0 0% 0 0%
D 0 0% 1 10% 1 10% 0 0%
F 0 0% 0 0% 0 0% 0 0%

Composite A 9 5% 10 5% 12 6% 7 4%
B 9 5% 18 10% 17 9% 5 3%
C 1 0% 30 16% 7 4% 3 2%
D 2 1% 18 10% 4 2% 0 0%
F 6 3% 20 11% 9 5% 2 1%









Discussion of Results to Question One


Question one, which addressed the relationship between placement

test scores and performance in initial college mathematics courses,

was answered by correlating test scores and grades. As indicated in

Table 4.1, the Pearson product-moment coefficient of correlation (r)

was computed for each course, comparing test score to grade. The r

statistic is a numerical descriptive measure of the correlation

between two variables which measures the strength of the linear

relationship between them (McClave, 1982).
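
For paired observations (x_1, y_1), . . ., (x_n, y_n), here test scores
paired with grades, the statistic is computed as

    r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^{2} \sum_{i=1}^{n}(y_i - \bar{y})^{2}}}

and always falls between -1 and +1, with values near zero indicating
little or no linear relationship.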

The implicit assumption in such a comparison of test scores to

grades would be that if a positive linear relationship existed

between test score and grades, that relationship would be apparent

from the r statistic. Correlations of a highly positive nature would

indicate such a relationship. However, the results of this study

indicate quite low correlations, with many of them actually

negative.

Only two of the positive correlations were at the .50 magnitude

or higher. Grades in MAC 1114, Trigonometry, were correlated with ACT

scores at .78 and with SAT scores at .69. Other moderately positive

correlations were .47 for grades in MAC 1104, College Algebra, and SAT

scores, .31 for grades in MAT 1002, Basic Mathematical Skills, and ACT

scores and .25 for grades in MAC 2311, Calculus, and ACT scores. The

most striking aspect of the correlations appears to be the seven

negative values of r, and the modest to low nature of the correlations

in general. High positive correlations would have indicated large









proportions of low test scores corresponding to low grades and high

test scores corresponding to high grades. Low to moderate

correlations are seen as indicating weak relationships, with high test

scores sometimes associated with high grades, but also occurring with

low grades. Thus, the results from the correlations performed in

response to question one indicate that there is only a moderate to

weak relationship between test scores and grades. Standardized test

scores appear to be of only minimal value in predicting success in

initial mathematics courses. Apparently, other factors must be used

in appropriate placement.


Results With Respect to Question Two


Question two was an attempt to determine additional factors which

could be of use in placement strategies. The question asked to what

factors "high-risk" students attributed their successful completion

of college mathematics courses. As previously described, students

that may have been expected to do poorly in college-level courses, or

be excluded altogether, because of low test scores were interviewed.

These students had successfully completed a college-level course (MAT

1024 or above) in their first attempt. The interviews focused on why

they felt they were able to perform as well as they did.

The students were all willing to talk about their experiences in

the math courses. Using the interview guide (in Appendix A) as a

framework, the students' responses were tabulated. Results appear in

Table 4.4.









Table 4.4

Factors Considered Important by Students for Successful
Completion of College-Level Math Courses


Instructor    High School Math Background    Tutor    Effort/Time    Math Lab


   80%                 100%                    20%        90%           40%





Since there was no uniform manner of responding, each student's

responses were unique. There were, however, many similarities in the

comments the students made. The percentages given in Table 4.4 are

based on generalizations made by the interviewer in order to place

responses into one of the categories on the checklist. When

clarification was necessary, the interviewer asked a direct question

in order to determine the student's intended meaning.


Discussion of Results With Respect to Question Two


The intent of the survey of "high-risk" students was to identify

possible factors that could be of use in the placement process. By

speaking directly with students, the interviewer was able to get a

more personal feeling for which factors were apparently important.

The literature had identified many of the factors mentioned by the

students. The results of the interviews verified many of the factors

mentioned in the literature such as motivation/effort, high school

performance (grades), and "clinical evaluation" in the form of direct








personal assistance by the instructor, a tutor and/or use of the math

lab. Two factors not mentioned by the students that were found in the

literature were attitude toward math (affective domain) and learning-

style preference.

It would appear from the results of the interviews that three

main factors emerged as important. High school performance or

background in math, motivation and effort, and "clinical evaluation"

during the course were all mentioned prominently in the interviews.

It seems very likely that these factors, given adequate means of

defining and quantifying them, could be of great value in the placement

of students.

Every one of the students interviewed indicated that they had

completed high school mathematics courses beyond the first course in

algebra. Two of the students had taken calculus, 11 had taken

trigonometry/analytic geometry, 5 had taken the second course in

algebra, and 2 had taken geometry. This was unexpected since they

each were below the present cut-off score for placement in college

level courses.

A second important factor as identified by the interviews and in

the literature related to student effort. In general the students'

responses concerning effort and time spent preparing for the course

were related to high school background. For instance, the two

students who had taken calculus reported minimal time and effort

studying. Other students considered their time and effort to have

been extremely important. As reported, 40% made use of the math lab.

Quality of student effort is seen as an important factor in

student achievement. Pace (1980) maintained that pronouncements

should not be made concerning college impact without taking quality of









effort into consideration. Not only must the offerings of the

institution be considered but what the students do with those

offerings as well. In Pace's study, quality of effort was

demonstrated to be the single most influential variable in

accounting for students' attainment.

The third factor suggested in the interviews and having

considerable support in the literature was "clinical evaluation."

Direct personal assistance by the instructor or a tutor and use of the

math lab pertain to this type of assessment. Methods of identifying

students' deficiencies and prescribing remedial work were seen as

another important factor which should be used in the placement and

instructional process.



Results With Respect to Question Three


The third question of this study pertains to the construction of

a clinical evaluation model which makes use of factors identified in

the literature and in the student interviews described in Question

two. Such a model was constructed as a result of this study and was

evaluated by the panel of experts described in Chapter III.

A description of the model is presented herein. The panel's

reactions to the model and the discussion thereof follow.



Description of the Clinical Evaluation Model


Clinical evaluation is defined as an analytical assessment of

deficiencies and the prescribing of treatment based on direct

observation by a practicing professional. Clinical evaluation allows








for a holistic approach to evaluation that is not possible using

standardized testing alone. Use of the model should enable the

evaluator and the student to understand more fully the nature of the

deficiency in a given area than would be possible using standardized

testing alone.

The model presumed a hierarchy of levels of skills in

mathematics, wherein certain skills cannot be mastered without

previous mastery of more fundamental skills. Use of the model

depended greatly on the professional capabilities of the evaluator--

ideally a community college mathematics instructor. The professional

judgment and skill of the evaluator were seen as extremely important.

The model was divided into subject areas--Arithmetic, Algebra,

Geometry, Statistics--and levels--Algorithms, Concepts,

Generalizations, and Problem Solving. Figure 4.1 is a flowchart of

the clinical evaluation process. Individual flowcharts for each

subject area are presented as Figures 4.2 through 4.5. Each level on

the model presumed mastery of certain prerequisite levels. These

levels are shown in relation to one another as they would occur in the

evaluation process. Arithmetic was seen as a prerequisite subject for

all other subjects on the model (see Appendix B).


Using the Model


The clinical evaluation would begin with Algebra. The flowchart

(Fig. 4.1) represents the sequences that would follow. If the student

demonstrates mastery of the required skills in Algebra, it is

theorized that the student could also perform the required skills in

Arithmetic and thus the evaluation would proceed to Geometry and then
























[Flowchart graphic not reproduced; its decision points carry NO
branches leading to STOP.]

Figure 4.1. Flowchart for Clinical Evaluation








to Statistics. If the evaluator determines that the student has not

demonstrated at least 70% mastery of the skills in Algebra, the

evaluation must then also include Arithmetic. If Algebra and

Arithmetic were evaluated at less than 70% competence, the evaluation

would terminate.

Figures 4.2 through 4.5 represent the sequences of the clinical

evaluation for each subject area. The sequencing provides for rapid

progress through the flowchart, specifically if the student

demonstrates competence of higher levels, by presuming mastery of the

lowest levels (Algorithms). If, however, the student did not

demonstrate mastery of the higher levels (concepts, or generalizations

and problem solving) the Algorithm level must also be evaluated.

Mastery is defined at 70% or greater for each subject area.

By following the sequence indicated by the flowcharts, the

evaluator would consult a checklist to determine what specific skills

comprise that particular subject and level. The evaluator would then

formulate a problem pertaining to the first skill or select a sample

problem from a data bank. The student would attempt to solve the

problem, listing or explaining the steps taken and the result. The

evaluator must then determine whether the student's result is correct.

If so, the evaluator would proceed to the next enumerated skill for

that subject/level. If not, the evaluator would formulate another

problem, ask a related line of questions or anything further that may

help determine whether or not the student could perform the designated

skill. Since clinical evaluation calls for direct oral communication

between the student and evaluator, the evaluator would initiate

dialogue by asking appropriate, related questions. If, in the








evaluator's judgment, the student could not perform that skill, the

evaluator would note the skill and continue to the next skill. When

all the skills had been evaluated, the flow chart would be consulted

to determine the next step in the evaluation process (see Appendix C).
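
The per-skill loop just described can be summarized schematically (a
sketch; the checklist contents and the evaluator's judgment stand in
for Appendices B and C):

    def evaluate_level(checklist, judged_able):
        # `checklist` enumerates the skills for one subject and level;
        # `judged_able(skill)` stands for the evaluator's judgment after
        # the student works one or more problems and explains the steps.
        # Skills judged deficient are noted for later remediation.
        deficient = [skill for skill in checklist if not judged_able(skill)]
        proportion_mastered = 1 - len(deficient) / len(checklist)
        return proportion_mastered, deficient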

Algebra. The evaluation for Algebra would begin with the

Concepts level. If the student demonstrated mastery at the 70% level

or above, the evaluation would proceed to the Generalization and

Problem Solving questions. If not, the evaluation would proceed to

the Algorithm level. If that level is evaluated at less than 70%, the

evaluation of Algebra would terminate and proceed to Arithmetic.

Similarly, if the evaluation of Generalizations and Problem Solving

were evaluated at less than 70%, the evaluation would proceed to

Algorithms. In order to proceed to Geometry, the student would

demonstrate proficiency in 70% of the skills. If not, the evaluation

would proceed to Arithmetic.

Arithmetic. If Arithmetic was evaluated, the starting point

would be Concepts. Mastery would be demonstrated by performance at

70% or above. If the evaluator determined that the student had not

demonstrated mastery at the concepts level, the evaluation would

proceed to Algorithms. If Algorithm skills in Arithmetic were

determined as deficient, the evaluation would terminate. If, however,

the student demonstrated mastery of Arithmetic Algorithms, the

evaluation would proceed to Generalizations and Problem Solving. If

the student demonstrated mastery at 70% or above the evaluation would

proceed to Geometry. If not (and thus the overall performance in

Arithmetic is less than 70%) the clinical evaluation would terminate.








Geometry. The evaluation of Geometry would begin with the

Concepts level. The student would demonstrate mastery of at least 70%

or the evaluation would proceed to Algorithms. The student would

demonstrate mastery of Algorithm skills or the Geometry component

would terminate. If the student did demonstrate mastery of Concepts

the evaluation would proceed to Generalizations and Problem Solving.

The demonstration of mastery would be at least 70% or the Algorithm

level would be evaluated as well. Overall mastery of over 70% should

be demonstrated for Geometry.

Statistics. The evaluation of students' competence in Statistics

would begin with the Concepts level. If mastery was demonstrated at

70% the evaluation would proceed to Generalizations and Problem

Solving. If not, Algorithms would be evaluated. All of the

Statistics algorithms should be demonstrated successfully or the

Statistics component would terminate. The Generalizations and Problem

Solving skills should be demonstrated at least 70% or the Algorithms

level would be evaluated as well. Overall mastery of Statistics would

be demonstrated at 70%.
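
Gathering the subject and level sequences into one place, the decision
logic of the model can be sketched as follows. Two points are
assumptions of the sketch rather than specifications from the text: the
aggregation of level scores into overall mastery is taken to be a
simple average, and the routing of failed Geometry or Statistics
components back to Arithmetic (Figures 4.4 and 4.5) is omitted for
brevity.

    MASTERY = 0.70    # mastery criterion used throughout the model

    def evaluate_subject(score_of):
        # Level sequence within one subject area: begin with Concepts;
        # success advances to Generalizations/Problem Solving, failure
        # falls back to Algorithms (per Figures 4.2 through 4.5).
        scores = {"Concepts": score_of("Concepts")}
        if scores["Concepts"] >= MASTERY:
            scores["Gen/PS"] = score_of("Generalizations/Problem Solving")
            if scores["Gen/PS"] < MASTERY:
                scores["Algorithms"] = score_of("Algorithms")
        else:
            scores["Algorithms"] = score_of("Algorithms")
        return sum(scores.values()) / len(scores) >= MASTERY

    def clinical_evaluation(subjects):
        # Top-level sequence (Figure 4.1): Algebra first; failure routes
        # through Arithmetic, and failure there ends the evaluation.
        # `subjects` maps a subject name to its score_of function.
        mastered = []
        if evaluate_subject(subjects["Algebra"]):
            mastered.append("Algebra")
        elif evaluate_subject(subjects["Arithmetic"]):
            mastered.append("Arithmetic")
        else:
            return mastered                    # terminate the evaluation
        for name in ("Geometry", "Statistics"):
            if evaluate_subject(subjects[name]):
                mastered.append(name)
        return mastered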

Following the clinical evaluation, appropriate remedial work

for each deficient category should be prescribed. Math labs in the

community colleges would be equipped with remediation techniques

such as programmed instruction units (perhaps computerized),

workbooks, cassette tapes, flash cards, etc., that address the

enumerated skills.

The model, therefore, would be used to allow the evaluator and

the student to identify deficiencies and plan learning activities

which could strengthen those deficiencies. The model could be used as




























[Flowchart graphic not reproduced; its NO branches route the
evaluation to Arithmetic.]

Figure 4.2. Algebra Flowchart


























[Flowchart graphic not reproduced; it includes an Algorithms level,
with NO branches leading to STOP.]

Figure 4.3. Arithmetic Flowchart



























ALGORITHMS


NO NO




GO TO GO TO
ARITHMETIC ARITHMETIC




Figure 4.4. Geometry Flowchart



























ALGORITHMS


NO NO





GO TO GO TO
ARITHMETIC ARITHMETIC



Figure 4.5. Statistics Flowchart








a component of a well planned placement and instructional process in

mathematics. As noted previously, expert professional personnel would

be required in addition to a well-equipped "math lab" for the entire

developmental process to succeed. The extent to which these resources

and personnel are available determine the effectiveness and

feasibility of implementing a model such as this.

Discussion of Results to Question Three


The eight-member panel consulted for expert opinion concerning the model was in agreement on certain aspects but disagreed on others. Each member of the panel agreed that such a

model, if possible to construct, would be of great use in placement

and curriculum decisions. The panel split four to four over the

theoretical assumptions on which the model was based.

The model was based on the assumption that since mathematics was

of a hierarchical nature, the hierarchy could be used to evaluate

mathematical competencies. Wilson (1971) based an instructional model

on the hierarchical nature of mathematics. However, even though math

curriculum is structured according to such a hierarchy, four of the

panel members believed students do not reliably recall skills

according to that hierarchy. For this reason, the four panel members

doubted that the hierarchical nature of math could be used in the

evaluation model. Interestingly and probably significantly, the four

panel members to express such doubt were all community college

mathematics instructors who were familiar with the abilities and

skills of students. The counselor, administrator, and the two

university professors accepted the assumption that mathematical

hierarchy could be so used.







Finally, all of the panel members expressed concern over the

feasibility of implementing the model. Those who assumed that the

hierarchy could screen students were more optimistic. The assumption

was that fewer skills would actually be evaluated since many of the

skills would have been presumed mastered if students demonstrated

proficiency in the "higher" skills. Those who rejected the

hierarchical concept contended that nearly all skills would need to be

evaluated individually, thus greatly increasing the length of a

"clinical evaluation." The large amounts of time, resources, and

personnel necessary to implement a nonhierarchical model in the

colleges rendered it unfeasible in the opinion of the panel.













CHAPTER V

SUMMARY, CONCLUSIONS, RECOMMENDATIONS,
AND IMPLICATIONS


Summary


Placement for community college students in math has been

recognized as imperative. Effective, widely accepted placement

techniques have not been reported in the literature. This study has

analyzed the relationship between tests used for placement purposes

and subsequent grades in mathematics courses. In order to identify

and verify other factors which could be used in the placement process,

interviews were conducted with students who had successfully completed

their initial college level course, but had been identified as "high-

risk" as a result of low test scores. Three main factors emerged from

the interviews as important. High school preparation in mathematics,

quality of student effort, and clinical evaluation techniques were

considered important by the students in leading to their successful

completion of college-level courses.

A clinical evaluation model was developed which was based on

concepts found in the literature and confirmed by students in the

interviews. A panel of experts reacted to the model. Four of the eight experts saw the model as lacking validity, and problems with the feasibility of implementing a variation of the model were expressed.








Conclusions


The correlations between ACT and SAT scores and grades in initial

mathematics courses were found to be generally low. It is concluded

that the relationship between test scores and grades was weak for the

data under consideration in this study. The results of this study

lend support to criticism of using standardized test scores as a sole

means of evaluation in the placement process.

Other factors that could be used in placement strategies were

sought. Based on a review of related literature and interviews with

"high-risk" students, factors necessary for successful completion of

college-level courses were identified. High school math background,

student effort, and availability of clinical evaluation techniques

were the three most promising for use.

A clinical evaluation model based on theoretical concepts from

the literature was constructed. A panel of experts concluded that

an acceptable variation of the model would not be useful given

constraints of time, resources, and personnel.


Recommendations


Heretofore, no standardized test or other single means of

evaluation has emerged as an effective placement instrument.

Mandatory placement is a relatively recent practice in Florida's

community colleges and suitable methods of accomplishing it are still

being sought. Research designed to study possible methods of

placement is needed. A combination of two or more factors would seem

to offer the most promise for effective placement, particularly in

light of the lack of a single reliable instrument.








The following variables seem to be worthy of further study:

1. high school math grades

2. other standardized tests

3. student effort/motivation

4. clinical evaluation methods

5. learning-style preference

6. affective domain/attitude toward math

Problems have been identified for each of the possible variables

listed above. However, since no acceptable, widely used placement

techniques are available, their investigation would appear to be

warranted.

Grades have been criticized as a variable for use in research

studies because of lack of standardization. This characteristic does

not seem likely to change. Even though there are problems using

grades, other studies have shown them to be useful in predicting

academic success. Particularly in mathematics, where high school courses are sequentially arranged, consideration of grades would be appropriate because information on the level of mathematics in students' backgrounds should be beneficial.

Standardized tests other than ACT and SAT may prove to be better

placement instruments. Certainly MAPS and ASSET should be studied to

determine their effectiveness as placement instruments. Other tests,

particularly competency-based content exams, should be considered as

possible placement instruments.

Quality of student effort has been identified as an important

factor. Outcomes have been predicted quite well using quality of

student effort as a variable. Further investigation in this area

should prove valuable.








Clinical evaluation techniques, in theory, seem to offer great

promise for appropriate placement. The analytical assessment of

competencies and deficiencies based on direct observation by a

competent observer would have advantages that standardized tests could

not provide. Specifically, the identification of exact areas of

deficiency in math would be possible. Considering the hierarchical

nature of mathematics, knowledge of certain deficiencies would be most

useful in the placement process.

Unfortunately, such a model for clinical evaluation in mathematics

would not be accepted by many practitioners. Specifically, the model

developed in this study was not acceptable based on opinions of

knowledgeable persons in the field. Various problems were identified

in the construction of such a model. Much of the evaluation would

greatly depend on the observer, and it would be very difficult to

maintain consistency from one observer to another. Problems also were

identified with quantification of the results of the observations.

Difficulty in defining a true hierarchy in math skills was also

discovered.

Nevertheless, if some or all of these problems could be

minimized, clinical evaluation of math skills would appear to have

possibilities for placement of students. More research in the area

is needed.

As previously noted, learning-style preference has been

considered in only a small number of the studies reported in the

literature. This area also holds potential for placement and should

be researched more fully.









The affective domain and the student's attitude toward math have

been difficult to study. Emphasis has too often been placed on the

cognitive domain. The affective domain represents a vital component

of how students learn, yet has often been neglected in instruction and

learning paradigms. Possibly because of the lack of reliable instruments to measure the affective domain, students' attitudes and emotions are not assessed and considered in placement strategies.


Implications


One rather obvious implication of this study is that it confirms the opposition of several writers to the use of standardized tests as the sole means of evaluation. Indeed, the developers of standardized tests caution against their use as the sole criterion in a placement

decision. Even so, the recently mandated cut-off scores require the

use of standardized test scores, although provisions were made for the

use of additional means of evaluation for placement. Therefore, this

study clearly implies the need for other factors which can be used for

appropriate placement.

The concept of clinical evaluation based on the hierarchical

nature of mathematics formed the theoretical basis for the

construction of the clinical evaluation model. The hierarchical

nature of mathematics has long been recognized and utilized in

curriculum design. The sequential nature of mathematics courses in

general follows a logical progression. However, the assumption that

this hierarchy would be useful for evaluation purposes has not been

demonstrated by this study. Even though mathematics must be presented

in the proper sequence for the purpose of sound instruction, students








do not reliably recall skills in the order presented. Therefore, an

evaluation model which is based on strict application of the hierarchy

would lack validity. It is an oversimplification to presume that, because a student demonstrates proficiency in certain areas of the mathematics curriculum, that student could also perform at lower levels. Similarly, it cannot be assumed that proficiency in higher levels of the cognitive domain (applications, for example) necessarily implies proficiency in lower levels such as algorithms.

The implication, then, is that for purposes of summative

evaluation, the assumed use of the hierarchy is insufficient. This

implies that for a valid summative evaluation each skill at each level

should be evaluated separately. There is evidence that a clinical

means of evaluation would be more accurate than using standardized

tests, but in either case mastery of lower levels cannot be presumed

because of demonstration of proficiency at higher levels.

This is not to say that the hierarchy cannot be used in the

placement process. If it can be demonstrated that skills presumed to

be prerequisite are indeed such, a test or clinical evaluation to

determine mastery of these prerequisite skills would be most useful

for placing students into the proper course. The narrower the range

of skills, the more detectable the prerequisites would be. For these

reasons, use of the hierarchical nature of math would be helpful in

formative evaluation such as for placement. For example, if a series

of questions that have been demonstrated to be necessary prerequisites

for, say, an algebra course were available, evaluation of students'

mastery of those skills could be used to determine readiness for the

course. This evaluation could be a standardized test or a clinical







evaluation, although again the clinical evaluation is recommended

given adequate resources and personnel.

In conclusion, then, using the hierarchy to assume proficiency in lower skills on the basis of proficiency in higher skills has not been demonstrated to be valid. Therefore, use of the

mathematical hierarchy for summative evaluation (such as CLAST) is not

recommended. However, using the hierarchical and sequential nature of

mathematics shows promise for use in formative evaluation such as

placement.













APPENDIX A

INTERVIEW GUIDE



1. Identification of caller.

2. Would you be willing to answer a few questions regarding the math
course you took in the fall (or summer) of 1984?

3. You've been identified as doing well in the course (name course
specifically). The questions relate to why you feel you were able
to successfully complete the course. What reasons would you give
for your success?

4. What other factors would you consider important?

5. Any others? What about ? (refer to checklist)

6. About how many hours per week did you study for this course?

7. Any other reasons that you can think of?

8. Thank you for your help.


Checklist

Reasons or factors given by students for successful completion.

INSTRUCTOR

HIGH SCHOOL PREPARATION

TUTOR

WEEKLY HOURS OF STUDY

MATH LAB

OTHER













APPENDIX B

SUBJECT BY LEVEL MATRIX



                                   LEVELS

              ALGORITHMS    CONCEPTS    GENERALIZATIONS    PROBLEM-SOLVING

SUBJECTS

   ARITHMETIC

   ALGEBRA

   GEOMETRY

   STATISTICS
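Read as a data structure, the matrix is simply a blank subject-by-level grid on which evaluation results are recorded. A minimal sketch in Python follows; the names and the use of None for "not yet evaluated" are assumptions, not part of the study.

# Subject-by-level recording grid; None marks a cell not yet evaluated.
LEVELS = ["Algorithms", "Concepts", "Generalizations", "Problem-Solving"]
SUBJECTS = ["Arithmetic", "Algebra", "Geometry", "Statistics"]
matrix = {subject: {level: None for level in LEVELS} for subject in SUBJECTS}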













APPENDIX C

SUGGESTED STEPS FOR CLINICAL EVALUATION



1. The evaluator presents a problem pertaining to the skill that

is to be evaluated. The student should solve the problem, listing

the steps taken and the answer. The evaluator examines each step as

well as the final answer.

2. The evaluator presents a solution and answer to a problem

that pertains to the skill; however, the solution steps and answer

contain an error. The student is to identify the error or errors and

make corrections.

3. The student is asked to formulate a problem/question that

requires use of the skill under consideration, then list the solution

steps and the answer.

Some or all of these recommended steps may be used to initiate

discussion and thus provide information concerning the student's

understanding and competence. Other techniques may be used, such as a

related line of questioning, particularly if the evaluator senses the

need for such. Creativity on the part of the evaluator would be

beneficial.

It is here that the holistic approach to evaluation, as well as use of the hierarchical nature of mathematics, is necessary.
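Purely as an illustration, the three suggested steps can be read as an ordered list of probes applied to a single skill. The sketch below records only the order of the probes and what the evaluator examines in each; all names are hypothetical, since the evaluation itself is interactive and holistic rather than automated.

from dataclasses import dataclass


@dataclass
class Probe:
    presents: str   # what the evaluator presents or asks
    examines: str   # what the evaluator examines in the response


def probes_for(skill):
    """The three suggested steps, in order, for one enumerated skill."""
    return [
        Probe("a problem requiring the skill",
              "each solution step and the final answer"),
        Probe("a worked solution containing a deliberate error",
              "identification and correction of the error(s)"),
        Probe("a request to formulate a problem requiring the skill",
              "the formulated problem, its solution steps, and answer"),
    ]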













REFERENCES


Allen, L.R. (1981). Fundamental algebra: Increasing mastery and
retention in college. [Position paper] Community College of
Allegheny County, PA. (ERIC Document Reproduction Service No.
ED 202 560)

Bersoff, D.N. (1973). Silk purses into sow's ears: The decline of
psychological testing and a suggestion for its redemption.
American Psychologist, 28, 892-898.

Bloom, B.S. (Ed.). (1956). Taxonomy of educational objectives: The
classification of educational goals. Handbook I: Cognitive domain. New York: McKay.

Clark, R.M. (1980). Summary analysis of students and grades:
Mathematics A, elementary algebra; mathematics B, plane geometry;
mathematics C, trigonometry; mathematics D, intermediate algebra.
Reedley, CA: Kings River Community College. (ERIC Document
Reproduction Service No. ED 215 744)

Clemson, E. (1978). A study of the basic skills assessment: Direct
and indirect measures of writing ability. Princeton, NJ:
Educational Testing Service. (ERIC Document Reproduction Service
No. ED 204 409)

Cohen, A.M., & Brawer, F.B. (1982). The technology of instruction.
Community and Junior College Journal, 53(1), 34-37.

Corcoran, M., & Gibb, E.G. (1961). Appraising attitudes in the
learning of mathematics. In The National Council of Teachers of
Mathematics, Evaluation in mathematics (pp. 105-122).
Washington, DC: NCTM.

Cordrey, L.J. (1984). Evaluation of the skills prerequisite system
at Fullerton College (a two year follow-up). Fullerton, CA:
Fullerton College. (ERIC Document Reproduction Service No. ED
244 663)

Cross, K.P. (1971). Beyond the open door. San Francisco: Jossey-
Bass.

Cross, K.P. (1976). Accent on learning. San Francisco: Jossey-
Bass.

Davidman, L. (1981). Learning style: The myth, the panacea, the
wisdom. Phi Delta Kappan, 62, 641-645.








Domino, G. (1970). Interactive effects of achievement orientation
and teaching styles on academic achievement. ACT Research Report
39, pp. 1-9.

Dubois, P.H. (1972). [Review of Scholastic Aptitude Test] in Buros,
O.K. (Ed.), The seventh mental measurements yearbook. Highland
Park, NJ: The Gryphon Press.

Dunn, K.J., & Dunn, R.S. (1979). Learning styles/teaching styles:
Should they . . . can they . . . be matched? Educational
Leadership, 36, 238-244.

Dunn, K.J., Dunn, R.S., & Price, G.E. (1977). Diagnosing learning
styles: A prescription for avoiding malpractice suits.
Phi Delta Kappan, 58, 418-420.

Dunn, K.J., Dunn, R.S., & Price, G.E. (1981). Learning styles:
Research vs. opinion. Phi Delta Kappan, 62, 645-646.

Easton, J.Q., Barshis, D., & Ginsberg, R. (1983-1984). Chicago
colleges identify effective teachers, students. Community
and Junior College Journal, 54(4), 27-31.

Evans, D.N. (1975). Standards are needed for crt's. Educational
Leadership, 32, 268-270.

Farr, B.J. (1971). Individual differences in learning: Predicting
one's more effective learning modality. Dissertation Abstracts
International, 1332A.

Ferguson, M. (1980). The aquarian conspiracy. Los Angeles: J.P.
Tarcher.

Ferster, C.B., & Perrott, M.C. (1968). Behavior principles. New
York: Meredith.

Fincher, C. (1974). Is the SAT worth its salt? An evaluation of the
use of the Scholastic Aptitude Test in the university system of
Georgia over a thirteen-year period. Review of Educational
Research, 44, 293-305.

Florida MAPS: Technical manual. (1984). Princeton, NJ: Educational
Testing Service.

Fouche, R.S. (1961). Overview and practical interpretations. In The
National Council of Teachers of Mathematics, Evaluation in
mathematics (pp. 167-180). Washington, DC: NCTM.

Ginsburg, H. (1975, April). Talking with children: An alternative
to testing. Educational Technology, 15(4), 41.

Glaser, R. (1963). Instructional technology and the measurement of
learning outcomes. American Psychologist, 18, 519-521.








Goldman, R.D., & Slaughter, R.E. (1976). Why college grade point
average is difficult to predict. Journal of Educational
Psychology, 68, 9-14.

Haase, M., & Caffrey, P. (1983). Assessment procedures, Fall, 1982
& Spring, 1983 (Semi-annual research report, part I). Sacramento,
CA: Sacramento City College. (ERIC Document Reproduction
Service No. ED 231 494)

Halpin, G., Halpin, G., & Schaer, B.B. (1981, March). Research on
writing: A search for objective measures related to holistically
scored essays. Paper presented at the annual meeting of the
Eastern Educational Research Association, Philadelphia, PA.
(ERIC Document Reproduction Service No. ED 208 042)

Hambleton, R.K., Swaminathan, H., Algina, J., & Coulson, D.B. (1978).
Criterion-referenced testing and measurement: A review of
technical issues and developments. Review of Educational
Research, 48, 1-47.

Haney, W. (1980). Trouble over testing. Educational Leadership, 37,
640-650.

Henderson, L.N., Jr. (1983). Application of the theory of
incrementalism to statutory changes in the open door philosophy
of Florida's community colleges, 1957 to 1981 (Doctoral
dissertation, University of Florida, Gainesville, 1982).
Dissertation Abstracts International, 42, 4309A.

Hills, J.R. (1971). Use of measurement in selection and placement.
In R.L. Thorndike (Ed.), Educational measurement (pp. 680-732).
Washington, DC: American Council on Education.

Hunt, D.E. (1981). Learning style and the interdependence of
practice and theory. Phi Delta Kappan, 62, 647.

Justiz, M.J. (1985). Involvement in learning: The three keys.
Community and Junior College Journal, 55(7), 23-28.

Keesling, J.W. (1974). Empirical validation of criterion-referenced
measures. In C.W. Harris, M.C. Alkin, & W.S. Popham (Eds.),
Problems in criterion-referenced measurement. Los Angeles:
Center for the Study of Evaluation, University of California.

Kline, W.E. (1972). [Review of American College Testing Program] in
Buros, O.K. (Ed.), The seventh mental measurements yearbook.
Highland Park, NJ: The Gryphon Press.

Koos, L.V. (1970). The community college student. Gainesville:
University of Florida Press.








Kopfstein, R.W. (1980, March). Study skills evaluation: Why does
who do what and how? Paper presented at the Annual Conference of
the Western College Reading Association, San Francisco. (ERIC
Document Reproduction Service No. ED 187 406)

Kuliecke, M.J., Lloyd, J.S., & Mathis, B.C. (1982, March). The
competence and performance of foreign medical graduates:
Research and implications. Paper presented at the annual meeting
of the American Educational Research Association, New York.

Larson, J.R., & Scontrino, M.P. (1976). The consistency of high
school grade point average and the verbal and mathematical
portions of the Scholastic Aptitude Test of the CEEB, as
predictors of college performance: An eight year study.
Educational and Psychological Measurement, 36, 439-443.

Levine, A. (1977). Understanding psychology (2nd Ed.). New York:
Random House.

Linthicum, D.S. (1980). Dundalk Community College developmental
education research project. Baltimore, MD: Dundalk Community
College. (ERIC Document Reproduction Service No. ED 206 332)

Longstreet, W.S. (1975). The grading syndrome. Educational
Leadership, 32, 243-246.

McCabe, R.H., & Skidmore, S.B. (1983). Miami-Dade: Results justify
reforms. Community and Junior College Journal, 54(1), 26-29.

McClave, J.T., & Dietrich, F.H., II. (1982). Statistics (2nd Ed.).
San Francisco: Dellen.

McClelland, D.C. (1973). Testing for competence rather than for
intelligence. American Psychologist, 28, 1-14.

Medsker, L.L., & Tillery, D. (1971). Breaking the access barriers.
New York: McGraw-Hill.

Nolan, E.J. (1976). The relationship between ACT sub-test scores
and grades earned: A correlational study.
Southern West Virginia Community College. (ERIC Document
Reproduction Service No. ED 131 902)

Novick, M.R., & Lewis, C. (1974). Prescribing test length for
criterion-referenced measurement. In C.W. Harris, M.C. Alkin, &
W.S. Popham (Eds.), Problems in criterion-referenced measurement.
Los Angeles: Center for the Study of Evaluation, University of
California.

O'Reilly, P., Vogler, D.E., & Asche, F.M. (1980). Evaluating
performances: Implementing competency based education in
community colleges. Virginia Polytechnic Institute and State
University, Blacksburg, VA. (ERIC Document Reproduction Service
No. ED 195 302)








Pace, C.R. (1980). Measuring the quality of student effort. Current
Issues in Higher Education, 2, 39-46.

Palow, W.P. (1979, April). Technology in teaching mathematics: A
computer managed, multimedia mathematics learning center. Paper
presented at the Annual Meeting of the National Council of
Teachers of Mathematics, Boston. (ERIC Document Reproduction
Service No. ED 184 609)

Popham, W.J. (1975). Educational evaluation. Englewood Cliffs, NJ:
Prentice-Hall.

Popham, W.J., & Husek, T.R. (1969). Implications of criterion-
referenced measurement. Journal of Educational Measurement, 6,
1-9.

Price, F.W., & Kim, S.H. (1976). The association of college
performance with high school grades and college entrance test
scores. Educational and Psychological Measurement, 36, 965-970.

Quinto, F., & McKenna, B. (1977). Alternatives to standardized
testing. Washington, DC: National Education Association,
Division of Instruction and Professional Development. (ERIC
Document Reproduction Service No. ED 190 591)

Reap, M.C. (1979, February). A community college user's approach
to American College Testing data. Paper presented at the
Southwest Educational Research Association, Houston, TX. (ERIC
Document Reproduction Service No. ED 207 615)

Roueche, J.E. (1980). Holistic literacy in college teaching. New
York: Media Systems.

Schade, H.C. (1977). The ability of the ACT and MCPT to predict
the college grade point average (Institutional Research Report
3-77). Neosho, MO: Crowder College. (ERIC Document Reproduction
Service No. ED 188 683)

Schinoff, R.B. (1982). No nonsense at Miami-Dade. Community and
Junior College Journal, 53(3), 34-44.

Shugars, D.A., May, K.N., & Vann, W.F. (1981). Comprehensive
evaluation in a preclinical restorative dentistry technique
course. Journal of Dental Education, 45, 801-803.

Sueltz, B.A. (1961). The role of evaluation in the classroom. In
The National Council of Teachers of Mathematics, Evaluation in
mathematics (pp. 7-20). Washington, DC: NCTM.

Thornton, J.W., Jr. (1966). The community junior college (2nd Ed.).
New York: John Wiley & Sons.

Travers, R.M.W. (1969). An introduction to educational research (3rd
Ed.). New York: Macmillan.







Wallace, W.L. (1972). [Review of Scholastic Aptitude Test] in Buros,
O.K. (Ed.), The seventh mental measurements yearbook. Highland
Park, NJ: The Gryphon Press.

Wattenbarger, J.L. (1971). The expanding roles of the junior and
community colleges. In W.K. Ogilvie & M.R. Raines (Eds.),
Perspectives on the community junior college (pp. 52-56). New
York: Meredith.

Wiener, S.P. (1985). Through the cracks: Learning basic skills.
Community and Junior College Journal, 55, 52-54.

Wilson, J.W. (1971). Evaluation of learning in secondary school
mathematics. In B.S. Bloom, J.T. Hastings, & G.F. Madaus (Eds.),
Handbook on formative and summative evaluation of student
learning (pp. 646-695). New York: McGraw-Hill.

Wood, J.P. (1980). Mathematics placement testing. New Directions
for Community Colleges, 31, 59-64.














BIOGRAPHICAL SKETCH


Robert Norman McLeod was born August 5, 1950, in Jacksonville,

Florida. The first of four children of H.F. and the late Norma B.

McLeod, he moved with his family to Miami where he completed

elementary and secondary education in the public schools of Dade

County.

He attended the University of Florida from 1968 through 1972,

receiving the B.A. with honors, majoring in psychology. After

returning to Dade County, he taught psychology, mathematics, and

reading there for six years, during which time he earned the Master's

and Ed.S. degrees from Nova University.

In 1980 he was married to Jackie Barbara Polly. They moved to

Ocala in 1981 as work began on the Ph.D. Two sons, Charles Robert and

Travis Gordon, were born in 1982 and 1984, respectively. He has

taught mathematics in the Marion County school system since August,

1981.











I certify that I have read this study and that in my opinion it
conforms to acceptable standards of scholarly presentation and is
fully adequate, in scope and quality, as a dissertation for the degree
of Doctor of Philosophy.



James L. Wattenbarger, Chairman
Professor of Educational Leadership







I certify that I have read this study and that in my opinion it
conforms to acceptable standards of scholarly presentation and is
fully adequate, in scope and quality, as a dissertation for the degree
of Doctor of Philosophy.



John M. Nickens, Cochairman
Professor of Educational Leadership







I certify that I have read this study and that in my opinion it
conforms to acceptable standards of scholarly presentation and is
fully adequate, in scope and quality, as a dissertation for the degree
of Doctor of Philosophy.



Ernest H. St. Jacques
Associate Professor of Educational
Leadership



