Information perceived as useful for program quality-evaluation decision making


Material Information

Title:
Information perceived as useful for program quality-evaluation decision making by administrators in Florida community colleges
Physical Description:
x, 181 leaves : ; 28 cm.
Language:
English
Creator:
Rathburn, Carlisle Baxter, 1957-
Publication Date:
1982

Subjects

Subjects / Keywords:
Community colleges -- Administration -- Florida   ( lcsh )
Curriculum evaluation   ( lcsh )
Quality assurance -- Decision making   ( lcsh )
Educational Administration and Supervision thesis Ph. D   ( lcsh )
Dissertations, Academic -- Educational Administration and Supervision -- UF   ( lcsh )
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1982.
Bibliography:
Includes bibliographical references (leaves 174-180).
Statement of Responsibility:
by Carlisle Baxter Rathburn III.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 020753391
oclc - 09551992
System ID:
AA00012954:00001













INFORMATION PERCEIVED AS USEFUL
FOR PROGRAM QUALITY-EVALUATION DECISION MAKING
BY ADMINISTRATORS IN FLORIDA COMMUNITY COLLEGES








BY

CARLISLE BAXTER RATHBURN III


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY








UNIVERSITY OF FLORIDA

1982














ACKNOWLEDGEMENTS


I wish to express my sincere gratitude to the many individuals

whose support and assistance made this research study possible.

Sincere appreciation and recognition are extended to Dr. James L.

Wattenbarger, the chairman of my doctoral committee for his professional

assistance, guidance, and encouragement during the preparation of this

study. Grateful acknowledgement is also given to the members of the

committee, Dr. John M. Nickens and Dr. James H. Pitts, for their profes-

sional assistance and expertise. I wish also to thank Mr. T. Al Steuart

for his timely suggestions and editorial assistance.

Special gratitude is extended to the member colleges of the Florida

Community/Junior College Inter-Institutional Research Council (IRC) and

their institutional representatives. These persons provided valuable

assistance in the development of the study questionnaire and the collec-

tion of the data for their respective colleges. Appreciation is also

extended to the study coordinators representing non-IRC member colleges

for their assistance in the distribution and collection of the question-

naires at their respective colleges. Special thanks are also extended to

the community college administrators across the state for their invest-

ment of time and effort in this study.

To my wife, Tami, I wish to express my gratitude for

her patience, understanding, encouragement, and assistance during the

preparation of this study. My devotion belongs to her and God.
















TABLE OF CONTENTS


Page

ACKNOWLEDGEMENTS.............. ................................... ii

LIST OF TABLES...................... ............................ v

ABSTRACT.......................................................... ix

CHAPTER I INTRODUCTION................... .... ..... ... ...... 1

The Problem .. ................................................ 5
Need for the Study .............. ................... ............ 6
Delimitations and Limitations .................................. 7
Definition of Terms........................................... 8
Organization of the Study...... ............................... 9

CHAPTER II REVIEW OF RELATED LITERATURE............................. 10

Educational Quality........................................ 10
Graduate Education........................................ 12
Undergraduate Education .................................... 19
Quantifiable Approaches to Quality.......................... 23
Educational Evaluation................................. .... 26
Toward a Definition of Educational Evaluation............... 27
Contemporary Models of Educational Evaluation............... 30
Decision-Oriented Model of Educational Evaluation........... 33
Summary...................................................... 39

CHAPTER III METHODS AND PROCEDURES................................ 42

Design of the Study......................................... 42
Development of the Questionnaire............................. 43
Survey Population............................................ 47
Collection of the Data.......................................... 47
Analysis of the Data......................................... 49

CHAPTER IV RESULTS............................................... 51

Introduction .. ............................................... 51
Description of Respondents................................... 52
Results for All Respondents............................. ..... 55
Results for Program Areas.................................... 70
Advanced and Professional Program Area...................... 73
Occupational Program Area.................................. 81











Student Services Area...................................... 96
Developmental Program Area.... .......................... 108
Community Instructional Services Program Area............... 117
Summary ........ ........................... ....... 130

CHAPTER V SUMMARY, CONCLUSIONS, RECOMMENDATIONS ................... 131

Summary ...................................................... 131
Conclusions .................................................... 134
Recommendations ............................................... 136
Recommendations for Further Study ............................ 137

APPENDICES

A. DESCRIPTIONS OF THE PROGRAM AREAS ........................ 138

B. PERSONS INCLUDED IN REVIEW PANEL USED IN REFINING THE
LIST OF PROGRAM CHARACTERISTICS AND THE STUDY QUESTIONNAIRE 139

C. QUESTIONNAIRE......................... ..................... 141

D. POSITION CODES WITH FREQUENCIES USED IN THE CATEGORIZATION
OF RESPONDENTS BY PROGRAM AREA............................ 153

E. RANKS AND MEANS FOR ALL PROGRAM CHARACTERISTICS FOR ALL
RESPONDENTS AND FOR RESPONDENTS BY PROGRAM AREA........... 156

REFERENCES....................................................... 174

BIOGRAPHICAL SKETCH ................................................ 181














LIST OF TABLES

Table Page

1 Frequencies for All Respondents by Sex, Degrees Held,
Years at Present College, Years in Present Position,
Years in Community College Education, and Years in
Other Than Community College Education by Self-Report..... 53

2 Number of Respondents Per Program Area and Corresponding
Percentages of All Respondents ........................... 54

3 Ranks for Program Characteristics in the Upper Quartile of
Mean Usefulness-Ratings for All Respondents With Corres-
ponding Ranks for Respondents Classified into Program Areas 56

4 Distribution by Category of Program Characteristics in
the Upper Quartile of Mean Usefulness-Ratings by All Re-
spondents.............................................. 63

5 Program Characteristics Relating to Students (Question-
naire Category I) in the Upper Quartile of Mean Useful-
ness-Ratings by All Respondents With Ranks................ 64

6 Program Characteristics Relating to Faculty/Staff (Ques-
tionnaire Category II) in the Upper Quartile of Mean Use-
fulness-Ratings by All Respondents With Ranks............. 65

7 Program Characteristics Relating to Costs/Resources (Ques-
tionnaire Category III) in the Upper Quartile of Mean Use-
fulness-Ratings by All Respondents With Ranks............. 67

8 Program Characteristics Relating to General Information
(Questionnaire Category IV) in the Upper Quartile of Mean
Usefulness-Ratings by All Respondents With Ranks.......... 69

9 Information Profile of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by All Respon-
dents..................................................... 71

10 Spearman Rank-Order Correlation Coefficients for the Upper
Quartile of Mean Usefulness-Ratings by All Respondents for
Respondents Classified in the Five Program Areas.......... 73

11 Distribution by Category of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Advanced and Professional Program Area.. 74









LIST OF TABLES (continued)

Table Page

12 Program Characteristics Relating to Students (Question-
naire Category I) in the Upper Quartile of Mean Useful-
ness-Ratings by Respondents Classified in the Advanced
and Professional Program Area With Ranks.................. 75

13 Program Characteristics Relating to Faculty/Staff (Ques-
tionnaire Category II) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Advanced
and Professional Program Area With Ranks.................. 76

14 Program Characteristics Relating to Costs/Resources (Ques-
tionnaire Category III) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Advanced
and Professional Program Area With Ranks.................. 78

15 Program Characteristics Relating to General Information
(Questionnaire Category IV) in the Upper Quartile of Mean
Usefulness-Ratings by Respondents Classified in the Ad-
vanced and Professional Program Area With Ranks........... 79

16 Information Profile of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Advanced and Professional Program Area.. 82

17 Distribution by Category of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Occupational Program Area............... 83

18 Program Characteristics Relating to Students (Question-
naire Category I) in the Upper Quartile of Mean Useful-
ness-Ratings by Respondents Classified in the Occupational
Program Area With Ranks .................................. 84

19 Program Characteristics Relating to Faculty/Staff (Ques-
tionnaire Category II) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Occupation-
al Program Area With Ranks............................... 87

20 Program Characteristics Relating to Costs/Resources (Ques-
tionnaire Category III) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Occupation-
al Program Area With Ranks............................... 89

21 Program Characteristics Relating to General Information
(Questionnaire Category IV) in the Upper Quartile of Mean
Usefulness-Ratings by Respondents Classified in the Occupa-
tional Program Area With Ranks............................ 92










LIST OF TABLES (continued)

Table Page

22 Information Profile of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Occupational Program Area............... 95

23 Distribution by Category of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Student Services Area................... 97

24 Program Characteristics Relating to Students (Question-
naire Category I) in the Upper Quartile of Mean Usefulness-
Ratings by Respondents Classified in the Student Services
Area With Ranks ...................................... 98

25 Program Characteristics Relating to Faculty/Staff (Ques-
tionnaire Category II) in the Upper Quartile of Mean Useful-
ness-Ratings by Respondents Classified in the Student Ser-
vices Area With Ranks..... ...................... ... ..... 101

26 Program Characteristics Relating to Costs/Resources (Ques-
tionnaire Category III) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Student
Services Area With Ranks.................................. 104

27 Program Characteristics Relating to General Information
(Questionnaire Category IV) in the Upper Quartile of Mean
Usefulness-Ratings by Respondents Classified in the Student
Services Area With Ranks .................................. 105

28 Information Profile of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Student Services Area................... 107

29 Distribution by Category of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Developmental Program Area.............. 109

30 Program Characteristics Relating to Students (Question-
naire Category I) in the Upper Quartile of Mean Usefulness-
Ratings by Respondents Classified in the Developmental Pro-
gram Area With Ranks................................... 111

31 Program Characteristics Relating to Faculty/Staff (Ques-
tionnaire Category II) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Develop-
mental Program Area With Ranks........................... 113

32 Program Characteristics Relating to Costs/Resources (Ques-
tionnaire Category III) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Develop-
mental Program Area With Ranks........................... 114










LIST OF TABLES (continued)

Table Page

33 Program Characteristics Relating to General Information
(Questionnaire Category IV) in the Upper Quartile of Mean
Usefulness-Ratings by Respondents Classified in the De-
velopmental Program Area With Ranks....................... 116

34 Information Profile of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Developmental Program Area.............. 118

35 Distribution by Category of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Community Instructional Services Program
Area.................................................... 120

36 Program Characteristics Relating to Students (Question-
naire Category I) in the Upper Quartile of Mean Usefulness-
Ratings by Respondents Classified in the Community Instruc-
tional Services Program Area With Ranks................... 121

37 Program Characteristics Relating to Faculty/Staff (Ques-
tionnaire Category II) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Community
Instructional Services Program Area With Ranks............ 123

38 Program Characteristics Relating to Costs/Resources (Ques-
tionnaire Category III) in the Upper Quartile of Mean Use-
fulness-Ratings by Respondents Classified in the Community
Instructional Services Program Area With Ranks............ 124

39 Program Characteristics Relating to General Information
(Questionnaire Category IV) in the Upper Quartile of Mean
Usefulness-Ratings by Respondents Classified in the Commun-
ity Instructional Services Program Area With Ranks........ 126

40 Information Profile of Program Characteristics in the
Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Community Instructional Services Pro-
gram Area With Ranks................................... 128














Abstract of Dissertation Presented
to the Graduate Council of the University of Florida
in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy


INFORMATION PERCEIVED AS USEFUL FOR PROGRAM
QUALITY-EVALUATION DECISION MAKING BY ADMINISTRATORS
IN FLORIDA COMMUNITY COLLEGES

By

Carlisle Baxter Rathburn III

August 1982

Chairman: James L. Wattenbarger
Major Department: Educational Administration
and Supervision

The problem of the study was to identify measures of program

quality as perceived by community college administrators through the

determination of the degree of usefulness of various types of informa-

tion (program characteristics) for program quality-evaluation decision

making. The study was based on the Stufflebeam model of educational

evaluation as the process of providing useful information for educa-

tional decision making.

The review of related literature on the Stufflebeam and other

decision-oriented models of educational evaluation indicated that the

determination of information useful in education decision making

should be the responsibility of the decision maker, in this case the

respective program administrator. Therefore, a survey research design

was chosen for the study and a questionnaire was developed to measure








administrators' perceptions of the degree of usefulness of 434 program

characteristics for program quality-evaluation decision making. A

four-point scale was used ranging from "essential" to "little or no

usefulness."

The study population consisted of administrators with instruc-

tional or student services responsibilities as identified by study

coordinators at each college. Responses were received from 450 admini-

strators representing 24 of Florida's 28 community colleges with a

response rate of 71.3%.

Using the mean ratings, ranks were calculated for each program

characteristic for all respondents and for respondents classified into

the five program areas (Advanced and Professional, Occupational,

Developmental, Community Instructional Services, and Student Services).

The upper quartile program characteristics as ranked by mean usefulness-

ratings for all respondents and for respondents classified by program

area were organized into information profiles of 11 types of information.

Spearman rank-order correlation coefficients were calculated between the

upper quartile of program characteristics for respondents in each

program area.
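To make this procedure concrete, a minimal sketch of the ranking-and-correlation steps is given below. It is illustrative only and is not the study's original analysis, which predates such tooling; the scale labels, program characteristics, program areas, and individual responses shown are invented placeholders, and the sketch assumes the scipy library for the Spearman coefficient.

from statistics import mean
from scipy.stats import spearmanr

# Four-point usefulness scale: "essential" (4) down to "little or no usefulness" (1).
SCALE = {"essential": 4, "very useful": 3, "somewhat useful": 2,
         "little or no usefulness": 1}

# ratings[area][characteristic] -> individual administrators' responses (placeholders).
ratings = {
    "Advanced and Professional": {
        "student retention rate": ["essential", "very useful", "essential"],
        "cost per FTE student":   ["very useful", "somewhat useful"],
        "faculty credentials":    ["essential", "essential", "very useful"],
        "library holdings":       ["somewhat useful", "little or no usefulness"],
    },
    "Occupational": {
        "student retention rate": ["very useful", "very useful"],
        "cost per FTE student":   ["essential", "very useful", "essential"],
        "faculty credentials":    ["very useful", "somewhat useful"],
        "library holdings":       ["little or no usefulness", "somewhat useful"],
    },
}

def mean_ratings(area):
    """Mean usefulness-rating for every characteristic in one program area."""
    return {c: mean(SCALE[r] for r in resp) for c, resp in ratings[area].items()}

def upper_quartile(means):
    """Characteristics whose mean rating falls in the top quarter of the ranking."""
    ranked = sorted(means, key=means.get, reverse=True)
    return ranked[: max(1, len(ranked) // 4)]

ap = mean_ratings("Advanced and Professional")
occ = mean_ratings("Occupational")
print("Upper quartile (Advanced and Professional):", upper_quartile(ap))

# Spearman rank-order correlation between the two areas' mean ratings,
# paired over the common list of program characteristics.
chars = sorted(ap)
rho, p = spearmanr([ap[c] for c in chars], [occ[c] for c in chars])
print(f"Spearman rho = {rho:.2f}")

In the study itself, the same steps were applied to the full set of program characteristics, to all respondents, and to respondents classified in each of the five program areas.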

The study concluded that community college administrators consid-

ered a wide variety of program characteristics essential for program

quality-evaluation decision making and that these characteristics varied across program areas. The

implication was made that quality assessment in the community college

should be conducted by program area utilizing a multivariate approach.
















CHAPTER I
INTRODUCTION


Just as equity and access dominated the attention of those involved

in higher education during the 1960s and 1970s, quality is most certainly

going to emerge as the primary concern of the 1980's (Finn, 1980). In

recent years, waning public confidence in higher education coupled with

increased fiscal constraints placed academia in a dilemma. That dilemma

fostered an emphasis on accountability that resulted in a proliferation

of evaluation activities related to higher education. A major focus of

those evaluation activities was on the maintenance and improvement of the

quality of programs and services offered by higher education institutions

within the context of broadening student access in a time of fiscal con-

straint (Craven, 1980).

Efforts to determine the quality of educational programs and ser-

vices have drawn a great deal of attention from both within institu-

tions and from state level governing bodies (Bowen, 1974; Craven, 1980).

Many state government leaders have expressed the opinion that maintain-

ing and improving quality is and will continue to be the leading issue

in higher education ("Legislators Stress Quality Improvement," 1980).

This attitude was reflected in the educational achievement goal recently

adopted by the Florida State Board of Education:

On a statewide average, educational achievement in the state
of Florida will equal that of the upper quartile of states
within five years, as indicated by commonly accepted criteria
of attainment. (Florida State Board of Education, January 20,
1981)









As with the terms equity and access, the words educational achieve-

ment and educational quality can mean all things to all people (King,

1981). If the terms are used loosely and given no definition, they

provide little counsel. If, on the other hand, they are defined too

closely, they possess at best limited use for the diverse system of

higher education (Finn, 1980).

Philosophers, statesmen, and scholars alike have attempted to define the

concept of quality. In most instances, the result of their labors was

agreement that quality was a subjective judgment by an individual

based, at least in part, on some supporting evidence (Lawrence & Green,

1980, p. 8). Researchers in higher education have noticeably refrained

from providing a constitutive definition of educational quality but have,

through research design and choice of various evaluative criteria, oper-

ationally defined the concept. The evaluative criteria include, among

others, library resources (Cartter, 1966), quality of students (Astin &

Henson, 1977), students' success (Krause & Krause, 1970), and faculty

qualifications (Blackburn & Lingenfelter, 1973). Many of the more repu-

table studies of quality education have utilized numerous evaluative

criteria in the determination of their quality rankings, but still repre-

sent a limited view of educational quality. This limited view of quality

has "consistently identified 20 to 30 outstanding institutions, leaving

them to vie with each other for the highest absolute rank in the hier-

archical structure, and virtually ignoring the rest of our colleges and

universities" (Lawrence & Green, 1980, p. 1).

The major innovative force in American higher education, the commu-

nity college, has continually rated poorly, utilizing these traditional

means of assessing quality (Bowen, 1974). Traditionally, academia has









utilized subjective evaluations by "experts" to evaluate the quality of

an institution or its programs. The bases of those evaluations were

inextricably tied to the missions and goals of those institutions which

continually rated among the highest and, for the most part, ran in oppo-

sition to the missions and goals of the community college (Fotheringham,

1978). For this reason, present definitions of quality have little util-

ity for the community college in demonstrating the quality of programs

or services they offer to their constantly changing clientele.

Harlacher described the community college as a multipurposed insti-

tution designed to meet the needs of a constantly changing society (1969,

p. 3). In attempting to meet these diverse needs, community colleges

have committed themselves to five major purposes represented by the five

major program areas of a comprehensive community college. These pur-

poses include: preparation for advanced study (Advanced and Professional

Program Areas), terminal, career-oriented studies (Occupational Program

Areas), remedial and basic education (Developmental Program Area), var-

ious credit and non-credit community education programs (Community

Instructional Services Program Area), and student development and gui-

dance (Student Services Program Area)(p. 3). Each of these program

areas serves a different clientele representing different needs, goals,

and desires (Gleazer, 1980). It is these differences that make quality

assessment in the comprehensive community college difficult. John

Gardner (1971), former Secretary of Health, Education, and Welfare,

summarized the diverse nature of the community college in relation to

quality assessment in the following:

The traditionalists might say, of course, let Princeton create
a junior college and one would have an institution of unquestion-
able excellence! That is correct, but it leads us down precisely
the wrong path. If Princeton Junior College were excellent in










the sense that Princeton University is excellent, it would not
be excellent in the most important way that a junior college can
and may be excellent. It would simply be a truncated version of
Princeton. A comparable meaningless result would be achieved if
General Motors tried to add to its line of low-priced cars by
marketing the front end of a Cadillac. (p. 33)

Gardner (p. 32) stated that if the community college is to maintain

its diverse mission in an era of increased concern for quality,

various aspects of this diversity should be honored. It was Gardner's

opinion that diversity and quality were not mutually exclusive goals;

rather, a more flexible conception of quality was needed that allowed each

program area to achieve quality in terms of its goals and objectives

(p. 32).

The challenge facing higher education is to maintain and
strengthen the quality of its programs within rela-
tively fixed resource constraints. Effective information
decision systems are increasingly critical in enabling
higher educational institutions to meet this challenge
successfully. (Craven, 1975, p. 125)

The determination of educational quality, regardless of how quality

is defined, involves decision making by program administrators. This

decision-making process requires the use of some information about the

program.

Stufflebeam, Foley, Gephart, Guba, Hammond, Merriman, and Provus

(1971) viewed educational decision making as a process involving individ-

ual and organizational values interacting with various types of infor-

mation and various options, resulting in a decision or choice. Educa-

tional evaluation was viewed as a process by which information needed to

make a particular decision was made available to the responsible deci-

sion maker. Stufflebeam et al. defined evaluation as "the process of

delineating, obtaining, and providing useful information for judging

decision alternatives" (1971, p. 40). In this light, evaluating the









quality of educational programs may be viewed as a process involving the

identification of what information about a program is perceived as most

useful to the responsible decision maker and presenting that information

in a format useful for program quality-evaluation decision making.

Within this frame of reference, quality-evaluation is not itself a determina-

tion of quality; that determination becomes a decision on the part of a

responsible individual (Alkin & Fitz-Gibbon, 1975).

Stufflebeam et al. (1971) divided the evaluation process into three

basic steps: delineating, obtaining, and providing. The delineating

phase was the primary operational step and involved the identification of

the most useful information. Stufflebeam et al. stressed the importance

of input from the potential decision maker in the determination of perti-

nent evaluative information. Stufflemeam et al. stated that the deter-

mination of useful information "can be obtained by the evaluator only in

interaction with his client [decision maker]" (p. 41). Alkin (1969)

maintained a view of evaluation similar to that of Stufflebeam et al. and

postulated that the process of selecting the appropriate information is

the pivotal step in any evaluation process. This study focused on the

delineation phase of evaluation in relation to one particular type of

decision: the determination of quality. The basis of this study was the

model of educational evaluation and decision making described by

Stufflebeam et al. This model henceforth will be referred to as the

Stufflebeam model of educational evaluation.

The Problem

The problem in the study was to utilize the Stufflebeam model of

educational evaluation to identify measures of quality for use in Florida

community colleges as indicated by administrators' perception of the









usefulness of various types of information (program characteristics) for

program quality-evaluation decision making. The study viewed educational

quality as a value judgment or decision on the part of an individual.

This study was based on the Stufflebeam model of educational evaluation,

in which the primary step is the delineation of information useful for

educational decision making. The particular decision of concern in the

study was the assessment of program quality with the responsible decision

maker being the respective program administrator.

Specifically, the study proposed to:

1. Identify what program characteristics were considered most useful

for program quality-evaluation decision making by administrators in

Florida public community colleges.

2. Identify what program characteristics were considered most useful

for program quality-evaluation decision making for administrators repre-

senting the five program areas of a comprehensive community college.

(See Appendix A for a description of the program areas.)

3. Develop information profiles consisting of the program character-

istics considered most useful for program quality-evaluation decision

making for each program area.

4. Determine if community college administrators representing the

five program areas differed in the information they identified as most

useful for program quality-evaluation decision making.

Need for the Study

The premise of the study was that a gap existed in the research

concerned with the information requirements of administrators in the

community college system of Florida who make program quality-evaluation

decisions. With the rapid growth in the areas of data processing and










computerized management information systems, the process by which to

deliver the information necessary for program quality-evaluation

decision making is available. The gap lies in the determination of what

information is considered most useful by various decision makers for

their particular situations. The study was designed to identify the

information needs of various decision makers in making program quality-

evaluation decisions and to produce profiles of program characteristics

to facilitate the quality-evaluation decision making process through the

organization of timely and relevant information. These quality-evalua-

tion information profiles will be especially suited for decision making

in today's comprehensive community college.

Delimitations and Limitations

The study was confined to personnel in Florida's public community

colleges who have some instructional or student personnel responsibili-

ties and who are classified by their institutions as "Executive, Admini-

strative, and Managerial personnel" under Part 3 of the "Personnel and

Salary Report (SA-1)" in the Community College Management Information

System Procedures Manual (Division of Community Colleges, 1980, pp.

10.1-10.2). The data were collected by means of a survey instrument

(questionnaire) and represented the expressed opinions of the administra-

tors being surveyed. The results of this study are descriptive of the

situation in Florida's public community colleges, although the findings

may be applicable to similar community colleges or community college

systems in other parts of the nation.

Several factors limited this study:

1. Since an individual's attitudes and perceptions constantly

change, the perceptions identified in this study were reflective only









of the time period during which this study was conducted.

2. The instrument utilized to gather information for the study was

developed for this particular study. Some administrators may have

interpreted various items differently from the meaning intended by the

researcher. Face validity for the questionnaire was

established.

Definition of Terms

For the purpose of this study, terms used herein were defined as

follows:

1. Evaluation: the process of providing timely and relevant infor-

mation for decision making.

2. Program Quality-Evaluation Decision Making: the process,

involving the use of relevant information, leading to a judgment of

the quality of a program by a responsible administrator.

3. Program Areas: the five basic operational areas of a compre-

hensive community college in Florida, including the four academic areas

of Advanced and Professional, Occupational, Community Instructional

Services, and Developmental, and the Student Services Program Area

(Division of Community Colleges, 1981b, p. 6). Each of these program

areas is described in Appendix A.

4. Program Characteristic: any information relating to or describ-

ing a program or service of a community college.

5. Usefulness: the determination of the serviceability or

utility of a program characteristic in making a judgment about the

quality of a program.







Organization of the Study

The study was organized into five chapters. Chapter I contained

an introduction, definition of terms, a statement of the problem, and

the delimitations and limitations of the study. Chapter II provided

a review of related literature, including a discussion of the decision-

making model of educational evaluation which was the basis of the

approach to quality-evaluation used in this study. The second chapter

also included a short discussion of higher education's attempts to

address this elusive issue of quality. Chapter III provided a discus-

sion of the development of the questionnaire and the methodology utilized

in the study. Chapter IV presented the results of the study. Chapter V

contained a summary of the study, the conclusions, and recommendations.















CHAPTER II
REVIEW OF RELATED LITERATURE


In reviewing the literature in this area of quality education, two

of the most frequently discussed topics in higher education emerged for considera-

tion: educational quality and educational evaluation. This chapter re-

viewed attempts to address the issue of quality in higher education. It

also included a discussion of the concept of educational evaluation with

special emphasis on the decision-oriented model of evaluation which was

the basis for the methodology utilized in this study.

Educational Quality

Quality ... you know what it is, yet you don't know what it
is. But that's self-contradictory. But some things are bet-
ter than others, that is, they have more quality. But when
you try to say what the quality is, apart from the things that
have it, it all goes poof! There's nothing to talk about.
But if you can't say what Quality is, how do you know what it
is, or how do you know that it even exists? If no one knows
what it is, then for all practical purposes it doesn't exist
at all. But for all practical purposes it really does exist.
What else are the grades based on? Why else would people pay
fortunes for some things and throw others in the trash pile?
Obviously some things are better than others, but what's
the betterness? So round and round you go, spinning
mental wheels and nowhere finding anyplace to get traction.
What the hell is Quality? What is it? (Pirsig, 1974, p. 184)

The quandary described by Pirsig provides an appropriate summary of

the attempts to address the issue of quality in higher education. If

by no other means, educators intuitively recognize a substantial vari-

ance in program and institutional quality among the diverse institu-

tions that comprise the American system of higher education. Yet studies









conducted by different researchers for different reasons in different set-

tings using different methodologies have resulted in a variety of qual-

ity attributes providing little assistance in operationally defining qual-

ity (Lawrence & Green, 1980).

During a recent Southern Regional Educational Board Symposium, SREB

President W.L. Goodwin addressed the problem of defining quality:

... a part of our problem in higher education is that too often
we have confused quality with prestige. ... We need to
increase the understanding that quality education is not a
monopoly of a few dozen universities in the nation, but is
attainable by all types of higher education institutions.
("Legislators Stress Quality," 1980, p. 3)

Dr. Maxwell King, President of Brevard Community College in Florida, in

a recent message to his faculty, made the following comments on educa-

tional quality:

Quality in education is not an absolute. It can only be eval-
uated in terms of arbitrarily determined standards, and these
in turn depend partly on subjectively formulated aims and
partly on objective statistical procedures. ... Education
is quality education to the extent that it meets the needs of
the people being served. (King, 1981, p. 1)

These two quotes are representative of the general view of quality

in higher education. That view is vague, subjective, and broad. On one

hand, such a perspective on quality has limited use in that it provides

little guidance for educational improvement. On the other hand, it is

a perspective which maintains maximum flexibility, which is needed con-

sidering the diversity found in higher education today.

This section of the literature review is presented in three parts.

The first part reviews the major reputational assessments of graduate

programs. These studies have formed the basis of attempts to investi-

gate the quality issue in higher education. The second part provides

an overview of quality assessment at the undergraduate and two-year









college level. The final part reviews those studies designed to identify

quantifiable indicators of quality.

Graduate Education

Beginning with Hughes (1925) and continuing through the prestigious

American Council on Education (ACE) sponsored studies (Cartter, 1966;

Roose & Anderson, 1970), reputational ratings of graduate programs have

constituted the basis of attempts to address the issue of quality in

higher education. The methodology incorporated in a vast majority of

these studies involved a peer review, in which programs were rated by

eminent faculty in the same discipline, as experts, and their ratings

reflected the quality of graduate education and research in the system.

These studies attempted to identify the outstanding research and teach-

ing institutions by program and have consistently identified 20 to 30

institutions, virtually ignoring the balance of the system (Lawrence &

Green, 1980, p. 2).

Using a panel of distinguished scholars from each field, Hughes

(1925) conducted the first comprehensive reputational study of graduate

programs. At the time of this study, only 65 American universities

awarded the doctoral degree. Hughes ranked 38 of these universities in

20 graduate disciplines according to the number of outstanding scholars

each employed. During the next decade the number of American universi-

ties awarding the doctoral degree had nearly doubled. This prompted a

second Hughes study (1934) which ranked 59 universities in 35 discip-

lines according to the quality of facilities and staff for the prepara-

tion of doctoral candidates. The stated purpose of both of the Hughes

studies was to educate undergraduate students about various graduate

programs. These studies went well beyond this purpose in establishing









procedures for quality ratings including the identification of the na-

tion's leading institutions through numerical ranks based upon the infor-

mal opinions of academicians.

For the next 20 years, the Hughes studies were regarded as authori-

tative. It was not until Keniston's (1959) work that an attempt was

made to update or validate the Hughes studies. Using department chair-

men selected from the institutional members of the American Association

of Universities as raters, Keniston ranked 24 graduate programs based on

a combined measure of doctoral program quality and faculty quality.

These rankings were subsequently used to produce a rank-ordered list of

the top 20 institutions which were compared with Hughes' results.

The major weakness of the Hughes and Keniston studies, according to

Cartter (1966), was the geographical and rater biases which were not

controlled. Other flaws in these studies noted by Cartter included the

failure to distinguish measures of faculty quality from measures of edu-

cational quality, the failure to account for the biases of raters toward

their alma maters, and the choice of department chairmen as raters. It

was Cartter's opinion that the chairmen were not necessarily the most

distinguished scholars, were not typical of their peers in age, specializa-

tion, or rank, and tended to be more conservative and thus to favor the

traditional institutions.

These criticisms were accounted for in Cartter's design of the ACE

studies in which great care was taken to assure the representation of

various institutions and raters from all geographical areas. Cartter

surveyed 106 institutions representing 1,163 graduate programs which

resulted in rankings of 29 disciplines. The over 4,000 survey respon-

dents included both senior and junior scholars as well as department









chairmen. The respondents were requested to rate each doctoral program

in their area of study from an alphabetical list of the institutions on

two components: quality of graduate faculty and effectiveness of the

doctoral program. To support the representativeness of the raters, the

respondents were requested to supply basic biographical information.

The leading departments were ranked separately on the basis of the rat-

ers' responses on each of the components. In most disciplines, the

rankings for each component were very similar. Whenever the discipline

areas overlapped, Cartter compared his ratings with those of Hughes

(1925) and Keniston (1959).

Cartter also compared his ratings with various objective measures.

He found that his rankings correlated highly with Bowker's (1964), who

used enrollment of graduate award recipients in institutional programs

as a criterion. Cartter found a high correlation between his ratings

and other institutional measures such as faculty salaries, library re-

sources, and publication indices. Cartter used these relationships as

a primary point in his argument supporting peer ratings for quality

assessment.

Cartter was not willing to aggregate departmental ratings to produce

institutional ratings as was done by Hughes and Keniston. It was

Cartter's opinion that this was inappropriate for three reasons. First,

not every institution considered offered doctorates in every field.

Second, it was impossible to assign weights to the various fields.

Third, various departments represented various specializations within

a field and as such were difficult to weight (Cartter, 1966).

Cartter's study was the premier work until 1966 in the area of qual-

ity assessment in higher education. Realizing that the continuous










changes in higher education would result in changes in the rankings of

institutions in various fields, Cartter committed himself to do a five-

year follow-up study. The 1970 ACE-sponsored Roose-Anderson study ful-

filled this commitment by essentially replicating Cartter's study. The

Roose-Anderson study ranked 130 institutions across 29 disciplines util-

izing Cartter's methodology. The ratings were based on the same two

components Cartter used in 1966: quality of graduate faculty and effec-

tiveness of the doctoral program.

The Roose-Anderson (1970) study, through its omission of the word

"quality" from its title, represented a change in the philosophy of qual-

ity assessment studies:

Since it is evident that the appraisal of faculty and
programs as reflected by their reputations rather than as
they partake of specific components of an amorphous attri-
bute called 'quality,' we have resolved to use as a title
simply a description of the book's contents, A Rating of
Graduate Programs. (p. xi)

The Roose-Anderson report presented a range of raters' scores rather

than absolute raw departmental scores and spoke in terms of quality

ranges instead of specific institutional rankings. Even with this

apparent change in philosophy, the results of the Roose-Anderson study

were very similar to Cartter's (1966) study.

Although both ACE-sponsored studies refrained from and discouraged

the aggregation of departmental scores into overall institutional rat-

ings, other researchers (Magoun, 1966; Morgan, Kearney, & Regens, 1976;

National Science Foundation, 1969; Petrowski, Brown, & Duffy, 1973) were

quick to report such aggregations. Using the reputational rating pro-

cedures refined by the ACE studies, other researchers produced similar

program or institutional rankings based on the two ACE criteria or

other similar criteria (Carpenter & Carpenter, 1970; Cartter & Solmon,









1977; Cole & Lipton, 1977; Cox & Catt, 1977; Gregg & Sims, 1972;

Margulies & Blau, 1973; Munson & Nelson, 1977).

Lawrence and Green (1980) gave considerable attention to weakness-

es in reputational ratings, the most apparent being their lack of agree-

ment on the meaning of quality. The definition of quality appeared to

be dependent upon the discipline, the program area, and the individual

rater. The lack of agreement on defining quality appeared to make pro-

gram or institutional comparisons nonsensical. Lawrence and Green ex-

pressed the opinion that higher education was far too intricate to rank

solely on the basis of one or two dimensions. "To measure them (insti-

tutions) all by the same yardstick is to do a disservice not only to

the higher education system but also to prospective students and to the

public as a whole" (p. 53).

Many of the criticisms ascribed to the reputational rating approach

involved several forms of rater bias. The first form was commonly re-

ferred to as a "halo effect" in which a rater's opinion of a particular

program was based primarily on the prestige of the institution as a

whole and not on the particular program in question. The halo effect

worked in reverse when a strong department's reputation lagged be-

hind the lower prestige of the institution within which it was located. A

second form of rater bias involved the "alumni effect" in which the

raters tended to give high scores to their respective alma maters. This

effect was compounded by the fact that the largest departments also pro-

duced the largest number of raters. A third rater bias reflected an in-

stitution's size or age in reputational ratings (Lawrence & Green, 1980,

pp. 8-10).

Dolan (1976) criticized the reputational approach because of its

tendency to restrict change and innovation through maintenance of the









status quo. Dolan expressed the opinion that subjective ratings of pro-

gram quality reflected elitist and traditionalist views of higher educa-

tion which stifled or restricted diversity including experimental pro-

grams and multi-dimensional approaches. Dolan believed that with the

increasing consumer awareness in higher education, students should be

involved in any attempts to rate graduate programs.

One advantage of reputational ratings as a method of assessing in-

stitutional quality was that those who should know best about academic

quality of a particular program or discipline could be and were often

utilized as raters. Another point in support of this methodology was

that the process usually produced results with a high degree of face

validity in that those programs or institutions that the educated general

public considered to be "quality" were often identified (Webster, 1981).

Blackburn and Lingenfelter (1973) defended the ACE reputational

ratings on the following grounds:

(1) Panel bias has been largely eliminated by the careful selec-
tion procedures of the ACE studies; (2) subjectivity cannot be
escaped in evaluation no matter what technique is used; (3) pro-
fessional peers are competent to evaluate scholarly work, the
central criterion in reputational studies; and (4) although not
a sufficient condition of general excellence, scholarly ability
is necessary for a good doctoral program. (p. 25)

Lawrence and Green (1980) summarized their opposition to reputa-

tional rankings as follows:

The unfortunate consequences of this situation are perhaps more
attributable to the higher education community's competitiveness,
the mass media's lust for sensational headlines, and the Ameri-
can public's obsession with knowing who's at the top, than to
any fault of the studies themselves. Despite their repeated
cautions against aggregating departmental scores to produce in-
stitutional scores and their constant reminders that the ratings
represent the subjective judgments of faculty and that they prob-
ably reflect prestige rather than quality, scores do get aggre-
gated, institutions do get compared with one another, and high








prestige is translated to mean educational excellence. As a
result, research and scholarly productivity are emphasized
to the exclusion of teaching effectiveness, community service,
and other possible functions; undergraduate education is de-
nigrated; and the vast number of institutions lower down in
the pyramid are treated as mediocrities, whatever their actual
strengths and weaknesses. (pp. 15-16)

One other study of graduate education quality more in line with the

approach taken in this project was conducted under the sponsorship of

the Council of Graduate Schools and the Educational Testing Service

(Clark, Hartnett,& Baird, 1976). A sample of 73 departments equally di-

vided among three fields-psychology, chemistry, and history-was sur-

veyed with the purpose of determining ways to assess quality. Four ma-

jor conclusions resulted from the study. First, timely, relevant, and

useful information (program characteristics) related to educational qual-

ity could be reasonably obtained. Second, approximately 30 program

characteristics were identified as especially useful. Third, these pro-

gram characteristics appeared to be applicable across diverse program

areas. Fourth, two clusters of program characteristics were identified:

research-oriented and educational experience indicators. The research-

oriented indicators included department size, reputation, physical and

financial resources, student ability, and faculty publications. The

educational experience indicators were concerned with the educational

process and academic climate, faculty interpersonal relations, and

alumni ratings of dissertation experiences.

The Clark et al. study used faculty, student, and alumni input in

a separate peer-rating component of the study similar in approach to

the ACE studies. The most interesting finding of this component of the

study was that reputational ratings of graduate programs had little re-

lationship to teaching and educational effectiveness as measured by the








input of the students and alumni. Clark et al. concluded that the peer

ratings were based primarily on scholarly publications with little or

no emphasis on the quality of instruction.

Undergraduate Education

Although considerably fewer studies have been conducted designed to

assess quality at the undergraduate level than at the graduate level,

those studies rating undergraduate education have demonstrated that col-

leges differ substantially in the more traditional measures of quality.

Jordan (1963), in a study involving 119 undergraduate programs, found

that those institutions which spent more on salaries for library staff

and had higher numbers of library volumes per student tended to score

higher on a quality index based upon multiple weighted factors. Brown's

(1967) study of undergraduate education ranked colleges on the basis of

eight criteria including total current income per student, proportion of

students entering graduate school, proportion of graduate students, num-

ber of library volumes per student, total number of full-time faculty,

faculty-student ratio, proportion of faculty with doctorates, and average

faculty compensation. These two studies represented approaches to under-

graduate quality assessment similar to those utilized for graduate pro-

grams. Lawrence and Green (1980) expressed the opinion that these and

similar studies (Dube, 1974; Krause & Krause, 1970; Tidball &

Kistiakowski, 1976) which used quality measures more typically associ-

ated with graduate quality assessment (e.g., publication record of stu-

dents, percent of students who finish professional schools,or terminal

graduate degrees, etc.) failed in their purpose because they did not

take into account the "special nature of the undergraduate experience"

(p. 33).







Astin, through a series of studies (1965, 1971; Astin & Henson, 1977)

approached one specific aspect of undergraduate quality which he termed

the selectivity index. Astin (1971) defined the selectivity index as a

relative measure of the academic ability of a college's entering fresh-

men (pp. 1-2). In another study involving the selectivity index, Astin

and Henson (1977) used ACT and SAT scores to approximate the selectivity

of all accredited two- and four-year institutions. Astin and Henson de-

fended this approach on the basis of its acceptance by the mainstream of

faculty and administration in higher education (p. 2). The validity of

this approach was supported by its correlations with selected institu-

tional characteristics such as student-faculty ratios (Astin & Solmon,

1979).

In a related study, Astin further developed the selectivity index

by examining the preferences of academically talented students for var-

ious institutions (Astin & Solmon, 1979). Realizing that this measure

was confounded by a number of variables such as institutional popularity

and regionalism, Astin and Solmon still maintained that a measure of an

institution's drawing power for highly able students was a valid quality

measure (p. 49).

In a later study of undergraduate education quality, Astin and Solmon

(Astin & Solmon, 1981; Solmon & Astin, 1981) expanded their view of qual-

ity to multiple criteria. This study utilized faculty members represent-

ing seven disciplines from institutions in four states (California, Illi-

nois, New York, and North Carolina) who were requested to rate institu-

tions from two lists: a national list and a state list. The state

list included those institutions in the rater's state which awarded a

minimum of five undergraduate degrees in the rater's field during 1977.










The national list was composed of 100 of the "most visible institutions

in the rater's field" (Astin & Solmon, p. 14). Each rater was asked to

evaluate each institution from both lists according to six quality cri-

teria including overall quality of undergraduate education, preparation

of students for graduate and professional school, preparation of stu-

dents for employment after college, faculty commitment to undergraduate

teaching, scholarly or professional accomplishments of faculty, and in-

novativeness of curriculum and pedagogy (Solmon & Astin, p. 24).

Utilizing a factor analysis of the mean ratings on each of the qual-

ity criteria for each of the undergraduate disciplines, Astin and Solmon

(1981) concluded that

these ratings showed that the seven fields form a single "over-
all quality" dimension. In practical terms, this means that
quality differences among fields at a given institution tend
to be minimal, and that ratings of one department may suffice
as an estimate of the quality in the other departments at the
institution. (pp. 14-15)

Considering the limited view of quality expressed in the choice of the

six quality criteria used in the study, the conclusion appeared warrant-

ed.

Probably the best known studies of undergraduate quality, the Gourman

studies (1967, 1977), provided little or no explanation of the procedures

used to arrive at the reported ratings. Scores on two sets of varia-

bles-strength of the institution's academic departments and quality

of non-departmental areas-were averaged to produce an average academic

departmental rating and an average non-departmental rating and an over-

all "Gourman rating" for each institution.

Although the Gourman ratings have been accepted as a measure of un-

dergraduate study, many of the assumptions in these ratings were ques-

tionable. Gourman assumed that 10 years were required following









graduation to produce an excellent classroom teacher and thus rated old-

er faculty higher. Gourman gave equal weight to faculty effectiveness,

public relations, library, a college's alumni association, and the ath-

letic-academic balance as measures of institutional quality. Gourman

held a bias toward larger institutions, consistently rating them higher

than smaller liberal arts colleges (Lawrence & Green, 1980). In 1977,

Gourman changed the format of his ratings, making them similar to that

of the 1970 Roose-Andersen study. In his 1977 study, Gourman rated 68

undergraduate programs, again providing no information on the procedures

used in developing the ratings.

Utilizing approaches such as those discussed previously, other re-

searchers have attempted to address the issue of undergraduate quality

(Johnson, 1978; Nichols, 1966; Solmon, 1975). Other, possibly less ac-

ademic, attempts to evaluate undergraduate quality included the popular

college guides (e.g., Hawes Comprehensive Guide to Colleges, 1978).

Webster (1981) criticized many of these attempts on the basis of their

limited view of the undergraduate experience. Central to this criticism

was the lack of emphasis on undergraduate teaching in preparation for

the job market and the overriding view of undergraduate programs serving

primarily as preparatory periods for graduate study.

Very little research has been conducted in the community/junior col-

lege setting in relation to the quality issue. In general, many of the

premises underlying traditional views of quality in higher education run

in opposition to the basic principles of the community college philoso-

phy. An example of this is the discrepancy between the selectivity in-

dex (Astin & Solmon, 1979) and the "open door" admission policy of the

community college. One of the more quoted studies which addressed the








issue of quality in the community college involved the identification

of quality indicators from peer opinions expressed in evaluations of

selected junior colleges during accreditation team visits (Walters,

1970). Walters identified 58 specific indicators from a list of 516

recommendations made by visiting accreditation teams to 126 public jun-

ior colleges over the period of 1960-1969. Most of the indicators re-

lated to college procedures, the efficiency of operations, staffing lev-

els, and organizational structure. Walters postulated that the 58 indi-

cators taken collectively described a quality public junior college al-

though only two of them were based on any specific quantitative measures.

One other study of educational quality in the two-year college, the Pike

study (1963), involved an analysis of the relationship of current expen-

ditures, enrollment, and expenditure per student to certain variables

associated with educational quality in junior colleges in Texas.

Quantifiable Approaches to Quality

In recent years, higher education researchers have explored numerous

ways of providing objective measures of educational quality. Many of

these attempts have involved correlating various objective quantifiable

measures with established rankings of institutional quality. These

quantifiable measures include, among others, institutional size (Elton

& Rose, 1972; Hagstrom, 1971), research productivity (Drew, 1975; Wispe,

1969), publication productivity (Lewis, 1968), amount of money spent

(Ousiew & Castetter, 1960), and number of library volumes (Lazarsfield

& Thielens, 1958). Many of these "correlates of prestige" (Lawrence &

Green, 1980, p. 23) used the popular ACE ratings as their basis for com-

parison. Cartter (1966), anticipating the identification of quantifi-

able quality indicators in his ratings, stated that such indicators "are

for the most part subjective measures once removed" (p. 4).








The list of factors which significantly correlated with reputational

quality ratings was lengthy. Differentiating between a correlational re-

lationship and causation, Blackburn and Lingenfelter (1973) listed the

following items as being positively correlated with the 1966 ACE ratings:

1. Magnitude of the doctoral program.
2. Amount of federal funding for academic research and develop-
ment.
3. Non-federal current fund income for educational and general
purposes.
4. Baccalaureate origins of graduate fellowship recipients.
5. Baccalaureate origins of doctorates.
6. Freshman admissions selectivity.
7. Selection of institutions by recipients of graduate fellow-
ships.
8. Postdoctoral students in science and engineering.
9. Doctoral awards per faculty member.
10. Doctoral awards per graduate student.
11. Ratio of doctorate to baccalaureate awards.
12. Compensation of full professors.
13. The proportion of full professors on a faculty.
14. Higher graduate student-faculty ratios.
15. Departmental size of seven faculty members or more. (p. 11)

Fotheringham (1978) described traditional quality indicators as in-
cluding context, faculty input, faculty-student interaction, and student

input. Fotheringham defined context as "the setting for the education-

al process" (p. 17). The context variables included such things as num-

ber of library volumes, administrative policies, and physical facilities.

Pike (1963), in his study of the relationship between 72 variables asso-

ciated with educational quality and enrollment, current expenditures,

and expenditure per student, found expenditures to be the most important

measure of context. Banghart, Kraprayoon, and Clewell (1978) identified

other context variables including curriculum, administrative practices,

and amount of external funding.

Meder (1955) defined faculty input as including the instructor's
training, skill, ability, and morale. Blackburn and Lingenfelter (1973)

included degrees, awards, faculty compensation, and post-doctoral studies















as indicators of faculty input. Other faculty input indicators included

research productivity (Hagstrom, 1971), publication productivity (Cox &

Catt, 1977), and faculty size (Balderston, 1970). The most difficult

indicators of faculty input to measure were faculty morale, vigor, cohe-

sion, and progressiveness, which Balderston (1974) suggested could only

be subjectively measured.

Faculty-student interaction has been traditionally defined as the

faculty-student ratio (Meder, 1955). This view has been expanded to

include the accessibility of the faculty (Roose & Andersen, 1970) as

well as the extent and nature of the faculty contact with students

(Fotheringham, 1978).

Student input indicators of quality have often been held as the most

valuable type of indicator. Student input has been defined as the char-

acteristics of the student at the time of admission (Fotheringham, 1978).

Blackburn and Lingenfelter (1973) proposed a more comprehensive defini-

tion simply as the students' quality. Many researchers have concluded

that not enough has been done to control for variations in student in-

put indicators when measuring various outcome indicators of quality

(Richards, Holland, & Lutz, 1966; Rock, Centra, & Linn, 1969).

Fotheringham (1978) identified three more categories of quality in-

dicators, labeling them output, student change, and intellectual climate.

Output was described as including both faculty output (publications and

other productivity measures) and student output (accomplishments of stu-

dents following graduation). The variability in the specific measures

used to assess these output indicators was reflected in the work of

Keller (1969) and Lawrence, Weathersby, and Patterson (1970).










The student change or student development indicators attempted to

assess the extent of learning that took place during the students' en-

rollment (Turnball, 1971). Ostar (1973) described this as the value-

added concept. It was his opinion that in assessment of the develop-

ment of a student, in both the cognitive and affective domains, spe-

cific attention should be given to both the student's initial abilities

and the student's goals (Ostar, 1973). Measures of student change iden-

tified by Ostar included post-graduate employment, personal achievements,

motivation, and achievements in graduate school.

Fotheringham (1978) defined intellectual climate as "an attitude

toward learning and scholarship shared by students, faculty and admini-

stration" (p. 26). Several researchers have expressed the opinion that

campus climate is of primary importance in assessing institutional qual-

ity (Astin, 1963; Boyer, 1964; Bowen, 1963). Indicators in this cate-

gory included both academic attributes, such as faculty concern for

scholarship, and non-academic attributes such as student's residential

experience, democratic participation of the students in campus affairs,

and counseling or other supplementary services.

Educational Evaluation
During the past decade, evaluation in education has become a topic

of broad scope. Many educators have failed to recognize that evaluation is a process of immense complexity and thus requires examination in its broadest perspective (Alkin, 1969). Pyatte (1970) em-

phasized the importance of evaluators in education looking beyond the

immediate problems and contemplating the intricate meanings and legiti-

mate functions that embody evaluation theory.








The dynamics of evaluation compel attention from many vantage

points. This section of the literature review is presented in three

parts. The initial part introduces the concept of educational eval-

uation through a discussion of various definitions of educational

evaluation. The second part provides a brief review of educational

evaluation from a broad perspective with special attention given to

contemporary models of educational evaluation. The third part dis-

cusses the decision-oriented model of educational evaluation which

was the basis for this study's approach to the quality issue.

Toward a Definition of Educational Evaluation

There are numerous definitions of educational evaluation in vogue

today. These definitions differ in level of abstraction and often re-

flect the specific concerns of the people who formulated them. At the

basic level, evaluation has been defined as the assessment of merit

(Popham, 1975, p. 8). Wolf (1979) found this definition in need of

further clarification as to the meaning of the terms assessment and

merit.

A more descriptive definition was offered by Cronbach (1963), who

defined evaluation as "the collection and use of information to make

decisions about an educational program" (p. 539). This definition of

evaluation was proposed initially during the curriculum development

era of the late fifties. Cronbach's studies suggested various kinds

of information that could be examined within the evaluation framework

and later analyzed and used in decision making designed for course im-

provement (Wolf, 1979).

Doll (1970) defined educational evaluation as "a broad and contin-

uous effort to inquire into the effects of utilizing educational









content and process according to clearly defined goals" (p. 361). In

terms of this definition, educational evaluation had to transcend the

levels of simple measurement techniques or the primary application of

the evaluator's values and beliefs. If evaluation was to be a compre-

hensive and continuous effort, it had to depend on "a variety of in-

struments which are used according to carefully ascribed purposes"

(Doll, 1970, p. 380).

Beeby proposed an extended definition of evaluation as "the system-

atic collection and interpretation of evidence, leading, as a part of

the process, to a judgment of value with a view to action" (Wolf, 1979,

p. 117). Wolf (1979) elaborated on the important elements of the defini-

tion. First, the term systematic implied that information needed would

be defined with precision and obtained in an organized fashion. The

second element, the interpretation of evidence, emphasized the role of

critical judgment or consideration in the evaluation process. Wolf

stated that this element was often neglected in evaluation activities.

The third element of Beeby's definition described by Wolf involved the

judgment of value. This required the evaluator to be responsible for

making judgments from his or her evaluative work about the worth of an

educational endeavor. The last element, with a view to action, intro-

duced the notion that an evaluative undertaking should be designed de-

liberately for the sake of future action (pp. 117-124).

Pyatte (1970) emphasized the importance of a rational plan element

in the definition of educational evaluation. He stated that "evalua-

tion is the deliberate act of gathering and processing information

according to some rational plan the purpose of which is to render, at

some point in time, a judgment about the worth of that on which the









information is gathered" (p. 360). According to Pyatte, six elements

are included: the agent, the object, the inputs, the plan, the time,

and the product.

In defining evaluation, Bloom, Hastings, and Madaus (1971) dis-

cussed the purpose of educational evaluation as:

1. A method of acquiring and processing the evidence needed
to improve the student's learning and the teaching;
2. Including a great variety of evidence beyond the usual
final paper and pencil examination;
3. An aid in clarifying the significant goals and objectives
of education and as a process for determining the extent
to which students are developing in these desired ways;
4. A system of quality control in which it may be determined
at each step in the teaching-learning process whether the
process is effective or not, and, if not, what changes
must be made to ensure its effectiveness before it is too
late; and
5. A tool in education practice for ascertaining whether al-
ternative procedures are equally effective or not in
achieving a set of educational ends. (p. 8)

The obvious variety in definitions of educational evaluation stemmed

from the fact that three different schools of thought have co-existed

for over 30 years (Worthen & Sanders, 1973). Stufflebeam et al. (1971)

provided an excellent discussion of three basic definitions of educa-

tional evaluation from which most others have developed. The first

definition was an early one equating evaluation with measurement (p.

10). The second definition involved the determination of the congru-

ence between performance and objectives, especially behavioral objec-

tives (p. 11). The third definition was the process commonly referred

to as professional judgment (p. 13).

Many other definitions of educational evaluation have emerged in

recent years. The most popular have been those in which evaluation

has been viewed as "a process of identifying and collecting informa-

tion to assist decision makers in choosing among available decision









alternatives" (Worthen & Sanders, 1973, p. 20). An expanded discussion

of this definition of educational evaluation is presented in the final

part of this section of the literature review.

Contemporary Models of Educational Evaluation

With the increased call for accountability in educational institu-

tions, the body of literature on educational evaluation has expanded

rapidly in recent years. Many models of educational evaluation have

emerged. There have been numerous attempts to categorize the array of

evaluation models, the most comprehensive of which were done by

Stufflebeam et al. (1971), Worthen and Sanders (1973), Anderson, Ball,

and Murphy (1975), and Gardner (1977). The more prominent of these

educational evaluation models included the measurement model, the con-

gruence model, the professional judgment model, the goal-free model,

and the decision-oriented model (Gardner, 1977).

The measurement model of evaluation as described by Gardner (1977)

equated evaluation with measurement (p. 575). In this model, evalua-

tion was viewed as the science of instrument development and inter-

pretation (p. 576). The use of measurement instruments results in

scores or other indices which are mathematically and statistically manipulated so that masses of data can be handled and comparisons made of

individual or group scores with established norms (Stufflebeam et al.,

1971, pp. 10-11). The model has been widely used and is illustrated

by the use of SAT and GRE scores. Gardner (1977) further described

the model as being based on the assumption that the phenomena to be

evaluated have significant measurable attributes and that instruments

can be designed which are capable of measuring these attributes.










Perhaps no other theory of evaluation has received more attention in

recent evaluation literature, especially in its application for the

classroom, than the congruence model. The origin of this model was most

closely associated with the work of R.W. Tyler (1950). Tyler stated

that educational objectives were essentially defined in terms of ex-

pected changes in human behavior. It followed that evaluation was the

process for determining the degree to which changes in behavior actually

took place. Gardner described this model as:

the process of specifying or identifying goals, objectives or
standards of performance; identifying or developing tools to
measure performance; and comparing the measurement data col-
lected with the previously identified objectives or standards
to determine the degree of discrepancy or congruence which
exists. (p. 577)

Probably the most widely used but least discussed model of evaluation

is the so-called professional judgment model (Stufflebeam et al., 1971,

p. 3). In this model, evaluation is professional judgment. Values or

criteria which form the basis of the judgment may or may not be explic-

itly stated. Often a commonly shared value system is assumed (Gardner,

1977, p. 574). Examples of the uses of this model include the judgments

of visiting teams of professionals in the accreditation process and the

use of peer review panels for various programs such as faculty commit-

tees passing judgment on promotion or tenure (Worthen & Sanders, 1973,

pp. 126-127).

The goal-free concept is a recent addition to the body of knowledge

on educational evaluation. This model, originally proposed by Scriven

(1972, 1973), argued that if the main objective of evaluation was to

assess the worth of outcomes, then no distinction should be made be-

tween intended and unintended outcomes and that an evaluation should be conducted without reference to a program's goals or objectives










(Gardner, 1977, p. 583). The evaluation was not totally goal-free, but

standards for comparison could be chosen from a wider range of possibil-

ities than those which might be prescribed by a program's objectives (p.

584). The final outcome of the evaluation "should be accurate, descrip-

tive, and interpretative information relative to the most important

aspects of the actual performance, effects, and attainments [of the

program being evaluated]" (p. 585).

All of the previous models of evaluation are similar in that they

include reference to the use of information and some judgment made in

relation to this information. The models vary in the emphasis placed

on these areas. Gardner discussed the merits and shortcomings of each

of these models and proposed that each one had advantages depending

upon the specific circumstances in which the evaluation occurred.

A brief guide to selecting an appropriate model, or a combination

of models, was given by Gardner. He suggested that in situations

where a high degree of objectivity is not required, time is short, a

simple evaluation is desired, and expert human resources are available,

then the professional judgment model might be most appropriate. In

situations where high objectivity, reliability, and comparability are

required, where mathematically manipulable results are desired, where

relevant measurable attributes can be identified, and valid and relia-

ble instruments can be designed and used, then the measurement approach

might be most appropriate. In situations where goals are a primary

concern, specific objectives or criteria of performance can be identi-

fied, and valid ways to assess performance can be devised and applied,

then the use of a congruency model might be most appropriate. Finally,

in situations where all observable effects are potentially of value,









human concerns are highly valued, a relatively high degree of objectiv-

ity is not required, and the situation is highly fluid or lacking well-

defined goals or objectives, then a goal-free model might be most

appropriate (pp. 591-592).

Decision-Oriented Model of Educational Evaluation

Stufflebeam and the Phi Delta Kappa National Study Committee have

been credited with the refinement of what Gardner referred to as the

decision-oriented model of educational evaluation. In this model,

"evaluators collect information and communicate this information to

someone else" (Alkin & Fitz-Gibbon, 1975, p. 1). The process by which

this information is collected is systematic and deliberate, an attempt

to obtain an unbiased assessment upon which to base an evaluation (Alkin

& Fitz-Gibbon, 1975; Guba, 1975; Stufflebeam, 1969).

In this model the results of evaluation are directed toward those

individuals who are "intimately connected with the program being eval-

uated" (Alkin & Fitz-Gibbon, p. 1) or the administrative decision mak-

ers (Gardner, 1977; Guba, 1975; Stufflebeam, 1974; Stufflebeam et al.,

1971). In this context, the role of the evaluator is to collect and

present summary information to decision makers (Alkin & Fitz-Gibbon,

p. 5). The decision-oriented model was designed to benefit decision

makers. The evaluators collect and present the information needed by

someone else who determines its worth. "Decision-facilitation evalu-

ators view the final determination of merit as the decision maker's

[individual's] province, not theirs" (Popham, 1975, p. 25).

Alkin (1969) viewed the decision-oriented model as a process con-

sisting of four steps. These steps included determining the areas of

concern for possible decisions, determining the appropriate data,










collecting and analyzing the data, and reporting the summary information

in a form useful for the decision makers. These steps were condensed

and described by Stufflebeam et al. in their definition of educational

evaluation as "the (process) of (delineating), (obtaining), and (providing) (useful) (information) for (judging) (decision alternatives)" (p. 40).

This statement contained eight elements, set off by parentheses, each of

which had significant implications for the process and techniques of

evaluation. They were defined as follows:

1. Process. A particular and continuing activity subsuming
many methods and involving a number of steps and operations.
2. Decision alternatives. Two or more different actions that
might be taken in response to some situation requiring
altered action.
3. Information. Descriptive or interpretive data about enti-
ties (tangible or intangible) and their relationships, in
terms of some purpose.
4. Delineating. Identifying evaluative information required
through an inventory of the decision alternatives to be
weighed and the criteria to be applied in weighing them.
5. Obtaining. Making information available through such pro-
cesses as collecting, organizing, and analyzing and through
such formal means as measurement, data processing, and
statistical analysis.
6. Providing. Fitting information together into systems or
subsystems that best serve the purposes of the evaluation,
and reporting the information to the decision maker.
7. Useful. Satisfying the scientific, practical, and pruden-
tial criteria of Chapter I [internal validity, external
validity, reliability, objectivity, relevance, importance,
scope, credibility, timeliness, pervasiveness, and effi-
ciency] and pertaining to the judgmental criteria to be
employed in choosing among the decision alternatives.
8. Judging. The act of choosing among the several decision
alternatives; the act of decision making. (Stufflebeam,
et al., 1971, pp. 40-43)

Stufflebeam et al. contended that evaluation was an extension of

the decision-making process. In this process, the evaluator assisted

the decision maker during each step of the decision process. The eval-

uator assisted the decision maker by helping to delineate the informa-

tion which was needed, by providing that information, and by









assisting the decision maker in the interpretation of the information.

Each of these tasks was performed in conjunction with each step of the

decision-making process (awareness, design, choice, and action) for all

types of decision questions (planning, structuring, implementing, and

recycling) in different decision settings (homeostatic, incremental,

metamorphic, and neomobilistic) (pp. 49-103).

Utilizing this orientation, Stufflebeam et al. developed what is

commonly referred to as the Context-Input-Process-Product (CIPP) model

of educational evaluation. The model discriminated between the differ-

ent settings in which decisions are made. In homeostatic settings, de-

cisions involved the maintenance of internal balance in an educational

setting. Decisions that denoted developmental activity, which had as

their purpose the continuous improvement of a program, occurred in in-

cremental settings. Neomobilistic settings were characterized by large

innovative efforts to solve significant problems. The metamorphic deci-

sion-making setting was represented by "utopian activity intended to

produce complete changes in an educational system based upon full know-

ledge of how to effect the desired changes" (Worthen & Sanders, 1973,

pp. 131-132).

The CIPP model, in addition to identifying the four decision set-

tings, also distinguished four classes of educational decisions, each

of which was serviced by a particular type of evaluation. The four

classes of decisions were planning decisions for determining objec-

tives, structuring decisions for designing procedures, implementing

decisions for utilizing, controlling, and refining procedures, and re-

cycling decisions for judging and reacting to attainments (Stufflebeam

et al., 1971, pp. 80-84). Corresponding to these four classes of









decisions were four types of evaluations-context, input, process, and

product-which provided the acronym for this model (p. 218).

In the CIPP model, context evaluation assisted the decision maker

in the determination of program objectives (Gardner, 1977, p. 581).

It defined the relevant environment of the program, described the de-

sired and actual conditions pertaining to that environment, identified

unmet needs and unused opportunities, and diagnosed the problems that

prevent needs from being met and opportunities from being used

(Stufflebeam et al., 1971, p. 218). Input evaluation provided informa-

tion for determining how to utilize resources to meet program goals

(p. 222). Process evaluation performed the service of providing peri-

odic feedback to persons responsible for implementing plans and proce-

dures (p. 229).

Product evaluation served the purpose of providing information for

assessing and interpreting program objectives, whether at the end of a

program cycle or at intermediate points relating to decisions to con-

tinue, modify, terminate, or repeat certain program activities (Gardner,

1977, p. 581). In design, the result of the CIPP model was a continu-

ous flow of systematically collected, timely, and relevant information

for decision makers who have the responsibility of interpreting the in-

formation provided. In contrast, Alkin and Fitz-Gibbon (1975) suggested

that it was the information itself, from a well-designed evaluation,

that would pass judgment.

In general terms, Stufflebeam (1969) viewed evaluation as the sci-

ence of providing information for decision making. The assumption was

made that the ultimate goal of the decision-making process was educa-

tional improvement. Educational improvement implied changes resulting









from choices selected by decision makers from various alternatives. The

process of decision making or choosing among options was firmly rooted

in the decision maker's and organization's value system. In this frame-

work, valid and reliable information was necessary to facilitate the

decision maker's judgment of the degree to which various options measured

up against a personal or organizational value system (Stufflebeam et al.,

1971, p. 38).

Stufflebeam (1968) summarized the rationale for the model in the

following statements:

1. The quality of programs depends upon the quality of deci-
sions in and about the programs.
2. The quality of decisions depends upon decision maker's
abilities to identify the alternatives which comprise
decision situations and to make sound judgments about
these alternatives.
3. Making sound judgments requires timely access to valid
and reliable information pertaining to the alternative.
4. The availability of such information requires system-
atic means to provide it.
5. The processes necessary for providing this information
for decision making collectively comprise the concept
of evaluation. (p. 6)

The University of California at Los Angeles Center for the Study of

Evaluation (CSE) developed a model of evaluation similar to the CIPP

model (Alkin, 1969). The CSE model defined evaluation as the process

of ascertaining the decision areas of concern, selecting
appropriate information, and collecting and analyzing in-
formation in order to report summary data useful to deci-
sion makers in selecting alternatives. (Gardner, 1977, p. 580)

The CSE model is similar to the CIPP model except that the former recon-

ceptualized what Stufflebeam et al. (1971) referred to as process eval-

uation (Popham, 1975). The CSE approach differentiated among the kinds

of decisions that are made at five identified stages:

1. Needs Assessment: The initial stage focuses on the pro-
vision of information regarding the extent to which edu-
cational programs are meeting their objectives.








2. Program Planning: This stage provides information regard-
ing the sorts of instructional programs that meet the pre-
determined needs.
3. Implementation Evaluation: The third stage provides infor-
mation on the degree to which the instructional program is
actually being carried out in accordance with the program
plan.
4. Progress Evaluation: This stage provides information re-
garding the extent to which the planned program is achiev-
ing its objectives.
5. Outcome Evaluation: This stage emphasizes the provision
of information regarding the general worth of the program
as reflected by the outcomes it produces. (Popham, 1975,
p. 38)

An important feature of the CSE model was that its proponents de-

veloped a wide range of instructional materials and other resources to

familiarize educators with this approach. Thus, the CSE model has in-

fluenced actual evaluation practice as much as any of the previous

models presented (Popham, 1975).

Provus (1971) developed an approach to evaluation in which the dis-

crepancies between established standards and actual performance were

closely observed. Provus described his evaluation model as including

five stages: design, installation, process, product, and cost. He

asserted that "on the basis of the comparisons made at each stage, dis-

crepancy information is provided to the program staff, giving them a

rational basis on which to make adjustments in their program" (p. 46).

The influence of the decision-oriented models of educational eval-

uation provided some of the impetus for the recent development of man-

agement information or decision information systems in higher educa-

tion. Craven (1975) described a decision information system as "any

method that provides the right decision maker with the right informa-

tion in the right form at the right time so as to facilitate the de-

cision-making process in pursuit of organizational and/or personal

goals and objectives" (p. 127). Craven went on to say:










information-if it is to be useful-must be relevant, intel-
ligible, and timely [and] every effort should be made
at the outset to secure the full support and participation
of top level decision makers in order to ensure an appropri-
ate analysis of decision-making activities and resulting in-
formation requirements. (pp. 127-132)

Craven expressed the opinion that "the process by which information re-

quirements are identified and defined is, perhaps, the most important

phase of information system development" (p. 132). Craven summarized

his case for decision information systems with the following statements:

Information that responds to those decision-making needs in a
valid, reliable, and timely manner will assist higher educa-
tional institutions during this period in making decisions
that will maintain and strengthen the quality of its programs
and faculty and that will enable them to meet the future edu-
cational needs of students, society, and scholarship. (p. 138)

In summary, the decision-oriented model of evaluation involves "a

continual exchange between evaluators and administrators regarding in-

formation needs and a continuous flow of systematically collected,

timely, and relevant information to satisfy those needs" (Gardner, 1977,

p. 582). During this process, the evaluator should maintain continual

communication with the appropriate administrator regarding what infor-

mation is needed and in what format for each decision.

Summary

In this chapter, selected related literature in the areas of quality

assessment and educational evaluation has been reviewed. Lawrence and

Green (1980) summarized higher education's attempts to assess quality

as subjective attempts to identify the best institutions. Whether these attempts were based on peer review or on the utilization of a

traditionally-based set of quantifiable indicators (which generally

correlate highly with each other and with peer reviews), they simply

ignored a vast majority of the nation's institutions of higher









education. By continuing to reinforce the traditional hierarchy of qual-

ity institutions, these ratings provide little or no impetus for improve-

ment in higher education, especially for the community college segment.

The attempts to define educational quality have been both subjec-

tive and vague. Although various research methodologies have opera-

tionally defined quality as number of library books per student, num-

ber of faculty with doctoral degrees, etc., conceptually quality has

been viewed as an individual's value judgment. Hence, the use of the

peer rating approach. In essence, educational quality has been viewed

as an individual's value judgment of an institution or program and not

a measurable attribute or characteristic.

In the process of arriving at that value judgment, evaluation takes

place. Numerous definitions or models of educational evaluation have

emerged. With the increased emphasis on management information systems

and the resulting availability of timely and relevant information, the

decision-oriented models of educational evaluation have recently re-

ceived increased support. In this approach, evaluation was viewed as

the process of identifying and providing useful information for deci-

sion making. The type of information necessary for decision making is

a function of the decision setting, the particular decision, and, most

importantly, the values of the decision maker. In this approach, the

evaluator facilitates the identification of useful information and pro-

vides this information in a format most useful to the decision maker.

It is the decision maker's responsibility to make a value judgment

based upon this information.








The approach to the educational quality issue utilized in this

study was based upon the Stufflebeam decision-oriented model of

educational evaluation. In this study, quality was viewed as an indi-

vidual's judgment of a program or service. The study proposed to iden-

tify measures of quality for Florida community colleges as determined

by administrators' perceptions of the degree of usefulness of various

types of information for program quality-evaluation decision making.















CHAPTER III
METHODS AND PROCEDURES


Design of the Study

This study was designed to identify measures of quality for use in

Florida community colleges as indicated by administrators' perceptions

of the degree of usefulness of various program characteristics for

program quality-evaluation decision making. This study was based on

the Stufflebeam model of educational evaluation as the process of pro-

viding useful information for judging among decision alternatives. The

particular decision situation addressed by this study was the determina-

tion of educational quality. The review of related literature on the

Stufflebeam model and other decision-oriented models of educational

evaluation indicated that the determination of what types of information

to be used in educational decision making should be the responsibility

of the responsible decision maker, in this case the respective program

administrator (Alkin, 1969; Craven, 1975; Stufflebeam et al., 1971).

Therefore, a survey research design was adopted for this study and a

questionnaire was developed to measure administrators' perceptions of the

degree of usefulness of various program characteristics for program qual-

ity-evaluation decision making. This study was part of a larger project

conducted by the Florida Community/Junior College Inter-Institutional

Research Council (IRC) at the University of Florida that focused upon

describing quality in Florida Community Colleges.









The following sections describe the development of the question-

naire, collection of the data, the survey population, and the analysis

of the data.

Development of the Questionnaire

In making program quality-evaluation decisions, administrators

may desire information related to many aspects or characteristics of

a program. The questionnaire used in this study contained a list of

434 program characteristics for respondents to rate for degree of use-

fulness in making program quality-evaluation decisions.

The program characteristics rated in this study were identified from

two basic areas including:

1. A review of evaluative criteria utilized to rate the quality of

programs or institutions in various quality-evaluation studies. These

studies also included those designed to identify "indicators of quality"

(e.g., Banghart et al., 1978; Fotheringham, 1978) for educational pro-

grams or institutions.

2. A review of various state and federal government reports

identifying different types of information currently being collected

and reported. The primary source in this area was the Community College

Management Information System Procedures Manual for Florida (Division of

Community Colleges, 1980) which contains copies of many reporting forms,

including the required data with formatting requirements, used for

various state and federal reports.

From those sources, a list of program characteristics was compiled.

The list was submitted for review by a panel of community college manage-

ment information specialists and institutional researchers consisting

of IRC institutional representatives for the year 1980-81 (See Appendix










B). A letter was sent to each representative with a list of the program

characteristics, requesting that the list be reviewed and characteris-

tics added, deleted, or modified in relation to their potential use in

program quality-evaluation decision making. The review resulted in the

addition of six new characteristics and the modification of various

others. The list of 434 program characteristics included in the study

questionnaire resulted from this process.

Using these program characteristics, the researcher developed a

questionnaire to collect the required data. The questionnaire was

submitted for review to the same panel of IRC representatives utilized

for the refinement of the program characteristics. The review panel

evaluated the questionnaire and provided input in the following areas:

1. Refinement of the questionnaire directions.

2. Refinement of the statements describing the program

characteristics.

3. Refinement of the organization of the characteristics.

4. Refinement of the rating scale.

5. Refinement of the questionnaire format.

6. Determination of the approximate amount of time needed for

completion of the questionnaire.

This process resulted in various modifications of the question-

naire which was sent out again for review by the panel. The final form

of the questionnaire resulted from this second review. A copy of the

questionnaire can be found in Appendix C.

The questionnaire was organized to collect data in four areas:

1. Demographic data of respondents. These data included the

respondent's name, position, college, years in present position, years









at present college, years in community college education, years in

education other than community college education, age, sex, and highest

degree held.

2. The program perspective respondents used in rating the useful-

ness of the program characteristics. The perspectives were general (no

specific program area in mind), advanced and professional, occupational,

developmental, community instructional services, student support ser-

vices, and other.

3. Usefulness-rating of the program characteristics for program

quality-evaluation decision making.

4. Opinions of respondents of the amount of time spent in program

quality-evaluation activities, the extent of their involvement in pro-

gram quality-evaluation decision making, their perceived level of expe-

rience in program quality-evaluation, and the degree to which their

position was associated with each program area.

The questionnaire consisted of five sections. Section one re-

quested respondents to print their name, current position, and name

of college. Section two contained a description of the purpose of the

study, the organization of the questionnaire, and the directions for

rating the program characteristics. The program characteristics were

organized into four categories concerning information about students,

faculty/staff, costs/resources, and general information. Examples de-

scribing the rating process were provided at the beginning of each

category. Respondents were requested to add any program characteristics

which they thought were of use but which were not included in the ques-

tionnaire.









Section two also contained a description of the four point rating

scale used to rate the program characteristics for degree of useful-

ness in program quality-evaluation decision making. The scale was:

(1) essential, (2) very useful, (3) some usefulness, and (4) little or

no usefulness. Respondents were requested to rate any program character-

istics they perceived as not applicable to their respective program or

service area with a "4." The rating scale was printed on a loose insert

providing respondents a quick reference when completing the questionnaire

(Appendix C).

Section three requested that the respondents indicate the program

perspective they would use in rating the program characteristics. Six

choices of perspectives were listed: general, advanced and professional,

occupational, developmental, community instructional services, and

student support services. An "other" choice was provided for respon-

dents to specify a perspective different from those listed. Following

section three, the respondents proceeded to rate the program character-

istics.

Section four consisted of a series of questions designed to collect

basic demographic data on the respondents. These data included years in

present position, years at present college, years in community college

education, birthdate, sex, and highest degree held.

The fifth section of the questionnaire requested respondents to

indicate their opinion of the degree to which their position was

associated with each of the program areas, the amount of time they spent

in program quality-evaluation decision making, and their level of exper-

ience in program quality-evaluation. Also respondents were requested to

add any comments regarding the design of the study, the questionnaire,

or the program quality-evaluation process at their college.









Data used in this study were collected by means of a questionnaire

designed for use in the IRC Quality Indicators Project. Data collected

by various sections of this questionnaire did not pertain directly to

the purpose of this study and were therefore not considered. In parti-

cular, this included data collected by sections three and five.

Survey Population

The population surveyed in this study included all administrators

in the community college system of Florida who make quality-evaluation

decisions regarding programs or services at their institutions. The

identification of the decision makers included in the study was the

responsibility of the designated study coordinator at each partici-

pating college. Study coordinators identified, by name and position,

the persons at their institutions who were involved in quality-evalua-

tion decisions. These persons included all administrators with some

instructional or student personnel services responsibility as identi-

fied on the institution's yearly personnel classification report (SA-1,

part 3) as administrative, managerial, or professional (Division of

Community Colleges, 1980, p. 10.1).
Collection of Data

In refining the questionnaire, the review panel was asked to

approximate the amount of time needed for its completion. The consensus

of the review panel was that approximately 45 minutes to one hour was

needed. Because of the difficulty of securing the participation of administrators in a study that required such a substantial investment of time, data-collection procedures were designed to increase the probability of obtaining their participation.









To gain publicity and support for the study, the endorsement of

the Council of Presidents of the Florida Community College System was

requested and received. Under this endorsement, a letter was sent to

each community college president describing the study and requesting

that they appoint an individual at their college to serve as a study

coordinator. Twenty-four of the 28 public community colleges in Florida

chose to participate in the project through their appointment of study

coordinators.

Study coordinators were sent a letter thanking them for agreeing to

serve and describing their role as study coordinator for their college.

The first task of the study coordinator was to identify, by name and

position, all administrators at their college who met the criteria for

participation in the project (see p. 47). Forms and self-addressed

stamped envelopes were included for their convenience in completing this

task.

Letters were sent to all administrators identified by the study

coordinators briefly describing the study and encouraging their parti-

cipation. Packets were prepared for each participating administrator

which included a cover letter, a one-page synopsis describing the

purpose of the study, the questionnaire, and a return label addressed

to their institution's study coordinator.

The second task of the study coordinators was to distribute and

collect the questionnaire. Each study coordinator was sent a letter

describing the distribution and collection process along with the pre-

pared packets for each identified administrator at his or her college.

This letter explained that the packets were to be distributed as soon as

possible to the participating administrators. The participating










administrators were requested to complete the questionnaire within 10

days and return it to their college's study coordinator by affixing the

included return label. The study coordinators were requested to allow

approximately two weeks from the date of the distribution of the ques-

tionnaires for their return and to forward, to the researcher, the

questionnaires that had been returned by that date.

With the return of the completed questionnaires, study coordinators

were sent a letter thanking them for their assistance with the study,

requesting the return of any subsequently received questionnaires, and

informing them that they were not responsible for conducting follow-up

activities. The follow-up procedure involved two steps. First, a

letter was sent to those administrators from whom questionnaires had not

been received requesting that they complete the questionnaire at their

earliest convenience and return it as soon as possible. If this pro-

cess was ineffective, a second letter was sent which included a copy

of the questionnaire and a request that the administrator complete and

return it as soon as possible. Each administrator completing and re-

turning the questionnaire was sent a letter thanking them for their in-

vestment of time and effort in the study.

When received, each questionnaire was given a position code based

on the reported position and an institutional code based on the reported

college. These codes were used to facilitate classification of the

respondents by program area. A copy of the position codes used for

classifying the respondents can be found in Appendix D.

Analysis of the Data

The data were analyzed with the assistance of the SAS (Statistical

Analysis System) statistical software package. Means were









calculated for each program characteristic for all respondents and for

respondents classified in each program area. Using the calculated

means, program characteristics were ranked for all respondents and for

respondents in each program area. Spearman rank-order correlation

coefficients were calculated for the upper quartile of program charac-

teristics ranked by the mean usefulness-ratings for all respondents

and for respondents in each program area.
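To illustrate the computations described above, the following sketch shows how mean usefulness-ratings, ranks, and an upper-quartile Spearman rank-order correlation coefficient might be produced. It is a minimal illustration written in Python rather than the SAS procedures actually used for the analysis; the function names and the form of the response data are hypothetical.

    # Illustrative sketch only; the study's analysis was performed in SAS.
    # All names and data structures below are hypothetical.
    from scipy.stats import rankdata, spearmanr

    def mean_ratings(responses):
        # responses: dict mapping a program characteristic to the list of
        # 1-4 usefulness ratings given by one group of respondents.
        return {c: sum(r) / len(r) for c, r in responses.items()}

    def upper_quartile_spearman(all_means, area_means):
        # Rank characteristics by mean rating; a lower mean indicates greater
        # perceived usefulness and therefore a better (lower) rank.
        names = sorted(all_means)
        all_ranks = rankdata([all_means[n] for n in names])   # mid-ranks for ties
        area_ranks = rankdata([area_means[n] for n in names])
        # Keep the upper quartile as ranked for all respondents, then
        # correlate those ranks with the program-area ranks.
        cutoff = len(names) // 4
        keep = sorted(range(len(names)), key=lambda i: all_ranks[i])[:cutoff]
        rho, _ = spearmanr([all_ranks[i] for i in keep],
                           [area_ranks[i] for i in keep])
        return rho

In this sketch the Spearman coefficient expresses the degree of agreement between the all-respondent ranking and a program-area ranking over the most useful quarter of the characteristics, paralleling the comparisons reported in Chapter IV.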

For all respondents and for respondents classified into the five

program areas, the program characteristics in the upper quartile of

ranked mean usefulness-ratings were organized into four categories as

they were presented in the questionnaire (program characteristics re-

lating to students, faculty/staff, costs/resources, and general infor-

mation). The differences or similarities in the program characteristics

and in the ranks of the program characteristics contained in these

groupings were discussed. For all respondents and for respondents

classified into the five program areas, the program characteristics in

the upper quartile of ranked mean usefulness-ratings were organized into

information profiles using 11 types of information for all program areas

except Student Services which required a 12th type of information. The

areas of similarities or differences in these information profiles for

each of the five program areas were discussed.















CHAPTER IV
RESULTS


Introduction

The study was designed to identify measures of quality for use in

Florida community colleges as indicated by administrators' perceptions

of the usefulness of various program characteristics for program qual-

ity-evaluation decision making. The study was based on the Stufflebeam

model of educational evaluation as the process of providing useful in-

formation for judging among decision alternatives. The particular deci-

sion of concern in the study was the determination of educational qual-

ity.

Specifically, the study proposed to:

1. Identify what program characteristics were considered most

useful for program quality-evaluation decision making by administrators

in Florida public community colleges.

2. Identify what program characteristics were considered most

useful for program quality-evaluation decision making for administra-

tors representing the five program areas of a comprehensive community

college (see Appendix A for a description of the program areas).

3. Develop information profiles consisting of the program

characteristics considered most useful for program quality-evaluation

decision making for each program area.

4. Determine if community college administrators representing










the five program areas differed in the information they identified as

most useful for program quality-evaluation decision making.

This chapter presents the results of this study. The results are

presented in three sections: a description of the study respondents,

presentation of the results for all respondents, and presentation of

the results for respondents classified by program areas. A summary of

these results with conclusions and recommendations is presented in

Chapter V.

Description of Respondents

The results are based upon an analysis of responses received from

450 administrators representing 24 of Florida's 28 public community/

junior colleges. Four colleges did not participate in the study. Of

the 631 administrators identified by the study coordinators, responses

were received from 450 for a response rate of 71.3%. All of the de-

scriptive data collected on participants were by self-report as indi-

cated on the questionnaire. The number of males to females responding

was approximately 3 to 1 (Table 1). Ninety percent of the respondents

reported having a master's degree or higher (Table 1). This was an in-

dication only of level and not type of degree. Seventy-three percent

had been at their present college for more than five years (Table 1).

Almost 42% had been in their present position for more than five years

(Table 1). More than three-fourths of the respondents reported seven

years or more of experience in community college education and more than

half reported more than five years in education other than community

college education (Table 1).








Table 1

Frequencies for All Respondents by Sex, Degree Held, Years at Present Col-
lege, Years in Present Position, Years in Community College Education and
Years in Other Than Community College Education by Self-Report (N = 450)

Variable Frequency Percent
of N

Sex
Female 122 27.1
Male 321 71.3
Not reported 7 1.6

Degree Held
Less than Bachelors 6 1.3
Bachelors 25 5.6
Masters 162 36
Specialist 36 8
Doctorate 207 46
Not reported 14 3.1

Years at Present College
5 years or less 104 23.1
6 through 10 years 125 27.8
11 through 15 years 126 28
More than 15 years 79 17.6
Not reported 16 3.6

Years in Present Position
2 years or less 138 30.7
3 through 5 years 113 25.1
6 through 10 years 101 22.4
More than 10 years 86 19.1
Not reported 12 2.7

Years in Community
College Education
6 years or less 94 20.9
7 through 11 years 128 28.4
12 through 15 years 114 25.3
More than 15 years 104 23.1
Not reported 10 2.2

Years in Education Other
Than Community College
None 80 17.8
1 through 5 years 101 22.4
6 through 10 years 84 18.7
More than 10 years 160 35.6
Not reported 25 5.6










A position code was assigned to each respondent based upon the posi-

tion title reported on the questionnaire. Position codes with corre-

sponding titles and frequencies are listed in Appendix D. Position codes

were used to classify respondents into five program areas (Advanced and

Professional, Occupational, Developmental, Community Instructional Ser-

vices, and Student Services) used in the analysis. Appendix A contains

a description of each program area with a list of position codes in-

cluded in each area. Only respondents having major responsibility (based

on reported position title) in one of the five program areas were in-

cluded in the analysis by program area. A total of 262 respondents was

identified as having major responsibility in one of the five program

areas representing 58% of the 450 respondents who participated in the

study. The number of respondents classified in each program area is

reported in Table 2.
Table 2

Number of Respondents Per Program Area and Corresponding
Percentages of All Respondents (N = 450)


Program Area No. of Respondents Percentage of N

Advanced and Professional 65 14.4

Occupational 83 18.4

Developmental 5 1.1

Community Instructional 21 4.7
Services

Student Services 88 19.6


TOTAL 262 58.2








Results for All Respondents

Mean usefulness-ratings were calculated for each program character-

istic in the questionnaire for all respondents and for respondents

classified in each of the program areas. Using these means, ranks were

calculated for the program characteristics for each classification of

respondents. When values were tied for a rank, the tied values received

the mean of the ranks that would have been assigned had the ranks not

tied. The mean usefulness-rating and rank for each characteristic are

reported in Appendix E for all respondents and for respondents classi-

fied in the program areas.
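The mid-rank rule for ties can be illustrated with a short sketch. The following is a hypothetical Python illustration, not part of the study's SAS analysis, and the example values are invented.

    def mid_ranks(values):
        # Rank values in ascending order; tied values each receive the mean
        # of the ranks they would have been assigned had they not been tied
        # (e.g., two values tied for ranks 1 and 2 each receive 1.5).
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            mean_rank = (i + 1 + j + 1) / 2    # mean of positions i+1 through j+1
            for k in range(i, j + 1):
                ranks[order[k]] = mean_rank
            i = j + 1
        return ranks

    # Example: mid_ranks([1.38, 1.45, 1.45, 1.52]) returns [1.0, 2.5, 2.5, 4.0]

Fractional ranks such as 1.5 and 12.5 in Table 3 arise from this rule.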

The ranks for the program characteristics in the upper quartile of

ranked mean usefulness-ratings, based on the ratings by all respondents,

are reported in Table 3 for all respondents and for respondents classi-

fied by program area. The 108 characteristics listed in Table 3 are in

order by rank as determined for all respondents with the corresponding

rank for each characteristic indicated for respondents classified in the

five program areas of Advanced and Professional (Advan. & Prof.), Occupa-

tional (Occup.), Developmental (Develop.), Community Instructional Ser-

vices (Comm. Instr. Serv.), and Student Services (Stu. Serv.).

For all respondents, the program characteristics ranked 1 through 91

had mean usefulness-ratings ranging from 1.38 to 1.99, and the program

characteristic ranked 108 had

a mean usefulness-rating of 2.05. Since the rating scale ranged from 1

(indicating that the program characteristic was considered "essential"

in making a judgment about the quality of a program) to 4 (indicating

that the program characteristic was considered of "little or no useful-

ness" or "not applicable" in arriving at a judgment of the quality of a








Table 3

Ranks for Program Characteristics in the Upper Quartile
of Mean Usefulness-Ratings for All Respondents With Cor-
responding Ranks for Respondents Classified into
Program Areas

Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

9 7 1.5 1 3 1 Total cost of program
4 2 12.5 7 18 2 Ratings of program curriculum by program completers
15 1 162.5 11 11 3 Employer opinion of program completers
3 3 19 13 4 4 Number or percent of students completing a program
21 4 4.5 2 16 5 Clearly stated program objectives
7 9 1.5 8.5 22.5 6 Cost of instructional personnel per total program
6 27 76.5 37 14 7 Total cost per program FTE
23.5 15 27 4.5 11 8 Level of demand for program/service in service area
1 26 12.5 64 15 9 Number or percent of full-time faculty/staff by de-
grees held
5 5 76.5 3 17 10 Number of students enrolling in a program
10 13 76.5 16 7 11 Number or percent of students withdrawing from a pro-
gram
55.5 8 294 23 9 12 Number or percent of students passing state board or
licensure exams
19 6 4.5 11 22.5 13 Cost of materials per total program
18 11 12.5 4.5 8 14 Level of demand for program/service by students
14 33 76.5 45.5 33 15 Cost of instructional personnel per program FTE
30 43 47 42 1 16 Ratings of accessibility of student services by cur-
rently enrolled students
12 42 76.5 29.5 19 17 Total cost of program per unduplicated headcount
31.5 10 27 8.5 87 18 Ratings of program facilities/equipment by faculty/
staff
63 11.5 47 18.5 50 19 Ratings of program facilities/equipment by program
completers








Table 3-Continued

Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

37.5 63 47 42 2 20 Ratings of ease of use of student services by cur-
rently enrolled students
22 18 226.5 14.5 29 21 Job satisfaction ratings by program completers
23.5 15 47 64 34.5 22 Number or percent of full-time faculty/staff by
number of students per term
29 22 12.5 6 103.5 23 Ratings of program instructional strategies by
faculty/staff
13 20.5 47 64 25 24 Number or percent of full-time faculty/staff by years
taught/service
33.5 58.5 47 80 5.5 25 Ratings of usefulness of student services by current-
ly enrolled students
26 18 27 39 39 26 Number or percent of full-time faculty/staff by
average class size
20 23 4.5 56.5 31 27 Number or percent of full-time faculty/staff by num-
ber of student contact hours per term
64.5 39 12.5 18.5 24 28 Cost of program administration per total program
69 53.5 12.5 25.5 26 29 Cost of support services per total program
25 29 76.5 145 27 30 Number or percent of full-time faculty/staff by num-
ber of course hours taught per term
16 60.5 105.5 33.5 37.5 31 Cost of instructional personnel per program undupli-
cated headcount
11 30 27 117 32 32 Number or percent of full-time faculty/staff by
length of service in program
17 64.5 76.5 25.5 45.5 33 Number or percent of part-time faculty/staff by de-
grees held
40 35 27 48.5 36 34 Ratings of program staff by program completers
62 38 294 71 44 35 Number or percent of program completers by type of
license, certificate, or registration received
129 60.5 47 67.5 5.5 36 Ratings of accessibility of student services by pro-
gram completers









Table 3-Continued

Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

58 49.5 27 33.5 48.5 37 Number of support staff per total program
31.5 24 27 18.5 91.5 38 Ratings of program administration by faculty/staff
91.5 69 47 93.5 13 39 Ratings of usefulness of student services by pro-
gram completers
82.5 28 294 37 55.5 40 Number or percent of program completers taking
state board or licensure exams
27.5 56.5 76.5 37 77.5 41 Cost of materials per program FTE
99 18 12.5 21.5 99.5 42 Ratings of program instructional strategies by
program completers
2 40 47 222 58 43 Ratio of part-time to full-time faculty/staff
45 44 4.5 48.5 48.5 44 Cost of equipment maintenance per total program
77 32 162.5 71 83 45 Ratings by accreditation agencies
75.5 37 76.5 56.5 43 46 Number or percent of full-time faculty/staff by pro-
ductivity ratio
33.5 41 76.5 95 28 47 Number or percent of full-time faculty/staff by
rate of faculty/staff turnover
39 53.5 76.5 42 41.5 48 Cost of materials per program unduplicated headcount
58 35 47 18.5 77.5 49 Number or percent of part-time faculty/staff by
average class size
129 82 47 75.5 11 50 Ratings of ease of use of student services by pro-
gram completers
42 45 76.5 126.5 63.5 51 Equipment utilization per total program
109 15 27 14.5 144 52 Ratings of a program curriculum by faculty/staff
43 53.5 105.5 25.5 105.5 53 Ratings of support services by faculty/staff
86 47 168 29.5 105.5 54 Ratings by certification boards
44 71 194.5 87.5 47 55 Program admission requirements
58 35 76.5 42 87 56 Number or percent of part-time faculty/staff by num-
ber of students per term









Table 3-Continued


Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

106 20.5 268 75.5 71 57 Number or percent of program completers holding jobs
for which trained
8 97 27 117 34.5 58 Number or percent of full-time faculty/staff by
certification/rank
75.5 58.5 47 33.5 87 59 Ratings of program facilities/equipment by currently
enrolled students
129 74 27 113 20.5 60 Ratings of support services by program completers
142 82 12.5 87.5 20.5 61 Ratings of support services by currently enrolled
students
81 56.5 47 87.5 67.5 62 Level of demand for program/service in state
51 99.5 76.5 71 52 63 Cost of program administration per program FTE
80 67 12.5 33.5 73 64 Number/types of changes as a result of program eval-
uation
61 49.5 12.5 21.5 164.5 65 Ratings of program staff by faculty/staff
109 67 27 59.5 81.5 66 Ratings of program staff by currently enrolled stu-
dents
51 47 130.5 161 109 67 Number or percent of full-time faculty/staff by
level of use of alternative instructional methods
53.5 29 76.5 117 69 68.5 Number or percent of full-time faculty/staff by
level of participation in program decision making
73 117 105.5 51 70 68.5 Cost of support services per program FTE
91.5 47 12.5 48.5 111.5 70 Number or percent of part-time faculty/staff by num-
ber of student contact hours per term
115 51 105.5 29.5 96.5 71 Number or percent of part-time faculty/staff by pro-
ductivity ratio
100 92 27 59.5 45.5 72 Cost of space utilized per total program
64.5 77 130.5 87.5 98 73 Space utilization per total program
46 113 76.5 126.5 76 74 Number or percent of entering students by types of
developmental or remedial assistance desired









Table 3-Continued


Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

90 119.5 105.5 67.5 37.5 75 Cost of support services per program unduplicated
headcount
113 64.5 47 67.5 53.5 76 Number or percent of entering students by type of
handicap
41 111 47 165 93 77 Number or percent of entering students by academic
skills level as measured by local instruments
71 31 358 139 61 78 Number or percent of entering students by major area
of study
97.5 92 76.5 113 41.5 79 Cost of program administration per program undupli-
cated headcount
49 86 47 199.5 80 80 Number or percent of full-time faculty/staff by num-
ber of FTE per term
115 70 47 53.5 147.5 81 Ratings of program instructional strategies by cur-
rently enrolled students
53.5 53.5 358 148.5 57 82 Number or percent of currently enrolled students by
major area of study
36 79 145 93.5 127 83 Number or percent of program completers by average
time taken for completion of a program
109 75.5 47 146 99.5 84 Number/types of changes as a result of accreditation
studies
66.5 88 76.5 11 111.5 85 Number or percent of part-time faculty/staff by years
taught/service
88 111 105.5 157 119.5 86 Number or percent of currently enrolled students by
types of developmental or remedial assistance desired
120.5 62 162.5 99.5 119.5 87 Number or percent of part-time faculty/staff by num-
ber of course hours taught per term
55.5 106.5 226.5 217.5 75 88 Number or percent of currently enrolled students by
average GPA of students in program
51 102.5 76.5 132.5 130 89 Cost of equipment maintenance per program FTE









Table 3-Continued

Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

60 80 76.5 29.5 132 90 Number or percent of part-time faculty/staff by
length of service in a program
103 106.5 226.5 110 107 91 Number or percent of currently enrolled students by
percent of total college FTE in program
126.5 86 318 175.5 65.5 92 Number or percent of currently enrolled students by
number of hours with failing grade
120.5 84 268 315.5 65.5 93 Number or percent of currently enrolled students by
cumulative GPA categories for program-related course-
work
135 90 47 25.5 138 94 Ratings of a program curriculum by currently enrolled
students
95.5 137.5 76.5 120 124 95 Number of support staff per program FTE
72 98 105.5 157 114.5 96 Equipment utilization per program unduplicated head-
count
94 101 194.5 260.5 140 97 Number of library holdings per total program
140.5 75.5 47 87.5 91.5 98 Number or percent of currently enrolled students by
type of handicap
136 133 105.5 126.5 63.5 99 Number of support staff per program unduplicated head-
count
143.5 86 27 126.5 131 100 Ratings of program administration by program complet-
ers
37.5 104.5 76.5 184 164.5 101 Equipment utilization per program FTE
106 111 162.5 165 116 102 Number or percent of entering students by level of
previous academic achievement
27.5 129 76.5 62 119.5 103 Number or percent of part-time faculty/staff by
certification/rank
132 145 145 80 55.5 104 Number or percent of currently enrolled students by
percent of total college unduplicated headcount in
program








Table 3-Continued

Advan. Occup. Develop. Comm. Stu. All Program Characteristics
& Prof. Instr. Serv.
Serv.

161 123 105.5 45.5 101 105 Ratings of usefulness of student services by faculty/
staff
131 67 130.5 99.5 186 106 Number or percent of part-time faculty/staff by level
of use of alternative instructional methods
95.5 92 76.5 82 102 107 Number or percent of full-time faculty/staff by level
of compensation
35 137.5 130.5 209 144 108 Number or percent of currently enrolled students by
academic skills level as measured by local instruments









program), all 108 characteristics in Table 3 had a mean usefulness-rating

on the "essential" side of the rating scale. The mean usefulness-ratings

for all respondents ranged from 1.38 to 3.48 for all 434 program charac-

teristics (Appendix E).

In the questionnaire, the program characteristics were organized

into four categories relating to students, faculty/staff, costs/resour-

ces, and general information (Appendix C). The distribution of the pro-

gram characteristics in the upper quartile of mean usefulness-ratings by

all respondents among all four categories of program characteristics is

reported in Table 4.

The program characteristics relating to students in the upper quar-

tile of mean usefulness-ratings for all respondents are listed in Table

5. Program characteristics relating to program completers received the

highest ratings for usefulness in making program quality-evaluation de-

cisions, along with the number of students enrolling in a program (rank 10)

Table 4

Distribution by Category of Program Characteristics in the Upper
Quartile of Mean Usefulness-Ratings by All Respondents

Category Number of Percentage of Upper
Characteristics Quartile Characteristics

I. Program Characteristics 22 20.4
Relating to Students

II. Program Characteristics 25 23.1
Relating to Faculty/Staff

III. Program Characteristics 26 24.1
Relating to Costs/Resources

IV. Program Characteristics 35 32.4
Relating to General
Information









and number or percent of students withdrawing from a program (rank 11).

Next in perceived usefulness in program quality-evaluation decision mak-

ing were program characteristics relating to entering students followed

by program characteristics relating to currently enrolled students. Four

of the five program characteristics rated as highly useful for program

quality-evaluation decision making relating to entering students were in-

cluded in the program characteristics relating to currently enrolled stu-

dents.

The program characteristics relating to faculty/staff in the upper

quartile of mean usefulness-ratings by all respondents are listed in

Table 5

Program Characteristics Relating to Students (Questionnaire Category I) in
the Upper Quartile of Mean Usefulness-Ratings by All Respondents With Ranks


Ranks Program Characteristics

10 Number of students enrolling in a program
Number or percent of entering students:
74 by types of developmental or remedial assistance desired
76 by type of handicap
77 by academic skills level as measured by local instruments
78 by major area of study
102 by level of previous academic achievement
Number or percent of currently enrolled students:
82 by major area of study
86 by types of developmental or remedial assistance desired
88 by average GPA of students in program
91 by percent of total college FTE in program
92 by number of hours with failing grade
93 by cumulative GPA categories for program-related coursework
98 by type of handicap
104 by percent of total college unduplicated headcount in program
108 by academic skills level as measured by local instruments
4 Number or percent of students completing a program
Number or percent of program completers:
12 passing state board or licensure exams
35 by type of license, certificate or registration received
40 taking state board or licensure exams
57 holding jobs for which trained
83 by average time taken for completion of a program
11 Number or percent of students withdrawing from a program









Table 6. Although the rank-order differed, all of the program character-

istics rated as highly useful in making program quality-evaluation deci-

sions and which related to part-time faculty/staff appeared in the list

for full-time faculty/staff. The program characteristic "Number or per-

cent of faculty/staff by degrees held" had the highest rank for the pro-

gram characteristics related to both full-time and part-time faculty/

staff. The 10 program characteristics common to both full-time and part-

time faculty/staff appeared to be indicators of: level of preparation

(degrees held and certification/rank), level of experience (years taught/

Table 6

Program Characteristics Relating to Faculty/Staff (Questionnaire Category
II) in the Upper Quartile of Mean Usefulness-Ratings
by All Respondents With Ranks


Ranks Program Characteristics

43 Ratio of part-time to full-time faculty/staff
Number or percent of full-time faculty/staff:
9 by degrees held
22 by number of students per term
24 by years taught/service
26 by average class size
27 by number of student contact hours per term
30 by number of course hours taught per term
32 by length of service in program
46 by productivity ratio
47 by rate of faculty/staff turnover
58 by certification/rank
67 by level of use of alternative instructional methods
68.5 by level of participation in program decision making
80 by number of FTE per term
107 by level of compensation
Number or percent of part-time faculty/staff:
33 by degrees held
49 by average class size
56 by number of students per term
70 by number of student contact hours per term
71 by productivity ratio
85 by years taught/service
87 by number of course hours taught per term
90 by length of service in a program
103 by certification/rank
106 by level of use of alternative instructional methods








service, length of service), level of productivity (number of students,

student contact hours, course hours taught per term, average class size,

productivity ratio), and level of instructional skill (use of alternative

instructional methods). For full-time faculty/staff there was an addi-

tional productivity indicator (number of FTE per term). Four program

characteristics related to full-time faculty/staff appeared to be indica-

tors of the functioning of program/service administration: ratio of

part-time to full-time faculty/staff, rate of faculty/staff turnover,

level of participation in decision making, and level of compensation.

The program characteristics related to costs/resources in the upper

quartile of mean usefulness-ratings by all respondents are listed in

Table 7. Three of the top 10 program characteristics ranked by mean use-

fulness-ratings by all respondents were included in this category of pro-

gram characteristics: total cost per total program (rank 1), total cost

per program FTE (rank 7), and cost of instructional personnel per total

program (rank 6). There were seven types of program characteristics

rated as highly useful in program quality-evaluation decision making re-

lated to costs/resources: total cost, cost of administration, cost of

instructional personnel, cost of support services, cost of materials,

cost of equipment maintenance, and cost of space utilized. For the

first five of these, the program characteristic had the highest useful-

ness-rating per total program, followed by per program FTE, which was

followed by per program unduplicated headcount. Four other types of pro-

gram characteristics related to costs/resources were rated as highly use-

ful in making program quality-evaluation decisions: number of support

staff, equipment utilization, space utilization, and number of library

holdings. The order of emphasis for usefulness in program quality-









Table 7

Program Characteristics Relating to Costs/Resources (Questionnaire Cate-
gory III) in the Upper Quartile of Mean Usefulness-Ratings
by All Respondents With Ranks


Ranks Program Characteristics

Total cost:
1 per total program
7 per program FTE
17 per program unduplicated headcount
Cost of administration:
28 per total program
63 per program FTE
79 per program unduplicated headcount
Cost of instructional personnel:
6 per total program
15 per program FTE
31 per program unduplicated headcount
Cost of support services:
29 per total program
68.5 per program FTE
75 per program unduplicated headcount
Cost of materials:
13 per total program
41 per program FTE
48 per program unduplicated headcount
Cost of equipment maintenance:
44 per total program
89 per program FTE
72 Cost of space utilized per total program
Number of support staff:
37 per total program
95 per program FTE
99 per program unduplicated headcount
Equipment utilization:
51 per total program
96 per program unduplicated headcount
101 per program FTE
73 Space utilization per total program
97 Number of library holdings per total program


evaluation decision making per total program was: total cost, cost of

instructional personnel, cost of materials, cost of administration, cost

of support services, number of support staff, cost of equipment mainten-

ance, equipment utilization, cost of space utilized, space utilization,









and number of library holdings. The order of emphasis per program FTE

and per program unduplicated headcount varied slightly from the order

of emphasis per total program.
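The three cost bases compared above are not defined arithmetically in the
text; a minimal sketch of the assumed computation (in Python, with
hypothetical figures) would divide a program's total cost by its FTE
enrollment and by its unduplicated headcount.

    # A minimal sketch of the assumed cost bases; figures are hypothetical.
    def cost_bases(total_cost, program_fte, unduplicated_headcount):
        return {"per total program": total_cost,
                "per program FTE": total_cost / program_fte,
                "per unduplicated headcount": total_cost / unduplicated_headcount}

    print(cost_bases(total_cost=250_000, program_fte=125,
                     unduplicated_headcount=400))
    # {'per total program': 250000, 'per program FTE': 2000.0,
    #  'per unduplicated headcount': 625.0}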

The program characteristics related to general information in the

upper quartile of mean usefulness-ratings by all respondents are listed

in Table 8. Four of the top 10 program characteristics ranked by mean

usefulness-ratings by all respondents were in this category: ratings of

a program curriculum by program completers (rank 2), employer opinion of

program completers (rank 3), clearly stated program objectives (rank 5),

and level of demand for program/service in service area (rank 8). Rat-

ings of various aspects of a program/service by various groups were rated

as highly useful in making program quality-evaluation decisions. Ratings

of a program curriculum and program staff by program completers had the

highest usefulness-rating followed by ratings by faculty/staff and then

ratings by currently enrolled students. A different ranking for type

of rater occurred for ratings of program facilities/equipment, program

instructional strategies, program administration, and support services.

For these aspects of a program/service, ratings by faculty/staff were

rated as most useful followed by ratings by program completers which was

followed by ratings by currently enrolled students, except that the lat-

ter ratings were not rated as highly useful for rating program adminis-

tration. For the ratings of student services, a different ranking for

raters occurred. Ratings by currently enrolled students of accessibil-

ity of student services, ease of use of student services, and usefulness

of student services were rated as most useful followed by ratings by

program completers. Program admission requirements, ratings by accredi-

tation agencies and certification boards, and changes resulting from









Table 8

Program Characteristics Relating to General Information (Questionnaire
Category IV) in the Upper Quartile of Mean Usefulness-Ratings
by All Respondents With Ranks


Ranks Program Characteristics

3 Employer opinion of program completers
5 Clearly stated program objectives
Level of demand for program/service:
8 in service area
14 by students
62 in state
55 Program admission requirements
45 Ratings by accreditation agencies
54 Ratings by certification boards
Number/types of changes as a result of:
64 program evaluation
84 accreditation studies
21 Job satisfaction ratings by program completers
Ratings of a program curriculum:
2 by program completers
52 by faculty/staff
94 by currently enrolled students
Ratings of program facilities/equipment:
18 by faculty/staff
19 by program completers
59 by currently enrolled students
Ratings of program instructional strategies
23 by faculty/staff
42 by program completers
81 by currently enrolled students
Ratings of program staff:
34 by program completers
65 by faculty/staff
66 by currently enrolled students
Ratings of program administration:
38 by faculty/staff
100 by program completers
Ratings of support services:
53 by faculty/staff
60 by program completers
61 by currently enrolled students
Ratings of accessibility of student services:
16 by currently enrolled students
36 by program completers
Ratings of ease of use of student services:
20 by currently enrolled students
50 by program completers
Ratings of usefulness of student services:
25 by currently enrolled students
39 by program completers
105 by faculty/staff









program evaluations and accreditation studies were rated, also, as highly

useful in making quality-evaluation decisions about programs or services.

From these tables (Tables 5 through 8), it may be seen that a wide

variety of program characteristics, as rated by all respondents for degree

of usefulness in program quality-evaluation decision making, had high

mean usefulness-ratings (ranging from 1.38 to 2.05). In Table 9, the

108 program characteristics in the upper quartile of mean usefulness-

ratings by all respondents are organized into 11 types of information

including: need for and structure of a program, size, costs, utiliza-

tion rates, support services, information on entering students, infor-

mation on currently enrolled students, information on faculty/staff,

information from external/internal evaluations, quantitative outputs,

and ratings. In this one table, all the program characteristics rated

as highly useful by all respondents are displayed in a comprehensive

multi-variate information profile for making quality-evaluation deci-

sions about programs or services as perceived by administrators in

Florida's community colleges.

Results for Program Areas

The preceding describes the program characteristics rated as most

highly useful based upon the mean usefulness-ratings by all respondents.

There were differences in the program characteristics in the upper quar-

tile of mean usefulness-ratings when the responses were analyzed by re-

spondents classified in the five program areas: Advanced and Profes-

sional, Occupational, Developmental, Community Instructional Services,

and Student Services. Only respondents who, based upon their position ti-

tles, were perceived as having major responsibility in one of the five









Table 9

Information Profile of Program Characteristics in the Upper Quartile of
Mean Usefulness-Ratings by All Respondents (N = 450)


Information
Type Program Characteristics Relating to Information Type

Need and Clearly stated program objectives, level of demand for
structure program, program admission requirements

Size Number enrolling, percent of total college FTE and undu-
plicated headcount

Costs Total, instructional personnel, materials, administration,
support services, equipment maintenance, space utilized

Utiliza- Equipment, space
tion rates

Support Number of support staff, number of library holdings
services

Entering Type of developmental or remedial assistance desired, type
students of handicap, academic skills level (previous and as
assessed by local instruments), major area of study

Currently Same as for entering students without level of previous
enrolled academic achievement and adding information on performance
students of students in program (GPA, cumulative GPA, hours with
failing grade)

Faculty/ For both full-time and part-time: ratio of part-time to
staff full-time; level of preparation (degrees held, certifica-
tion/rank); level of experience (years taught/service,
length of service); level of productivity (number of stu-
dents, student contact hours, and course hours taught per
term, average class size, productivity ratio); level of
instructional skill (use of alternative instructional
methods). For full-time: rate of turnover; level of par-
ticipation in program decision making; level of compensa-
tion; number of FTE per term

External/ Ratings by accreditation agencies and certification boards,
internal number/types of changes as result of these studies, and
evaluations other program evaluation

Quanti- Number or percent: completing; taking and passing state
tative board or licensure exams; by type of license, certifi-
outputs cate, or registration received; holding jobs for which
trained; by average time for completion; withdrawing









Table 9-Continued

Information
Type Program Characteristics Relating to Information Type

Ratings Of program completers by employers; of job satisfaction
by program completers; of a program's curriculum, facil-
ities/equipment, instructional strategies, staff, and
administration by various types of raters; of support
services and student services by various types of raters


program areas were included in the analysis by program area. The posi-

tion codes included in each program area are listed in Appendix A. The

number of respondents in each program area and the percentage of all re-

spondents which this represents are given in Table 2.

Using the ranked mean usefulness-ratings for the same 108 program

characteristics as reported in Table 3, Spearman rank-order correlation

coefficients were calculated between the five program areas and between

all respondents and each program area. These are reported in Table 10.

They ranged from a low of .25 to a high of .64, excluding the correla-

tion coefficients of each program area with all respondents. The

Spearman rank-order correlation coefficients were a measure of the de-

gree of similarity in the ranks of the program characteristics as or-

dered by mean usefulness-ratings across the five program areas. The

coefficients indicated considerable variability in the degree of simi-

larity of the ranked upper quartile mean usefulness-ratings among the

program areas. To determine where the differences and similarities

occurred in the program characteristics among the program areas, the

program characteristics in the upper quartile of mean usefulness-rat-

ings for each program area were identified. They are reported for each

program area in the same manner as the program characteristics in the

upper quartile of mean usefulness-ratings for all respondents were









Table 10

Spearman Rank-Order Correlation Coefficients for the Upper Quartile of
Mean Usefulness-Ratings by All Respondents for Respondents Classified
in the Five Program Areas


All Advan. Occup. Develop. Comm. Stu.
& Prof. Instr. Serv.
Serv.

All 1.00 .69 .85 .43 .66 .77

Advan. 1.00 .55 .25 .35 .47
& Prof.

Occup. 1.00 .34 .64 .50

Develop. 1.00 .43 .29

Comm. Instr. 1.00 .33
Serv.

Stu. Serv. 1.00


presented, i.e., organized into the four categories (program character-

istics related to students, faculty/staff, costs/resources, and general

information) and then displayed in a summary information-profile table.

The mean usefulness-ratings for all program characteristics for each pro-

gram area are reported in Appendix E.
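The Spearman coefficients in Table 10 can be obtained directly from two rank
orderings of the same 108 characteristics. The following minimal sketch (in
Python, with hypothetical ranks rather than the study's data) computes the
coefficient as the Pearson correlation of the ranks, which accommodates tied
ranks; with no ties it reduces to the familiar 1 - 6*sum(d^2)/(n(n^2 - 1))
formula.

    # A minimal sketch, assuming two lists of ranks for the same program
    # characteristics (one list per program area); data are hypothetical.
    import math

    def spearman_rho(ranks_a, ranks_b):
        """Spearman rank-order correlation, computed as the Pearson
        correlation of the two sets of ranks (valid with tied ranks)."""
        n = len(ranks_a)
        mean_a = sum(ranks_a) / n
        mean_b = sum(ranks_b) / n
        cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(ranks_a, ranks_b))
        var_a = sum((a - mean_a) ** 2 for a in ranks_a)
        var_b = sum((b - mean_b) ** 2 for b in ranks_b)
        return cov / math.sqrt(var_a * var_b)

    # Hypothetical ranks for five characteristics in two program areas.
    print(round(spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]), 2))   # 0.8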

Advanced and Professional Program Area

The distribution of the program characteristics in the upper quar-

tile of mean usefulness-ratings by respondents classified in the Ad-

vanced and Professional Program Area among the four categories of pro-

gram characteristics is reported in Table 11. Compared to the distri-

bution for all respondents (Table 4), respondents classified in the Ad-

vanced and Professional Program Area identified more characteristics re-

lated to students and fewer characteristics related to general informa-

tion as highly useful in program quality-evaluation decision making.








Table 11

Distribution by Category of Program Characteristics in the Upper Quartile
of Mean Usefulness-Ratings by Respondents (N = 65) Classified in the Ad-
vanced and Professional Program Area


Category Number of Percentage of Upper
Characteristics Quartile Characteristics

I. Program Characteristics 30 28.0
Related to Students

II. Program Characteristics 24 22.4
Related to Faculty/Staff

III. Program Characteristics 29 27.2
Related to Costs/Resources

IV. Program Characteristics 24 22.4
Related to General
Information

TOTAL 107 100.0


The means of the usefulness-ratings for these program characteristics

ranged from 1.28 to 1.95 (Appendix E). The program characteristics in

the upper quartile of mean usefulness-ratings by respondents classified

in the Advanced and Professional Program Area are presented with ranks

in the next four tables. Those related to students are presented in

Table 12, those related to faculty/staff in Table 13, those related to

costs/resources in Table 14, and those related to general information in

Table 15.

Table 12 shows that respondents classified in the Advanced and Pro-

fessional Program Area rated program characteristics concerning the

measurement of academic skills through testing as highly useful in making

quality-evaluation decisions for all categories of students (entering,

currently enrolled, completers). The types of testing identified included

local, state, and national instruments, with the mean usefulness-ratings









Table 12

Program Characteristics Relating to Students (Questionnaire Category I)
in the Upper Quartile of Mean Usefulness-Ratings by Respondents Classi-
fied in the Advanced and Professional Program Area With Ranks


Ranks Program Characteristics

5 Number of students enrolling in a program
Number or percent of entering students:
41 by academic skills level as measured by local instruments
46 by types of developmental or remedial assistance desired
47.5 by academic skills level as measured by state instruments
71 by major area of study
88 by academic skills level as measured by national instruments
102 by degree level sought
106 by level of previous academic achievement
Number or percent of currently enrolled students:
35 by academic skills level as measured by local instruments
53.5 by major area of study
55.5 by average GPA of students in program
68 by academic skills level as measured by state instruments
82.5 by average course load for students in program
88 by academic skills level as measured by national instruments
88 by types of developmental or remedial assistance desired
93 by number of hours of developmental/remedial work
101 by performance on standardized state tests
103 by percent of total college FTE in program
3 Number or percent of students completing a program
Number or percent of program completers:
36 by average time taken for completion of a program
47.5 by performance on standardized state tests
55.5 passing state board or licensure exams
62 by type of license, certificate, or registration received
70 by performance on standardized national tests
74 by major area of study
82.5 taking state board or licensure exams
84.5 by academic skills level as measured by local instruments
106 holding jobs for which trained
106 by academic skills level as measured by state instruments
10 Number or percent of students withdrawing from a program


ranked in the order as cited for entering and currently enrolled stu-

dents. For program completers, the mean usefulness-ratings rank order

was standardized state tests, standardized national tests, and local

instruments. Only one program characteristic related to testing

appeared in the upper quartile of mean usefulness-ratings by all









respondents: number or percent of entering and currently enrolled stu-

dents by academic skills level as measured by local instruments (Table

5).

Three of the top 10 program characteristics ranked by mean useful-

ness-ratings by respondents classified in the Advanced and Professional

Program Area related to faculty/staff (Table 13): number or percent of

full-time faculty/staff by degrees held (rank 1), by certification/rank

(rank 8), and the ratio of part-time to full-time faculty/staff (rank 2).

All of the program characteristics related to full-time faculty/staff in

Table 13

Program Characteristics Relating to Faculty/Staff (Questionnaire Cate-
gory II) in the Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Advanced and Professional Program Area With Ranks


Ranks Program Characteristics

2 Ratio of part-time to full-time faculty/staff
78 Ratio of faculty to student support staff
84.5 Ratio of faculty/staff to clerical staff
Number or percent of full-time faculty/staff:
1 by degrees held
8 by certification/rank
11 by length of service in program
13 by years taught/service
20 by number of student contact hours per term
23.5 by number of students per term
25 by number of course hours taught per term
26 by average class size
33.5 by rate of faculty/staff turnover
49 by number of FTE per term
51 by level of use of alternative instructional methods
53.5 by level of participation in program decision making
75.5 by productivity ratio
99.5 by level of compensation
Number or percent of part-time faculty/staff:
17 by degrees held
27.5 by certification/rank
58 by average class size
58 by number of students per term
60 by length of service in a program
66.5 by years taught/service
91.5 by number of student contact hours per term








the upper quartile of mean usefulness-ratings by all respondents (Table

6) were in the upper quartile of mean usefulness-ratings by respondents

classified in the Advanced and Professional Program Area. Two program

characteristics appeared which were not in the upper quartile of mean

usefulness-ratings by all respondents: the ratio of faculty to student

support staff and the ratio of faculty/staff to clerical staff. Three

characteristics related to part-time faculty/staff that appeared in the

upper quartile of mean usefulness-ratings by all respondents (Table 6) did

not appear for this program area: productivity ratio, number of course

hours taught per term, and level of use of alternative instructional methods.

Program characteristics related to costs/resources (Table 14) also

had high mean usefulness-ratings for respondents classified in the Ad-

vanced and Professional Program Area. Three of the top 10 program char-

acteristics ranked by mean usefulness-ratings by respondents classified

in this program area related to costs/resources: total cost per pro-

gram FTE (rank 6) and per total program (rank 9) and cost of instruc-

tional personnel per total program (rank 7). The order of emphasis for

usefulness in Advanced and Professional program quality-evaluation deci-

sion making per total program was cost of instructional personnel, total

cost, cost of materials, equipment utilization, cost of equipment main-

tenance, number of support staff, cost of administration, space utiliza-

tion, cost of support services, number of library holdings, and cost of

space utilized. The order of emphasis per program FTE and per program

unduplicated headcount varied slightly from the order of emphasis per

total program. In addition to the program characteristics related to

costs/resources which appeared in the upper quartile of mean usefulness-

ratings by all respondents (Table 7), respondents classified in the Ad-

vanced and Professional Program Area indicated as highly useful space

utilization per program FTE and per program unduplicated headcount, cost









Table 14

Program Characteristics Relating to Costs/Resources (Questionnaire Cate-
gory III) in the Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Advanced and Professional Program Area With Ranks


Ranks Program Characteristics

Total cost:
6 per program FTE
9 per total program
12 per program unduplicated headcount
Cost of administration
51 per program FTE
64.5 per total program
97.5 per program unduplicated headcount
Cost of instructional personnel:
7 per total program
14 per program FTE
16 per program unduplicated headcount
Cost of support services:
69 per total program
73 per program FTE
90 per program unduplicated headcount
Cost of materials:
19 per total program
27.5 per program FTE
39 per program unduplicated headcount
Cost of equipment maintenance:
45 per total program
51 per program FTE
97.5 per program unduplicated headcount
100 Cost of space utilized per total program
Number of support staff:
58 per total program
95.5 per program FTE
Equipment utilization:
37.5 per program FTE
42 per total program
72 per program unduplicated headcount
Space utilization:
64.5 per total program
66.5 per program FTE
79 per program unduplicated headcount
Number of library holdings:
94 per total program
104 per program FTE


of equipment maintenance per program unduplicated headcount, and number

of library holdings per program FTE. Number of support staff per









program unduplicated headcount, in the upper quartile of mean useful-

ness-ratings by all respondents (Table 7) did not appear in the upper

quartile of mean usefulness-ratings by respondents classified in the

Advanced and Professional Program Area.

Only one of the top 10 program characteristics ranked by mean use-

fulness-ratings for respondents classified in the Advanced and Profes-

sional Program Area related to general information (Table 15): ratings

Table 15

Program Characteristics Relating to General Information (Questionnaire
Category IV) in the Upper Quartile of Mean Usefulness-Ratings by Respon-
dents Classified in the Advanced and Professional Program Area With Ranks


Ranks Program Characteristics

21 Clearly stated program objectives
44 Program admission requirements
Level of demand for program/service:
18 by students
23.5 in service area
81 in state
77 Ratings by accreditation agencies
86 Ratings by certification boards
80 Number/types of changes as a result of program evaluation
15 Employer opinion of program completers
22 Job satisfaction ratings by program completers
4 Ratings of a program curriculum by program completers
Ratings of program facilities/equipment:
31.5 by faculty/staff
63 by program completers
75.5 by currently enrolled students
Ratings of program instructional strategies:
29 by faculty/staff
99 by program completers
Ratings of program staff:
40 by program completers
61 by faculty/staff
31.5 Ratings of program administration by faculty/staff
43 Ratings of support services by faculty/staff
30 Ratings of accessibility of student services by currently enrolled
students
37.5 Ratings of ease of use of student services by currently enrolled
students
Ratings of usefulness of student services:
33.5 by currently enrolled students
91.5 by program completers









of a program curriculum by program completers (rank 4). In order, by

mean usefulness-rating rank, this was followed by employer opinion of

program completers (rank 15), level of demand for program/service by

students (rank 18), clearly stated program objectives (rank 21), and

job satisfaction ratings by program completers (rank 22). Ratings by

faculty/staff of program instructional strategies, program facilities/

equipment, program administration, and support services had higher mean

usefulness-ratings than ratings by either program completers or cur-

rently enrolled students. For ratings of program staff, ratings by pro-

gram completers rather than by faculty/staff had a higher mean useful-

ness-rating rank. Neither ratings of program instructional strategies,

program staff, or support services by currently enrolled students nor

ratings of program administration or support services by program com-

pleters appeared in the upper quartile mean usefulness-ratings by re-

spondents classified in the Advanced and Professional Program Area.

They were in the upper quartile of mean usefulness-ratings by all re-

spondents (Table 8).

Ratings of accessibility, usefulness, and ease of use of student ser-

vices by currently enrolled students had a higher mean usefulness rating

rank than such ratings by either program completers or faculty/staff.

Three other types of characteristics related to general information were

rated as highly useful in program quality-evaluation decision making by

respondents classified in the Advanced and Professional Program Area:

program admission requirements, ratings by accreditation agencies and

certification boards, and number/types of changes as a result of program

evaluation. These occurred in the upper quartile of mean usefulness-

ratings by all respondents (Table 8).









In Table 16, the program characteristics in the upper quartile of

mean usefulness-ratings by respondents classified in the Advanced and

Professional Program Area were organized into a program quality-evalua-

tion information profile using the same 11 types of information which

were used in the program quality-evaluation information profile for all

respondents (Table 9). A comparison of Tables 9 and 16 shows clearly

the similarities and differences in the two information profiles. Al-

though the order of emphasis varied relating to usefulness of the pro-

gram characteristics in program quality-evaluation decision making, the

areas of similarity were in the information types of need and structure,

size, costs, utilization rates, support services, external/internal

evaluations, and ratings. The areas of differences were in the infor-

mation types of entering students, currently enrolled students, faculty/

staff, and quantitative outputs.

Occupational Program Area

The distribution of the program characteristics in the upper quar-

tile of mean usefulness-ratings by respondents classified in the Occu-

pational Program Area among the four categories of program characteris-

tics is reported in Table 17. There were 109 program characteristics

identified for the Occupational Program Area because two were tied for

rank 108. Compared to the distribution for respondents classified in

the Advanced and Professional Program Area (Table 11), respondents clas-

sified in the Occupational Program Area identified approximately 10%

more program characteristics related to general information, 6% more

program characteristics related to costs/resources, and 6% less pro-

gram characteristics related to students. The distribution of the pro-

gram characteristics among the four categories for respondents









Table 16

Information Profile of Program Characteristics in the Upper Quartile of
Mean Usefulness-Ratings by Respondents Classified in the Advanced and
Professional Program Area


Information
Type Program Characteristics Relating to Information Type

Need and Level of demand for program, clearly stated program objec-
structure tives, program admission requirements

Size Number enrolling, percent of total college FTE

Costs Instructional personnel, total, materials, equipment main-
tenance, administration, support services, space utilized

Utiliza- Equipment, space
tion rates

Support Number of support staff, number of library holdings
services

Entering Academic skills level as measured by local, state, and na-
students tional instruments, types of developmental or remedial
assistance desired, major area of study, degree level
sought, level of previous academic achievement

Currently Same as for entering students without degree level sought
enrolled and level of previous academic achievement and adding aver-
students age GPA and average course load of students in program,
number of hours of developmental/remedial work, performance
on standardized state tests

Faculty/ For both full-time and part-time: ratio of part-time to
staff full-time; level of preparation (degrees held, certifica-
tion/rank); level of experience (years taught/service,
length of service in program); level of productivity (num-
ber of student contact hours per term, number of students
per term). For full-time: level of productivity (number
of course hours taught per term, number of FTE per term,
productivity ratio); level of instructional skill (level
of use of alternative instructional methods); rate of
turnover; level of participation in program decision mak-
ing; level of compensation; ratio of faculty to student
support staff; ratio of faculty/staff to clerical staff

External/ Ratings by accreditation agencies and certification boards,
internal number/types of changes as a result of program evaluation
evaluations








Table 16-Continued

Information
Type Program Characteristics Relating to Information Type

Quanti- Number or percent: completing; taking and passing state
tative board or licensure exams; by type of license, certificate,
outputs or registration received; holding jobs for which trained;
by average time for completion; performance on standard-
ized state and national tests and on local and state in-
struments; by major area of study; withdrawing

Ratings Of program completers by employers; of job satisfaction by
program completers; of a program's curriculum, facilities/
equipment, instructional strategies, staff, and administra-
tion by various types of raters; of support services and
student services by various types of raters


classified in the Occupational Program Area was similar to the distrib-

ution for all respondents reported in Table 4. For the program charac-

teristics in the upper quartile of mean usefulness-ratings by respon-

dents classified in the Occupational Program Area, the means of the

usefulness-ratings ranged from 1.21 to 1.99 (Appendix E).

Table 17

Distribution by Category of Program Characteristics in the Upper Quar-
tile of Mean Usefulness-Ratings by Respondents Classified in the Occu-
pational Program Area


Category Number of Percentage of Upper
Characteristics Quartile Characteristics

I. Program Characteristics 24 22.0
Relating to Students

II. Program Characteristics 28 25.7
Relating to Faculty/Staff

III. Program Characteristics 23 21.1
Relating to Costs/Resources

IV. Program Characteristics 34 31.2
Relating to General
Information

TOTAL 109 100.0









The program characteristics in the upper quartile of mean useful-

ness-ratings by respondents classified in the Occupational Program

Area are presented with ranks in the next four tables. Those related

to students are reported in Table 18, those related to faculty/staff

in Table 19, those related to costs/resources in Table 20, and those

related to general information in Table 21.

The top ranked program characteristics by mean usefulness-ratings

related to students for respondents classified in the Occupational Pro-

gram Area (Table 18) were similar to those for respondents classified
Table 18

Program Characteristics Relating to Students (Questionnaire Category I)
in the Upper Quartile of Mean Usefulness-Ratings by Respondents Classi-
fied in the Occupational Program Area With Ranks


Ranks Program Characteristics

5 Number of students enrolling in a program
Number or percent of entering students:
31 by major area of study
64.5 by type of handicap
72 by career decision status
94 by degree level sought
102.5 by level of awareness of college's programs, services, etc.
Number or percent of currently enrolled students:
53.5 by major area of study
75.5 by type of handicap
84 by cumulative GPA categories for program-related coursework
86 by number of hours with failing grade
95.5 by career decision status
104.5 by degree level sought
106.5 by average GPA of students in program
106.5 by percent of total college FTE in program
3 Number or percent of students completing a program
Number or percent of program completers:
8 passing state board or licensure exams
20.5 holding jobs for which trained
28 taking state board or licensure exams
38 by type of license, certificate, or registration received
78 by salary categories
79 by average time taken for completion of a program
95.5 by major area of study
99.5 by employment status
13 Number or percent of students withdrawing from a program








in the Advanced and Professional Program Area (Table 12): number or

percent of students completing a program (rank 3), number of students

enrolling in a program (rank 5), and number or percent of students

withdrawing from a program (rank 13). The exception was the program

characteristic "number or percent of program completers passing state

board or licensure exams" which had a mean usefulness-rating rank of 8

for respondents classified in the Occupational Program Area and a rank

of 55.5 for respondents in the Advanced and Professional Program Area

(Table 12). There were no program characteristics which related to the

measurement of academic skills in the upper quartile of mean usefulness-

ratings for respondents classified in the Occupational Program Area.

This contrasted sharply with the number of such measures identified by

respondents classified in the Advanced and Professional Program Area

(Table 12). Only two program characteristics which related to entering

students were in the upper quartile mean usefulness-ratings by both Oc-

cupational and Advanced and Professional respondents: number or percent

of entering students by major area of study and by degree level sought.

Instead of academic measures for testing of entering students, which

appeared in the upper quartile mean usefulness-ratings by respondents

classified in the Advanced and Professional Program Area (Table 12),

respondents classified in the Occupational Program Area rated as most

highly useful the number or percent of entering students by type of

handicap, by career decision status, and by level of awareness of the

college's programs and services. Four of the five program characteris-

tics rated by respondents classified in the Occupational Program Area

as highly useful in relation to entering students were rated by the

same respondents as highly useful in relation to currently enrolled










students: number or percent of currently enrolled students by major area

of study, by type of handicap, by career decision status, and by degree

level sought. Only three of the 10 characteristics which related to cur-

rently enrolled students rated as highly useful by respondents classified

in the Advanced and Professional Program Area were similarly rated by re-

spondents classified in the Occupational Program Area: number or percent

of currently enrolled students by major area of study, by average GPA of

students in program, and by percent of total college FTE in program. In

addition to these characteristics which related to currently enrolled

students, respondents classified in the Occupational Program Area rated

as highly useful in Occupational program quality-evaluation decision mak-

ing the number or percent of currently enrolled students by cumulative

GPA categories for program-related coursework and by number of hours

with failing grade. Although the rank-order differed, there was more

agreement among respondents classified in the Occupational Program Area

and the Advanced and Professional Program Area regarding program charac-

teristics related to program completers. Six of the 10 program charac-

teristics in the upper quartile of mean usefulness-ratings by respon-

dents classified in the Advanced and Professional Program Area which re-

lated to program completers were similarly identified by respondents

classified in the Occupational Program Area. The differences were that

respondents classified in the Occupational Program Area rated number or

percent of program completers by salary categories and by employment

status as highly useful rather than skills as measured by tests.

Whereas three of the top 10 program characteristics ranked by mean

usefulness-ratings by respondents classified in the Advanced and Profes-

sional Program Area related to faculty/staff (Table 13), none of the









top 10 program characteristics ranked by mean usefulness-ratings by re-

spondents classified in the Occupational Program Area related to faculty/

staff (Table 19). However, although the rank-order differed, all of the

program characteristics which related to faculty/staff in the upper quar-

tile of mean usefulness-ratings by respondents classified in the Advanced

and Professional Program Area were similarly rated by respondents classi-

fied in the Occupational Program Area with one exception: number or

Table 19

Program Characteristics Relating to Faculty/Staff (Questionnaire Category
II) in the Upper Quartile of Mean Usefulness-Ratings by Respondents Clas-
sified in the Occupational Program Area With Ranks


Ranks Program Characteristics

40 Ratio of part-time to full-time faculty/staff
73 Ratio of faculty/staff to clerical staff
89 Ratio of faculty to student support staff
108.5 Ratio of faculty/staff to administrative personnel
Number or percent of full-time faculty/staff:
15 by number of students per term
18 by average class size
20.5 by years taught/service
23 by number of student contact hours per term
26 by degrees held
29 by number of course hours taught per term
29 by level of participation in program decision-making
30 by length of service in program
37 by productivity ratio
41 by rate of faculty/staff turnover
47 by level of use of alternative instructional methods
86 by number of FTE per term
92 by level of compensation
97 by certification/rank
Number or percent of part-time faculty/staff:
35 by number of students per term
35 by average class size
47 by number of student contact hours per term
51 by productivity ratio
62 by number of course hours taught per term
64.5 by degrees held
67 by level of use of alternative instructional methods
80 by length of service in a program
88 by years taught/service
108.5 by level of compensation








There were, however, some large differences between the Occupational and Advanced and Professional Program Areas in the mean usefulness-rating rank order of particular program characteristics. The program characteristic "ratio of part-time to full-time faculty/staff" had a mean usefulness-rating rank of 2 for respondents classified in the Advanced and Professional Program Area but a rank of 40 for respondents classified in the Occupational Program Area. The characteristic "number or percent of full-time faculty/staff by certification/rank" ranked 8 for the Advanced and Professional Program Area but 97 for the Occupational Program Area, and "number or percent of part-time faculty/staff by degrees held" ranked 17 for the Advanced and Professional Program Area but 64.5 for the Occupational Program Area.

In addition to these program characteristics, respondents classified in the Occupational Program Area rated the ratio of faculty/staff to administrative personnel as highly useful, along with three additional program characteristics related to part-time faculty/staff: productivity ratio, number of course hours taught per term, and level of use of alternative instructional methods. Although the rank order differed slightly, all of the program characteristics rated as most useful by respondents classified in the Occupational Program Area which related to full-time faculty/staff also appeared in the list for part-time faculty/staff, with four exceptions: level of participation in program decision making, rate of faculty/staff turnover, number of FTE per term, and certification/rank.
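The fractional ranks shown in Table 19 (and in Table 20 below), such as 20.5, 53.5, and 108.5, appear to result from ties in the mean usefulness-ratings, with tied characteristics sharing the average of the rank positions they occupy. The following sketch is an illustration only, not the computation actually used in this study; it shows, in Python, one way mean ratings might be converted to tie-averaged ranks and an upper-quartile cutoff then applied. The characteristic names and rating values are hypothetical placeholders; the actual questionnaire ranked well over one hundred characteristics, as the ranks in Tables 19 and 20 indicate.

# Illustrative sketch only: converts hypothetical mean usefulness-ratings
# into ranks (highest mean = rank 1), averaging the ranks of tied items,
# then selects the characteristics falling in the upper quartile of ranks.

def average_ranks(mean_ratings):
    """Rank items from highest to lowest mean rating; tied items share
    the average of the rank positions they occupy (e.g., 108.5)."""
    ordered = sorted(mean_ratings.items(), key=lambda kv: kv[1], reverse=True)
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        # Extend j across the run of items tied at the same mean rating.
        while j + 1 < len(ordered) and ordered[j + 1][1] == ordered[i][1]:
            j += 1
        shared = (i + 1 + j + 1) / 2  # average of positions i+1 through j+1
        for name, _ in ordered[i:j + 1]:
            ranks[name] = shared
        i = j + 1
    return ranks

# Hypothetical ratings for illustration only.
ratings = {
    "cost of materials per total program": 3.7,
    "total cost per total program": 3.6,
    "ratio of part-time to full-time faculty/staff": 3.4,
    "number of student contact hours per term": 3.4,
    "cost of space utilized per total program": 2.3,
    "number of library holdings per total program": 2.1,
}
ranks = average_ranks(ratings)
cutoff = 0.25 * len(ratings)  # upper quartile: smallest 25 percent of rank values
upper_quartile = sorted(name for name, r in ranks.items() if r <= cutoff)
for name in upper_quartile:
    print(name, ranks[name])

In this invented example the two characteristics tied at 3.4 each receive the averaged rank 3.5, which is how fractional ranks of the kind reported in the tables can arise.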

Program characteristics related to costs/resources also received high mean usefulness-ratings from respondents classified in the Occupational Program Area (Table 20). Three of the top 10 program characteristics ranked by mean usefulness-ratings by respondents classified in this program area related to costs/resources: cost of materials per total program (rank 6), total cost per total program (rank 7), and cost of instructional personnel per total program (rank 9).
Table 20

Program Characteristics Relating to Costs/Resources (Questionnaire Cate-
gory III) in the Upper Quartile of Mean Usefulness-Ratings by Respondents
Classified in the Occupational Program Area With Ranks


Ranks Program Characteristics

Total cost:
7 per total program
27 per program FTE
42 per program unduplicated headcount
Cost of administration:
39 per total program
92 per program unduplicated headcount
99.5 per program FTE
Cost of instructional personnel:
9 per total program
33 per program FTE
60.5 per program unduplicated headcount
53.5 Cost of support services per total program
Cost of materials:
6 per total program
53.5 per program unduplicated headcount
56.5 per program FTE
Cost of equipment maintenance:
44 per total program
102.5 per program FTE
92 Cost of space utilized per total program
49.5 Number of support staff per total program
Equipment utilization:
45 per total program
98 per program unduplicated headcount
104.5 per program FTE
77 Space utilization per total program
101 Number of library holdings per total program
82 Cost of program evaluation per total program









Two of these, total cost per total program and cost of instructional personnel per total program, held similar mean usefulness-rating ranks for respondents classified in the Advanced and Professional Program Area (Table 14), but cost of materials per total program ranked 19 for that group. Respondents classified in the Advanced and Professional Program Area rated eight categories of program characteristics related to costs/resources as highly useful when reported per total program, per program FTE, and per program unduplicated headcount (Table 14). Respondents classified in the Occupational Program Area rated only five such categories as highly useful: total cost, cost of administration, cost of instructional personnel, cost of materials, and equipment utilization.
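The three reporting bases named above differ only in the denominator applied to the same cost figure: the cost may be reported for the program as a whole, divided by program FTE, or divided by program unduplicated headcount. The brief sketch below, again in Python and using invented figures rather than data from this study, illustrates why the per-FTE and per-headcount amounts differ; because FTE is typically smaller than unduplicated headcount, per-FTE amounts run higher for the same total cost.

# Illustrative sketch only: the three cost-reporting bases for a
# hypothetical program. All values are invented for illustration.
total_cost_of_materials = 18_000.00   # hypothetical annual materials cost
program_fte = 75.0                    # hypothetical full-time-equivalent enrollment
unduplicated_headcount = 130          # hypothetical count of distinct students served

per_total_program = total_cost_of_materials               # 18,000.00
per_program_fte = total_cost_of_materials / program_fte   # 240.00
per_headcount = total_cost_of_materials / unduplicated_headcount  # about 138.46

print(f"per total program: {per_total_program:,.2f}")
print(f"per program FTE: {per_program_fte:,.2f}")
print(f"per unduplicated headcount: {per_headcount:,.2f}")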

For respondents classified in the Occupational Program Area, the order of emphasis for usefulness in program quality-evaluation decision making per total program was cost of materials, total cost, cost of instructional personnel, cost of administration, cost of equipment maintenance, equipment utilization, number of support staff, cost of support services, space utilization, cost of program evaluation, cost of space utilized, and number of library holdings (Table 20). This order of emphasis differed from that of respondents classified in the Advanced and Professional Program Area (Table 14), and the order also differed when the characteristics were reported per program FTE and per program unduplicated headcount. Cost of program evaluation was the single program characteristic related to costs/resources which appeared in the upper quartile of mean usefulness-ratings by respondents classified in the Occupational Program



