Citation
A Multiple-factor analysis to identify underlying dimensions of multiple indicators of quality

Material Information

Title:
A Multiple-factor analysis to identify underlying dimensions of multiple indicators of quality rated as useful in making program quality-evaluation decisions by administrators in Florida's community colleges
Creator:
Steuart, Thomas Albert, 1938-
Publication Date:
Copyright Date:
1983
Language:
English
Physical Description:
x, 207 leaves : ill. ; 28 cm.

Subjects

Subjects / Keywords:
Academic communities ( jstor )
Axes of rotation ( jstor )
Colleges ( jstor )
Community based instruction ( jstor )
Community colleges ( jstor )
Educational evaluation ( jstor )
Factor analysis ( jstor )
Higher education ( jstor )
Ratings ( jstor )
Students ( jstor )
Community colleges -- Administration -- Florida ( lcsh )
Decision making ( lcsh )
Dissertations, Academic -- Educational Administration and Supervision -- UF
Educational Administration and Supervision thesis Ph. D
Educational accountability -- Florida ( lcsh )
City of Tallahassee ( local )
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1983.
Bibliography:
Bibliography: leaves 199-206.
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:
by Thomas Albert Steuart.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Thomas Albert Steuart. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
029402090 ( AlephBibNum )
10026397 ( OCLC )
ACB2145 ( NOTIS )

Full Text








A MULTIPLE-FACTOR ANALYSIS
TO IDENTIFY UNDERLYING DIMENSIONS OF MULTIPLE INDICATORS OF QUALITY
RATED AS USEFUL IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS
BY ADMINISTRATORS IN FLORIDA'S COMMUNITY COLLEGES









BY

THOMAS ALBERT STEUART


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL
FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY









UNIVERSITY OF FLORIDA

1983














ACKNOWLEDGEMENTS


During the past two years, many persons have assisted and encour-

aged me while I have been engaged in the research that has culminated

in this dissertation. Regretfully, only a few can be mentioned here.

I would like to thank Dr. John M. Nickens, my committee chairman, and

the many members of the Florida Community/Junior College Inter-Institu-

tional Research Council, who, through a research assistantship, supplied

most of my financial support. I would like to express my gratitude to

the other members of my committee, Dr. James L. Wattenbarger and Dr.

Robert S. Soar, whose patience with me has been unending. I owe a

great debt to C.B. "Bix" Rathburn, III, who, as a fellow research assis-

tant, provided me with constant feedback and invaluable emotional sup-

port. I wish to thank Dr. Wilson Guertin for his consultations regard-

ing the factor analysis procedures used in this study. Teresa Agrillo,

who typed and edited this dissertation, deserves more than I can give

her. Finally, I wish to acknowledge James D. Cook for his continuing

emotional and financial support, without which this dissertation would

never have been completed.














TABLE OF CONTENTS


Page

ACKNOWLEDGEMENTS ................................................... ii

LIST OF TABLES................................................vi

ABSTRACT......................................................... ix

CHAPTER

I INTRODUCTION............................................1

Rationale .............................................3

Theoretical Rationale.............................. 3
Operational Rationale...............................7

The Problem ........................................... 9

Need for the Study .................................... 10

Delimitations and Limitations.........................11

Definition of Terms .................................. 12

Organization of the Research Report...................13

II REVIEW OF RELATED LITERATURE.............................14

Educational Evaluation................................14

Toward a Definition of Educational Evaluation.......15
Contemporary Models of Educational Evaluation.......17
Decision-Oriented Model of Educational Evaluation...20

Quality Assessment in Higher Education................23

Graduate Education............................... 25
Undergraduate Education.............................31
Quantifiable Approaches to Quality.................36

Determining Underlying Dimensions: Factor Analysis....40








TABLE OF CONTENTS (continued)

Page

Applicability of Factor Analysis...................40
Definition of Factor Analysis......................43
Steps in Factor Analysis...........................44

III METHODOLOGY...........................................50

Description of Data Used...............................50

Analysis of the Data..................................... 53

Research Question One..............................53
Research Question Two..............................56

IV RESULTS AND DISCUSSION..................................58

Factor Analysis Results..................................59

Interpretation of the Factors..........................70

Factor Score Comparisons..............................82

Program Areas ................................... 82
Administrative Areas................................90

Summary ............................................... 97

V SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FURTHER
STUDY................................................. 99

Summary............................................... 99

Conclusions............................................102

Recommendations for Further Study......................109

APPENDICES

A CLASSIFICATIONS OF RESPONDENTS USED IN DATA ANALYSIS......111

B DESCRIPTION OF IRC PROJECT METHODS AND PROCEDURES.........113

C PROGRAM QUALITY INDICATORS PROJECT QUESTIONNAIRE.........123

D POSITION CODES USED IN THE CATEGORIZATION OF RESPONDENTS
BY ADMINISTRATIVE AREA AND PROGRAM AREA WITH FREQUENCIES..136

E MEAN RATINGS FOR PROGRAM CHARACTERISTICS FOR N=450 AND
N=315................................................139









TABLE OF CONTENTS (continued)


Page
F CORRELATION COEFFICIENTS FOR INTERCORRELATIONS OF PRO-
GRAM CHARACTERISTICS FOR N=450 ...........................143

G CORRELATION COEFFICIENTS FOR INTERCORRELATIONS OF PRO-
GRAM CHARACTERISTICS FOR N=315 ...........................151

H PRINCIPAL AXES SOLUTION BASED UPON N=450 WITH FINAL
COMMUNALITY ESTIMATES AND EIGENVALUES....................159

I PRINCIPAL AXES SOLUTION BASED UPON N=315 WITH FINAL
COMMUNALITY ESTIMATES AND EIGENVALUES.....................165

J FACTOR STRUCTURES FOR THE THREE ROTATIONS OF THE PRINCI-
PAL AXES BASED UPON N=315................................171

K FACTOR STRUCTURES FOR THE THREE ROTATIONS OF THE PRINCI-
PAL AXES BASED UPON N=450 ................................ 181

L t STATISTICS FOR MEAN FACTOR SCORE COMPARISONS BETWEEN
PROGRAM AREAS BASED ON ASSUMPTION OF EQUAL VARIANCES......191

M t STATISTICS FOR MEAN FACTOR SCORE COMPARISONS BETWEEN
ADMINISTRATIVE AREAS BASED ON ASSUMPTION OF EQUAL
VARIANCES...............................................194

REFERENCES......................................................... 199

BIOGRAPHICAL SKETCH...............................................207














LIST OF TABLES


Table Page

1 Variance Accounted for by Successive Principal Axes for
N=315.................................................... 60

2 Program Characteristics With Factor Loadings of .50 or
Greater in the Three Rotations of the Principal Axes
Solution Based Upon N=315 ........... ........ .............. 6

3 Variance Accounted for by Successive Principal Axes for
N=450....................................................65

4 Program Characteristics With Factor Loadings of .50 or
Greater in the Three Rotations of the Principal Axes
Solution Based Upon N=450.................................66

5 Intercorrelations of the Factors for the 10-Factor Ro-
tation of the Principal Axes Solutions for N=315 and
N=450 ..................................................... 69

6 Coefficients of Congruence Between the Comparable Fac-
tors for the 10-Factor Structures for N=315 and N=450........ 70

7 Program Characteristics With .50 or Greater Loadings on
Factor 1 ............................................. 71

8 Program Characteristics With .50 or Greater Loadings on
Factor 2.................................................. 72

9 Program Characteristics With .50 or Greater Loadings on
Factor 3.................................................. 73

10 Program Characteristics With .50 or Greater Loadings on
Factor 4.................................................. 74

11 Program Characteristics With .50 or Greater Loadings on
Factor 5.................................................. 75

12 Program Characteristics With .50 or Greater Loadings on
Factor 6.................................................. 76

13 Program Characteristics With .50 or Greater Loadings on
Factor 7.................................................. 77








LIST OF TABLES (continued)


Table Page

14 Program Characteristics With .50 or Greater Loadings on
Factor 8................................................... 78

15 Program Characteristics With .50 or Greater Loadings on
Factor 9.................................................... 79

16 Number of Respondents Per Program Area and Corresponding
Percentages of All Respondents (N=450).......................83

17 Mean Factor Scores and Standard Deviations for Respondents
Grouped by Program Area.................................... 84

18 Number of Respondents Per Administrative Area and Corre-
sponding Percentages of All Respondents (N=450).............91

19 Mean Factor Scores and Standard Deviations for Respondents
Grouped by Administrative Area..............................93














LIST OF FIGURES


Figure Page

1 Sample Format for Program Quality-Evaluation Information
Report.....................................................106

2 Sample Format for Program Quality-Evaluation Information
Profile....................................................108













Abstract of Dissertation Presented
to the Graduate Council of the University of Florida
in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy


A MULTIPLE-FACTOR ANALYSIS
TO IDENTIFY UNDERLYING DIMENSIONS OF MULTIPLE INDICATORS OF QUALITY
RATED AS USEFUL IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS
BY ADMINISTRATORS IN FLORIDA'S COMMUNITY COLLEGES

BY


Thomas Albert Steuart


April 1983


Chairman: John M. Nickens
Major Department: Educational Administration
and Supervision

The purpose of this study was the identification of any underlying

dimensions within multiple quality indicators rated by administrators in

Florida public community/junior colleges as highly useful in making pro-

gram quality-evaluation decisions. It was theorized that utilization of

such dimensions to organize and provide information to administrators

should result in a format that they would find most useful since it

should reflect those aspects of their value systems relevant to the de-

fined decision situations.

Of 631 administrators identified to participate in the study, 450 re-

sponded by rating 454 items on a survey questionnaire for degree of use-

fulness in program quality-evaluation decision making. The correlation

matrix of the intercorrelations of the mean responses of the 108 most








highly rated items was factor analyzed using the iterated principal

axes method and an orthogonal rotation to the varimax criterion with an

oblique rotation to determine intercorrelation of factors. This analy-

sis resulted in the identification of a factor structure accounting for

80.5% of the common variance that contained nine interpretable factors.
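
As an illustrative aside, the orthogonal varimax rotation named above can
be sketched in present-day Python with numpy. The original analysis would
have relied on the statistical software of the period; the function below,
whose name and parameters are assumptions of this sketch, only approximates
the varimax criterion and is not the routine used in the study.

import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    # Orthogonally rotate a loading matrix (items x factors) toward the
    # varimax criterion: make the squared loadings within each factor as
    # unequal as possible so each item loads highly on few factors.
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        target = rotated ** 3 - rotated @ np.diag(np.sum(rotated ** 2, axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)   # Kaiser's update step
        rotation = u @ vt
        new_criterion = np.sum(s)
        if new_criterion < criterion * (1.0 + tol):     # converged
            break
        criterion = new_criterion
    return loadings @ rotation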

The nine dimensions involved information relating to: (1) fiscal,

physical, and human resources; (2) student ratings of support services;

(3) instructional productivity of faculty; (4) assessments of any physi-

cal or cognitive needs of students relevant to their performance in their

selected programs; (5) ratings of selected aspects of programs by stu-

dents; (6) indicators of the quantitative output of a program; (7) se-

lected attributes of full-time and part-time faculty; (8) ratings of se-

lected aspects of programs by faculty; and (9) indicators of the respon-

siveness of a program to certification and accreditation agencies, the

local community, the students, and the state.

The recommendation was made that further research in program quality-

evaluation involve more direct investigation of the attitudes of the de-

cision maker involved and the development of instruments that will facil-

itate the identification of attitudinal dimensions relevant to the de-

fined decision situation.















CHAPTER I
INTRODUCTION


During the 1970s when public confidence in higher education waned

and financial resources became less abundant, there was an emphasis on

accountability. This resulted in a rapid increase of evaluation activi-

ties related to higher education. A major focus of these activities was

the maintenance or improvement of the quality of programs offered by

higher education institutions within the context of a broadening of stu-

dent access in a time of fiscal constraint (Craven, 1980, p. vii).

The conditions of fiscal austerity and the demands for accountability

within the context of broadening student access to higher education have

continued into the 1980s (Craven, 1980). There has been an increasing

concern for maintaining or improving the quality of programs offered by

higher education institutions. The concern is shared by persons within

higher education institutions, state level coordinating or governing

boards, other state executives, and state legislators (Bowen, 1974;

Craven, 1980; "Legislators stress quality improvement," 1980). As Finn

(1980) correctly perceived, quality has emerged as the premier concern

in higher education for the 1980s.

Although it is the premier concern, quality in higher education is a

concept that can mean all things to all people (King, 1981). If used

too loosely with little or no definition, the concept provides little

guidance. If defined too strictly, the concept is of limited use for a

diverse system of higher education (Finn, 1980, p. 2).









Traditionally, the quality of a program or institution in higher ed-

ucation has been determined by subjective evaluations of experts. One

criticism of this approach has been that 20 to 30 higher education in-

stitutions have been identified consistently as institutions of quality,

with all other institutions of higher education virtually ignored

(Lawrence & Green, 1980, p. 1). Another criticism has been that the

bases of these evaluations have been related to the missions and goals

of the institutions identified and institutions with other missions and

goals, such as community colleges, have been excluded (Bowen, 1974;

Fotheringham, 1978). Usually researchers in higher education have tried

to avoid constitutively defining quality, but have operationally defined

it through their choices of research designs and evaluative criteria

(Astin & Henson, 1977; Blackburn & Lingenfelter, 1973; Cartter, 1966;

Krause, 1970).

However quality is defined, the determination of educational quality

involves decision making by program administrators, which requires the

use of some information about the program being evaluated. This is con-

sonant with the theory of evaluation developed by Stufflebeam, Foley,

Gephart, Guba, Hammond, Merriman, and Provus (1971). They defined eval-

uation as "the process of delineating, obtaining, and providing useful

information for judging decision alternatives" (p. 4). Thus, the making

of quality evaluations about educational programs may be described as a

process of delineating what information about programs is useful to ad-

ministrators making quality-evaluation decisions, obtaining that infor-

mation, and providing it in a format useful to those administrators.

This definition of program quality evaluation formed the basis of the

rationale for this study.









Rationale

Theoretical Rationale

Delineation, the first operational step in program quality-evaluation

decision making, involves "the identification of the most useful infor-

mation" (Stufflebeam et al., 1971, p. 41). Although Stufflebeam et al.

did not specify a methodology for accomplishing this step, they did spec-

ify that it could be accomplished successfully "by the evaluator only in

interaction with his client [the decision maker]" (p. 41). The second

step, obtaining, was described as "the more technical aspect of evalua-

tion" (p. 42) and consists of "collecting, organizing, and analyzing

[the data delineated as most useful]" (p. 42). The providing phase of

evaluation involves reporting the delineated and obtained data to the

decision maker "in ways that he finds credible and helpful" (p. 17).

According to Stufflebeam et al. (1971), although there existed much

knowledge and many methodologies for collecting data, "the interface role

of delineating information needs with the decision makers and the simi-

lar interface role of providing information to audiences are not so well

developed in theory or practice" (pp. 139-140). Furthermore, they stated

that "a most glaring and conspicuous omission in this [their] book is the

failure to provide operational guidance for the evaluator as he plays

this interface role [of providing information]" (p. 336). It was the

theory and methodology of the providing phase of evaluation as defined

by Stufflebeam et al. (1971) with which this study was concerned.

Craven (1980) indicated that evaluation processes for the 1980s must

be capable of "providing the desired information in an appropriate for-

mat" (p. 111). How might an evaluator determine an appropriate format

for providing the desired information to decision makers when multiple









items of information have been identified as highly useful in a particu-

lar decision situation? A theoretical basis for resolving this problem

was suggested, but not developed, by Stufflebeam et al. (1971) in their

discussion of the relationship between the items of information identi-

fied as most useful in a defined decision situation and the values of

the decision maker in interaction with whom the items have been deline-

ated. They stated that it is the value system of the decision maker,

especially those aspects of his value system related to a particular de-

cision situation, that determines whether an item of information is rele-

vant to that decision situation (pp. 108-109). The items of information

or variables identified as most useful in a defined decision situation

are not themselves the criteria used to assess the decision situation,

but they are the variables to which the decision maker applies his cri-

teria. On the one hand, the criteria are statements of the means of

measuring the variables and, on the other hand, they are "yardsticks for

values" (p. 109). Values were defined as "predefined states of certain

variables" (p. 108). Presumably, when translated into a means of assess-

ment, "predefined states" equal "criteria" and "certain variables" equal

the information identified as most useful in the defined decision situ-

ation in interaction with the relevant decision maker.

For the purpose of this study, the important point was that the

items of information (variables) identified as most useful in interaction

with the relevant decision maker reflect those aspects of his value sys-

tem that are related to the defined decision situation. If this is true,

as theorized by Stufflebeam et al. (1971), it forms a basis for an

approach that an evaluator may use in determining how to provide multi-

ple items of information in a format that a decision maker should find

"credible and helpful" (p. 17).









The problem is similar to that encountered by psychologists in

attempting to describe human personality (Cattell, 1950). With hundreds

of terms defining traits by which persons could be described, there was

a search for "dimensions of personality" (p. 26) that would facilitate

the description of personality (pp. 26-27). Cattell theorized that the

multiple descriptors of personality, which he labeled "surface traits"

(pp. 21-22), could be accounted for by considerably fewer dimensions,

which he labeled "source traits" (p. 27). Additionally, he theorized

that the source traits were "the real structural influences underlying

personality" (p. 27).

Similarly, it was theorized in this study that for a set of multiple

items of information identified in the delineation phase of an evaluation

process, based on the theory of evaluation developed by Stufflebeam et

al. (1971), there are considerably fewer underlying dimensions that may

be identified and used in developing guidelines for providing information

in a format that decision makers should find useful in a defined decision

situation. If it is true that the items of information identified in the

delineation phase reflect those aspects of a decision maker's value sys-

tem relevant to a defined decision situation, then the underlying dimen-

sions of those items should reflect the dimensions of a decision maker's

value system relevant to that decision situation. If the latter is true,

then utilizing those underlying dimensions to organize those items should

result in providing information in a format that a decision maker should

find credible and helpful, since that format should approximate closely

the dimensions of those aspects of his value system being used in the

decision-making process in the defined decision situation.









This theory may be extended to a decision situation where multiple

decision makers are involved. The identified items of information in

such a decision situation would reflect a hypothetical value system of

"aggregate values" (Stufflebeam et al., 1971, p. 113) of the relevant

decision makers. In such a decision situation, the underlying dimensions

of the identified items of information should reflect the dimensions of

the hypothetical aggregate value system. They should reflect the dimen-

sions of the relevant aspects of an individual decision maker's value

system only to the degree that these dimensions are reflected in the

aggregate value system. Therefore, it may be expected that utilizing

those underlying dimensions to organize the identified items should re-

sult in providing information to the decision makers in a format more or

less credible and helpful to an individual decision maker to the degree

that relevant dimensions of his value system are reflected in the aggre-

gate value system.

Based upon this theory, an appropriate methodology for determining

the underlying dimensions of a set of multiple items of information iden-

tified as most useful in a defined decision situation would be the same

as that used by Cattell (1950) for identifying the underlying dimensions

of personality: the multi-variate technique of factor analysis. For a

set of variables that individuals can rate or in some manner assess, the

technique of multi-factor analysis can be used to determine the dimen-

sions of any underlying pattern of the ratings or other measurements of

that set of variables. For example, multiple items of information iden-

tified as useful in a defined decision situation may be rated by the

relevant decision makers for varying degrees of usefulness. Subsequently,

these ratings can be factor analyzed to identify underlying dimensions of









the degree of usefulness of the items. The results of such an analysis

should provide the evaluator with some guidelines for organizing the

items to increase the probability that the decision makers will find the

format of the provided information credible and helpful, i.e., useful in

the decision-making process in the defined decision situation. This ex-

tension of the theory of evaluation proposed by Stufflebeam et al. (1971)

and the suggested methodology should supply evaluators the needed guid-

ance in their role of providing information in a format useful to deci-

sion makers in a defined decision situation.
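
As an illustration of the suggested methodology, the following Python
sketch (using numpy, a present-day convenience rather than the tooling
available for the original study) applies the principal axes method with
iterated communality estimates to a hypothetical matrix of usefulness
ratings. The function and variable names and the simulated data are
assumptions made purely for illustration; the .50 cutoff in the usage
lines corresponds to the loading threshold reported in the study's tables.

import numpy as np

def principal_axes(ratings, n_factors, n_iter=50, tol=1e-4):
    # ratings: respondents x items matrix of usefulness ratings.
    # Returns an unrotated loading matrix (items x n_factors).
    R = np.corrcoef(ratings, rowvar=False)            # item intercorrelations
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))        # initial communalities (SMCs)
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)               # communalities on the diagonal
        eigvals, eigvecs = np.linalg.eigh(R_reduced)
        order = np.argsort(eigvals)[::-1][:n_factors]
        loadings = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0.0, None))
        h2_new = np.sum(loadings ** 2, axis=1)        # re-estimated communalities
        if np.max(np.abs(h2_new - h2)) < tol:         # stop when estimates stabilize
            h2 = h2_new
            break
        h2 = h2_new
    return loadings

# Hypothetical usage: 450 respondents rating 108 program characteristics.
ratings = np.random.default_rng(0).normal(size=(450, 108))
loadings = principal_axes(ratings, n_factors=10)
salient = np.abs(loadings) >= 0.50                    # items that mark each dimension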

Operational Rationale

This study involved the application of this theory and methodology

to an appropriate set of items of information identified as useful in a

defined decision situation in order to identify the underlying dimensions

of those items and to utilize the identified dimensions to develop guide-

lines for organizing these items into a format that should be useful to

the relevant decision makers in the defined decision situation.

Since the quality of programs has been cited as the premier concern

in higher education for the 1980s, the decision situation selected for

this study was the making of quality-evaluation decisions about programs

in Florida's public community/junior colleges. In Florida, Governor

Graham's program for education contained a commitment to assure the cit-

izens of Florida the opportunity to obtain a quality education at every

level of public education including higher education. This commitment

was reflected in a resolution adopted by the Florida State Board of Edu-

cation in January, 1981, that included the following statement:

On a statewide average, educational achievement in the state of
Florida will equal that of the upper quartile of states within
five years, as indicated by commonly accepted criteria of attain-
ment. (State Board of Education, 1981)








The Division of Community Colleges in Florida is under a mandate from the

State Department of Education to identify "certain indicators of quality

which can be used system-wide to give evidence of quality improvement"

(Division of Community Colleges, 1982, p. 1).

The members of the Florida Community/Junior College Inter-Institu-

tional Research Council (IRC), a research consortium of Florida public

community/junior colleges, conducted a project that addressed the problem

of identifying indicators of quality useful in program quality-evaluation

decision-making in Florida public community/junior colleges (Florida Com-

munity/Junior College Inter-Institutional Research Council, 1981). This

project was based upon the theory of evaluation developed by Stufflebeam

et al. (1971). In interaction with the relevant administrators, the

project identified more than 100 indicators of quality as highly useful

in making program quality-evaluation decisions. The indicators of qual-

ity identified were representative of many of those identified in other

studies. A large number of administrators (450 respondents) were in-

volved in this project, representing almost all of the public community/

junior colleges in Florida. Although multiple indicators of quality were

identified as highly useful, there was no attempt in this project to iden-

tify any underlying dimensions of these multiple indicators to utilize in

developing guidelines for providing the desired information to the rele-

vant administrators in a useful format.

All of the aspects of the IRC project described previously supported

the use of the data from that project to test the theory that for a set

of multiple items of information identified in the delineation phase of

an evaluation process, there are considerably fewer underlying dimensions

that may be identified and used in developing guidelines for providing








information in a format that decision makers should find useful in a de-

fined decision situation. Also, because that project found considerable

variability in the information rated as highly useful by respondents

classified in various program and administrative areas, there was an op-

portunity to investigate whether there were any significant differences

between these classifications within any identified underlying dimension

of the multiple indicators of quality.
The Problem

Based on the theory of evaluation developed by Stufflebeam et al.

(1971) and extended in this study, it was expected that multiple items

of information identified by the relevant decision makers as useful in a

defined decision situation would contain underlying dimensions that could

be identified through the use of the technique of factor analysis.

Specifically, this study proposed:

1. To determine any underlying dimensions of the multiple items of

information rated as highly useful in program quality-evaluation

decision making by administrators involved in such decision mak-

ing in Florida public community/junior colleges.

2. To determine if there were any significant differences in the

degree of emphasis within any identified underlying dimension

between the administrators classified within the Advanced and

Professional, Occupational, Developmental, Community Instruc-

tional Services, and Student Services program areas.

3. To determine if there were any significant differences in the

degree of emphasis within any identified underlying dimension

between the administrators classified within the administrative

areas of General Administration, Academic Affairs, Business









Affairs, Student Affairs, Community Instructional Services, and

Presidents.

4. From the results of these analyses, to develop guidelines for or-

ganizing the identified multiple indicators of quality into a

format that should be useful to the administrators involved in

making program quality-evaluation decisions in Florida public

community/junior colleges.
Need for the Study

There was a need to develop further that aspect of the theory of eval-

uation proposed by Stufflebeam et al. (1971) that related to an evalua-

tor's role of providing information (pp. 139-140, 336). In relation to

the developed theory, there was a need "to provide operational guidance

for the evaluator" (p. 336) in the role of providing information in a

format that a decision maker should find "credible and helpful" (p. 17).

Craven (1980) stated that to address effectively the higher education

issues of the 1980s, there was a need for evaluation processes to provide

the desired information in an appropriate format (p. 111). Since only

one study relating to quality evaluation in higher education was found

that used the technique of factor analysis to determine underlying dimen-

sions (Astin & Solmon, 1981), there appeared to be a need for studies to

demonstrate the methodology for determining guidelines for organizing the

considerable amount of information desired by administrators for evaluat-

ing program quality into formats useful in the decision-making process.

Also, due to the large amount of information identified as useful in pro-

gram quality-evaluation decision making by administrators in Florida pub-

lic community/junior colleges, there was a need to determine guidelines

for organizing that specific information into a format that should be









useful to the administrators involved in the quality-evaluation decision

process in Florida public community/junior colleges (Steuart & Rathburn,

1982, p. 185).

Delimitations and Limitations

This study was confined to administrators in Florida public commun-

ity/junior colleges who were classified by their institutions as execu-

tive, administrative, or managerial personnel under part three of the

"Personnel and Salary Report (SA-1)" as defined in the Community College

Management Information Systems Procedures Manual of the State of Florida

(Division of Community Colleges, 1980, pp. 10.1-10.2). Of the 631 admin-

istrators identified and surveyed, 450 responded for a response rate of

71.3% (Steuart & Rathburn, 1982, p. 45). Although a response rate of

this magnitude is generally considered acceptable, the respondents may

still have differed from the nonrespondents in ways that affected their

responses. Thus, the responses might

not be representative of the identified population. Since the study was

confined to administrators in community colleges, the results are gener-

alizable to administrators in other types of colleges only to the extent

that they share attitudes toward program quality evaluation similar to

the respondents in this study. The results are not generalizable to ad-

ministrators in community college systems in other states except to the

degree that they share attitudes toward program quality evaluation simi-

lar to respondents in this study.

The data used in this study were collected by means of a survey ques-

tionnaire. Although face validity was established for the questionnaire

through the use of a review panel, reliability of the questionnaire was

not established. Therefore, it is not known if similar results would be









obtained from the same respondents if they were surveyed again. The re-

sults can be taken only as descriptive of the opinions of the administra-

tors at the time the questionnaire was administered. Also, although the

questionnaire was designed to be comprehensive in relation to the descrip-

tive information it contained about programs offered by the community col-

leges, some information that might be related to quality-evaluation deci-

sion making might have been excluded.

The analytic technique of factor analysis used in this study has sev-

eral limitations associated with it. There are no hard and fast guide-

lines for determining the number of factors to rotate in attempting to

achieve a simple factor pattern. Another researcher might choose differ-

ent criteria and rotate a different number of factors and would, there-

fore, obtain different results. Also, factor analysis assumes a linear

relationship between the variables involved in the analysis. Any other

relationship would be inaccurately represented by a factor-analytic pat-

tern.
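
Two of the rules of thumb commonly consulted when choosing how many
factors to rotate are sketched below in Python with numpy. The sketch is
purely illustrative: the function name, the 80% variance target, and the
eigenvalue-greater-than-one cutoff are assumptions of this example, and,
as noted above, different criteria applied to the same correlation matrix
can lead a researcher to rotate a different number of factors and obtain
different results.

import numpy as np

def factor_count_guides(R, variance_target=0.80):
    # R: correlation matrix of the rated program characteristics.
    # Returns the eigenvalues and two common (non-definitive) retention guides.
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    kaiser = int(np.sum(eigvals > 1.0))                   # eigenvalues greater than one
    cumulative = np.cumsum(eigvals) / np.sum(eigvals)     # proportion of variance retained
    by_variance = int(np.searchsorted(cumulative, variance_target) + 1)
    return eigvals, kaiser, by_variance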

Definition of Terms

Administrative Areas. The basic divisions of responsibility for ad-

ministrators in a comprehensive community college in Florida including

General Administration, Academic Affairs, Business Affairs, Student

Affairs, Community Instructional Services, and Presidents. Each of

these areas is operationally defined in Appendix A.

Dimension. A cluster of program characteristics the ratings of which

by the respondents tend to vary in similar ways. Each factor identified

from the factor analysis in this study represents a dimension of the un-

derlying interrelationships of the ratings of the program characteristics.









Evaluation. The process of delineating, obtaining, and providing

useful information for decision making in a defined decision situation.

Program Areas. The five basic operational areas of a comprehensive

community college in Florida including the Advanced and Professional,

Occupational, Developmental, Community Instructional Services, and Stu-

dent Services areas (Division of Community Colleges, 1981, p. 6). Each

of these areas is operationally defined in Appendix A.

Program Characteristics. Any information relating to or describing

a program offered by a college.

Program Quality-Evaluation Decision Making. The evaluation process,

involving the use of relevant information, leading to a judgment by the

responsible administrators of the quality of a program.

Underlying Pattern. The interrelationships of the correlations of

the ratings by respondents among the program characteristics identified

as highly useful in quality-evaluation decision making.

Usefulness. The determination of the serviceability or utility of

a program characteristic in making judgments about the quality of a pro-

gram.

Organization of the Research Report

The chapters in the remainder of this report are organized as follows.

Chapter II presents a review of selected literature relevant to this

study. Chapter III describes the methodology used in this study. Chap-

ter IV presents the results of this study. Chapter V summarizes and dis-

cusses the results with conclusions and recommendations drawn from the

results.















CHAPTER II
REVIEW OF RELATED LITERATURE


Since the evaluation of the quality of programs or services offered

by higher education institutions occurs within the general framework of

educational evaluation, the first section of this chapter is a discussion

of concepts of educational evaluation. The decision-oriented approach to

educational evaluation is emphasized because it was the theoretical basis

of this study. The second section of this chapter reviews selected

attempts in higher education to address the issue of quality. The third

section of this chapter is a discussion of factor analysis related to

discovering underlying dimensions in multi-variate assessments.

Educational Evaluation

During the past decade, evaluation in education has become a topic

wide in scope. Many educators, however, have failed to recognize

that evaluation is a complex process requiring a broad perspective (Alkin,

1969). Pyatte (1970) emphasized the importance of evaluators in educa-

tion looking beyond the immediate problems and contemplating the intri-

cate meanings and legitimate functions embodied in evaluation theory.

The dynamics of evaluation compel attention from many perspectives.

This section of the literature review is presented in three parts. The

initial part introduces the concept of educational evaluation through a

discussion of various definitions of educational evaluation. The second

part provides a brief review of educational evaluation with emphasis on

contemporary models of educational evaluation. The final part discusses









the decision-oriented model of educational evaluation--the basis for

this study's approach to the quality issue in higher education.

Toward a Definition of Educational Evaluation

Many definitions of educational evaluation have been proposed stem-

ming from the fact that three different schools of thought regarding ed-

ucational evaluation have coexisted for more than 30 years (Worthen &

Sanders, 1973). Stufflebeam et al. (1971) provided an excellent review

of the three basic approaches to educational evaluation from which most

of the definitions have developed. The first approach was an early one

equating evaluation with measurement (p. 10). The second approach in-

volved the determination of the congruence between performance and objec-

tives, especially behavioral objectives (p. 11). The third approach was

the process commonly referred to as professional judgment (p. 13).

From these basic approaches, various definitions of educational eval-

uation have emerged. These definitions differ in level of abstraction

and often reflect the specific concerns of the persons who formulated

them. At a basic level, evaluation has been defined as "an assessment

of worth" (Popham, 1975, p. 8). Wolf (1979) found this definition need-

ing clarification regarding the meaning of the terms "assessment" and

"worth."

A more descriptive definition was offered by Cronbach (1963), who de-

fined evaluation as "the collection and use of information to make deci-

sions about an educational program" (p. 675). This definition was pro-

posed initially during the curriculum development era of the late fifties.

Cronbach's studies suggested various kinds of information that could be

examined within the evaluation framework and later analyzed and used in

decision making designed for course improvement (Wolf, 1979).









Doll (1970) defined educational evaluation as "a broad and continuous

effort to inquire into the effects of utilizing educational content and

process according to clearly defined goals" (p. 379). In terms of this

definition, educational evaluation must transcend the levels of simple

measurement techniques or the primary application of the evaluator's

values and beliefs. If evaluation is to be a vast and continuous effort,

it must depend on "a variety of instruments which are used according to

carefully ascribed purposes" (Doll, 1970, p. 380).

Beeby proposed an extended definition of evaluation as "the system-

atic collection and interpretation of evidence, leading, as a part of the

process, to a judgment of value with a view to action" (in Wolf, 1979, p.

117). Wolf (1979) developed the important elements of the definition.

First, the term systematic implied that the information needed would be

defined with precision and obtained in an organized fashion. The second

element, the interpretation of evidence, emphasized the role of critical

judgment in the evaluation process. Wolf stated that this element was

often neglected in evaluation activities. The third element of Beeby's

definition involved the judgment of value. This required the evaluator

to be responsible for making judgments from his evaluative work about the

worth of an educational endeavor. The last element, with a view to ac-

tion, introduced the notion that an evaluative undertaking should be de-

signed for the sake of future action (pp. 117-124).

Pyatte (1970) emphasized the importance of a rational plan element in

the definition of educational evaluation. He stated that "evaluation is

the deliberate act of gathering and processing information according to

some rational plan the purpose of which is to render, at some point in

time, a judgment about the worth of that on which the information is









gathered" (p. 306). According to Pyatte, six elements are included:

the agent, the object, the inputs, the plan, the time, and the product.

Bloom, Hastings, and Madaus (1971) defined educational evaluation as:

1. A method of acquiring and processing the evidence needed
to improve the student's learning and the teaching;
2. Including a great variety of evidence beyond the usual
final paper and pencil examination;
3. An aid in clarifying the significant goals and objectives
of education and as a process for determining the extent
to which students are developing in these desired ways;
4. A system of quality control in which it may be determined
at each step in the teaching-learning process whether the
process is effective or not, and if not, what changes must
be made to ensure its effectiveness before it is too late;
5. A tool in educational practice for ascertaining whether
alternative procedures are equally effective or not in
achieving a set of educational ends. (p. 8)

In recent years, the most popular definitions have viewed evaluation

as "a process of identifying and collecting information to assist deci-

sion makers in choosing among available decision alternatives" (Worthen

& Sanders, 1973, p. 20). Since this perspective of evaluation was the

one used in this study, an expanded discussion of it is presented in the

final part of this section of the literature review.

Contemporary Models of Educational Evaluation

With the increased call for accountability in educational institu-

tions, the body of literature on educational evaluation has expanded rap-

idly in recent years. Many models of educational evaluation have emerged.

There have been numerous attempts to categorize the array of models, the

most comprehensive of which were done by Anderson, Ball, and Murphy

(1975), Gardner (1977), Stufflebeam et al. (1971), and Worthen and

Sanders (1973). The more prominent educational evaluation models in-

cluded the measurement model, the congruence model, the professional

judgment model, the goal-free model, and the decision-oriented model

(Gardner, 1977).









The measurement model of evaluation, as described by Gardner (1977),

equated evaluation with measurement (p. 575). In this model, evaluation

is viewed as the science of instrument development and interpretation

(p. 576). The use of measurement instruments results in scores or other

indices which are mathematically and statistically manipulated so masses

of data can be handled and an individual's or a group's score can be com-

pared with established norms (Stufflebeam et al., 1971, pp. 10-11). The model

has been widely used and is illustrated by the use of SAT and GRE scores.

Gardner (1977) stated that the model was based on the assumptions that

the phenomena to be evaluated have significant measurable attributes and

that instruments can be designed which are capable of measuring these

attributes.

Perhaps no other model has received more attention in recent evalua-

tion literature, especially in its application to the classroom, than the

congruence model. The origin of this model is most closely associated

with the work of Tyler (1950). Tyler stated that educational objectives

were essentially defined in terms of expected changes in human behavior.

It followed that evaluation is the process for determining the degree to

which changes in behavior actually take place. Gardner (1977) described

this model as

the process of specifying or identifying goals, objectives or
standards of performance; identifying or developing tools to
measure performance; and comparing the measurement data col-
lected with the previously identified objectives or standards
to determine the degree of discrepancy or congruence which
exists. (p. 577)

Probably the most widely used but least discussed model of evaluation

is the professional judgment model (Stufflebeam et al., 1971, p. 3). In

this model, evaluation is professional judgment. Values or criteria that









form the basis of the judgment may or may not be explicitly stated.

Often a commonly shared value system is assumed (Gardner, 1977, p. 574).

Examples of the uses of this model include the judgments of visiting

teams of professionals in the accreditation process, the use of peer re-

view panels for evaluating various programs, and faculty committees pass-

ing judgments on promotion or tenure (Worthen & Sanders, 1973, pp. 126-

127).

A recent addition to the models of educational evaluation is the goal-

free model. Originally proposed by Scriven (1972, 1973), this model is

based on the argument that if the main objective of evaluation is to

assess the worth of outcomes, then no distinction should be made between

intended versus unintended outcomes and that an evaluation should be con-

ducted without reference to a program's goals or objectives (Gardner,

1977, p. 583). In this model, evaluation is not totally goal free, but

standards for comparison can be chosen from a wider range of possibili-

ties than those that might be prescribed by a program's objectives (p.

584). The final outcome of the evaluation "should be accurate, descrip-

tive, and interpretative information relative to the most important as-

pects of the actual performance, effects, and attainments of the program

being evaluated" (p. 585).

All of the previously discussed models are similar in that they in-

clude reference to the use of some information in making some judgment.

The models vary in the degree to which the role of information or the role

of judgment is emphasized. In the next model to be discussed, where eval-

uation is defined as "the process of delineating, obtaining, and provid-

ing useful information for judging decision alternatives" (Stufflebeam et

al., 1971, p. 4), the emphasis is on the role of information.









Decision-Oriented Model of Educational Evaluation

Stufflebeam and the Phi Delta Kappa National Study Committee have

been credited with the refinement of what Gardner (1977) referred to as

the decision-oriented model of educational evaluation. According to

this model, "evaluators collect information and communicate this infor-

mation to someone else" (Alkin & Fitz-Gibbon, 1975, p. 1). The process

by which this information is collected is systematic and deliberate, an

attempt to obtain an unbiased assessment upon which to base an evaluation

(Alkin & Fitz-Gibbon, 1975; Guba, 1975; Stufflebeam, 1969).

In this model, the results of evaluation are directed toward those

individuals who are "intimately connected with the program being evalu-

ated" (Alkin & Fitz-Gibbon, 1975, p. 1) or the administrative decision

makers (Gardner, 1977; Guba, 1975; Stufflebeam, 1969; Stufflebeam et al.,

1971). The model was designed to benefit decision makers. In this con-

text, the role of the evaluator is to collect and present summary infor-

mation to decision makers (Alkin & Fitz-Gibbon, 1975, p. 5). The evalu-

ators collect and present the information needed by someone else who de-

termines its worth. "Decision-facilitation evaluators view the final de-

termination of merit as the decision maker's province, not theirs"

(Popham, 1975, p. 25). In contrast, Alkin and Fitz-Gibbon (1975) sug-

gested that the information from a well-designed evaluation would pass

judgment, not a person (p. 5).

Stufflebeam (1969) viewed evaluation as the science of providing in-

formation for decision making. The assumption was made that the ultimate

goal of the decision-making process was educational improvement. Educa-

tional improvement implied changes resulting from choices selected by de-

cision makers from various alternatives. The process of decision making









or choosing among options is firmly rooted in the decision maker's and

the organization's value systems. In this framework, valid and reliable

information is necessary to facilitate the decision maker's judgment of

the degree to which various options measure up against a personal or or-

ganizational value system (Stufflebeam et al., 1971, p. 38).

Stufflebeam (1968) summarized the rationale for the model in the fol-

lowing statements:

1. The quality of programs depends upon the quality of de-
cisions in and about the program.
2. The quality of decisions depends upon the decision mak-
er's abilities to identify the alternatives which com-
prise decision situations and to make sound judgments
about these alternatives.
3. Making sound judgments requires timely access to valid
and reliable information pertaining to the alternatives.
4. The availability of such information requires system-
atic means to provide it.
5. The processes necessary for providing this information
for decision making collectively comprise the concept
of evaluation. (p. 6)

Alkin (1969) expressed a similar view of evaluation. He stated that

the steps in the process of evaluation included determining the areas of

concern for possible decisions, determining the appropriate data, col-

lecting and analyzing the data, and reporting the summary information in

a form useful for the decision makers. These steps were condensed and

described by Stufflebeam et al. (1971) in their definition of educational

evaluation as "the (process) of (delineating), (obtaining), and (provid-

ing) (useful) (information) for (judging) (decision alternatives)" (p. 40).

Each of the eight elements, set off by parentheses in the definition, has

significant implications for the process and techniques of evaluation.

These elements of evaluation were defined as follows:

1. Process. A particular and continuing activity sub-
suming many methods and involving a number of steps
and operations.









2. Decision alternatives. Two or more different actions that
might be taken in response to some situation requiring
altered action.
3. Information. Descriptive or interpretive data about enti-
ties (tangible or intangible) and their relationships, in
terms of some purpose.
4. Delineating. Identifying evaluative information required
through an inventory of the decision alternatives to be
weighed and the criteria to be applied in weighing them.
5. Obtaining. Making information available through such pro-
cesses as collecting, organizing, and analyzing and through
such formal means as measurement, data processing, and
statistical analysis.
6. Providing. Fitting information together into systems or
subsystems that best serve the purposes of the evaluation,
and reporting the information to the decision maker.
7. Useful. Satisfying the scientific, practical, and pruden-
tial criteria of Chapter I [internal validity, external
validity, reliability, objectivity, relevance, importance,
scope, credibility, timeliness, pervasiveness, and effi-
ciency] and pertaining to the judgmental criteria to be
employed in choosing among the decision alternatives.
8. Judging. The act of choosing among the several decision
alternatives; the act of decision making. (Stufflebeam
et al., 1971, pp. 40-43)

Stufflebeam et al. (1971) contended that evaluation is an extension

of the decision-making process. In this process, the evaluator assists

the decision maker by helping to delineate, in interaction with the de-

cision maker, the information which is needed; by providing that informa-

tion in a useful format to the decision maker; and by assisting the deci-

sion maker in the interpretation of the information. This conceptualiza-

tion of evaluation was used in this study where the making of quality

evaluations about educational programs was defined as the process of

identifying what information about programs is useful to administrators

in making that type of evaluation decision and providing that information

to administrators in a format that facilitates the interpretation of the

information by administrators making such decisions.

While identifying what information is useful for making quality-eval-

uations may be difficult, the presentation of the identified information









in a useful format is equally difficult when multiple items of informa-

tion are involved. This task requires the aggregation of the identified

information into profiles or indices or similar formats useful to admin-

istrators involved in quality-evaluation decision making. Stufflebeam

et al. (1971) pointed out that their theory offered little guidance for

the evaluator in deciding how to provide information in a useful format

(p. 336). Craven (1975) emphasized the information-providing role of an

evaluator in his description of information systems as "any method that

provides the right decision maker with the right information in the right

form at the right time so as to facilitate the decision-making process"

(p. 127). Craven (1975) summarized the importance of an evaluator's in-

formation-providing role with the following statement:

Information that responds to those decision-making needs in a
valid, reliable, and timely manner will assist higher educa-
tional institutions during this period in making decisions that
will maintain and strengthen the quality of its programs and
faculty and will enable them to meet the future educational
needs of students, society, and scholarship. (p. 138)

Selected studies illustrative of these major approaches to evaluation,

including decision-oriented approaches, that have been used in the assess-

ment of quality in higher education are reviewed in the next section of

this chapter.

Quality Assessment in Higher Education

An appropriate summary of a basic problem in assessing quality in

higher education or any other field is provided by the following state-

ment from Pirsig (1974):

Quality . . . you know what it is, yet you don't know what it
is. But that's self-contradictory. But some things are bet-
ter than others, that is, they have more quality. But when
you try to say what the quality is, apart from the things that
have it, it all goes poof! There's nothing to talk about.
But if you can't say what Quality is, how do you know what it
is, or how do you know that it even exists? If no one knows
what it is, then for all practical purposes it doesn't exist
at all. But for all practical purposes it really does exist.
What else are the grades based on? Why else would people
pay fortunes for some things and throw others in the trash
pile? Obviously some things are better than others . . .
but what's the "betterness"? So round and round you go,
spinning mental wheels and nowhere finding anyplace to get
traction. What the hell is Quality? What is it? (p. 184)

During a recent Southern Regional Education Board Symposium, SREB

President Godwin addressed the problem of defining quality as follows:

Part of our problem in higher education is that too often we
have confused quality with prestige. We need to increase the
understanding that quality education is not a monopoly of a
few dozen major universities in the nation, but is attainable
by all types of higher education institutions. (Legislators
stress quality improvements, 1980, p. 3)

The president of Brevard Community College in Florida, in a recent mes-

sage to his faculty, had the following comments on educational quality:

Quality in education is not an absolute. It can only be
evaluated in terms of arbitrarily determined standards,
and these in turn depend partly on subjectively formulated
aims and partly on objective statistical procedures. . .
Education is quality education to the extent that it meets
the needs of the people being served. (King, 1981, p. 1)

These two quotes are representative of the general view of quality

in higher education. That view is vague, subjective, and broad. On one

hand, such a view has limited use in that it provides little guidance for

educational improvement. On the other hand, it is a workable approach to

the quality issue, maintaining maximum flexibility to serve the diversity

found in higher education. If by no other means, educators intuitively

recognize a substantial variance in program and institutional quality

among the diverse institutions that comprise the American system of

higher education. Various studies conducted by different researchers for

different reasons in different settings using different methodologies have

resulted in a variety of quality attributes that provide little assistance

in defining quality (Lawrence & Green, 1980).









Selected studies illustrative of the major approaches to quality

assessment in higher education are reviewed in this section of the liter-

ature review. This section is presented in three parts. First, the

major reputational assessments of graduate programs are reviewed. These

studies have formed the basis of attempts to investigate the quality

issue in higher education. Second, an overview is presented of quality

assessment at the undergraduate and two-year college level. Third, se-

lected studies designed to identify quantifiable indicators of quality

are reviewed.

Graduate Education

Beginning with Hughes (1925) and continuing through the prestigious

American Council on Education (ACE) sponsored studies (Cartter, 1966;

Roose & Andersen, 1970), reputational ratings of graduate programs have

constituted the basis of attempts to address the issue of quality in

higher education. The methodology incorporated in a majority of these

studies involved a peer review, in which programs were rated by eminent

faculty in the same discipline. Their ratings reflected the quality of

graduate education and research in the system. These studies attempted

to identify the outstanding research and teaching institutions by program

and they have consistently identified 20 or 30 institutions, virtually

ignoring the balance of the system (Lawrence & Green, 1980, p. 2).

Using a panel of distinguished scholars from each field, Hughes (1925)

conducted the first comprehensive reputational study of graduate programs

in American higher education. At the time of his study, only 65 Ameri-

can universities awarded the doctoral degree. Hughes ranked 38 of these

universities in 20 disciplines according to the number of outstanding

scholars each employed. During the next decade, the number of American universities awarding the doctoral degree nearly doubled. This prompted

a second study by Hughes (1934) in which 59 universities were ranked in

35 disciplines according to the quality of facilities and staff for the

preparation of doctoral candidates. The stated purpose of both of

Hughes' studies was to educate undergraduate students about various grad-

uate programs. These studies went well beyond this purpose in establish-

ing procedures for quality ratings of the nation's leading institutions

through numerical ranks based upon the informal opinions of academicians.

For the next 20 years, the Hughes studies were regarded as authori-

tative. It was not until Keniston's (1959) work that an attempt was made

to update the Hughes studies. Using department chairmen selected from

the institutional members of the American Association of Universities as

raters, Keniston ranked 24 graduate programs based upon a combined meas-

ure of doctoral program quality and faculty quality. These rankings were

used to produce a rank-ordered list of the top 20 institutions which were

compared with Hughes' results.

The major weakness of the Hughes and Keniston studies, according to

Cartter (1966), was the uncontrolled geographical and rater biases.

Other flaws in these studies noted by Cartter included the failure to

distinguish measures of faculty quality from measures of educational

quality, the failure to account for the biases of raters toward their

alma maters, and the choice of department chairmen as raters. It was

Cartter's opinion that the department chairmen were not necessarily the

most distinguished scholars nor typical of their peers in age, speciali-

zation, or rank. They tended to be more conservative and thus to favor

the traditional institutions.









Cartter's design of the ACE studies accounted for these criticisms.

He took great care to assure the representation of various institutions

and raters from all geographic areas. Cartter surveyed 106 institutions

representing more than 1,000 graduate programs in 29 disciplines. The

more than 4,000 survey respondents included senior and junior scholars as

well as department chairmen. From a list of the institutions in alpha-

betical order, the respondents were requested to rate each doctoral pro-

gram in their area of study on two components: quality of graduate fac-

ulty and effectiveness of the doctoral program. To support the represen-

tativeness of the raters, the respondents were requested to supply basic

biographical information. The leading departments were ranked separately

on the basis of the raters' responses on each of the components. In most

disciplines, the rankings by each component were very similar. Where the

discipline areas overlapped, Cartter compared his rankings with those of

Hughes (1925) and Keniston (1959). Cartter found a high correlation be-

tween his rankings and objective institutional measures such as faculty

salaries, library resources, and publication indices. His rankings cor-

related highly with Bowker's (1964), who used enrollment of graduate

award recipients in institutional programs as a criterion. Cartter used

these relationships as a primary point in his support of peer ratings for

quality assessment.

The 1970 ACE-sponsored Roose-Andersen study essentially replicated

Cartter's study. The Roose-Andersen study included 130 institutions

across 29 disciplines. The ratings were based upon the same two compon-

ents Cartter used in 1966: quality of graduate faculty and effectiveness

of the doctoral program. The Roose-Andersen report presented ranges of

raters' scores rather than absolute raw departmental ratings and ranges of quality instead of specific institutional rankings. Even with these

changes, the results of the Roose-Andersen study were very similar to

those of the Cartter study (1966). Using the reputational rating pro-

cedures refined by the ACE studies, other researchers produced similar

program or institutional rankings based on the two ACE criteria or simi-

lar criteria (Carpenter & Carpenter, 1970; Cartter & Solmon, 1977; Cole

& Lipton, 1977; Cox & Catt, 1977; Gregg & Sims, 1972; Margulies & Blau,

1973; Munson & Nelson, 1977).

Lawrence and Green (1980) discussed the weaknesses in reputational

ratings, the most apparent being their lack of agreement on the meaning

of quality. The definition of quality varied according to disciplines,

program areas, and individual raters. The lack of agreement on a defini-

tion of quality made program or institutional comparisons nonsensical.

Lawrence and Green expressed the opinion that higher education was far

too complex to rate on the basis of one or two dimensions. They stated

that

the ratings represent the subjective judgments of faculty and
that they probably reflect prestige rather than quality. .
and high prestige is translated to mean educational excellence.
As a result, research and scholarly productivity are emphasized
to the exclusion of teaching effectiveness, community service,
and other possible functions; undergraduate education is deni-
grated; and the vast number of institutions lower down in the
pyramid are treated as mediocrities, whatever their actual
strengths and weaknesses. (pp. 15-16)

Dolan (1976) criticized the reputational approach because it tended

to maintain the status quo. Dolan expressed the opinion that subjective

ratings of program quality reflected elitist and traditionalist views of

higher education that stifled or restricted change and innovation. Dolan

believed that increasing consumer awareness in higher education demanded

student involvement in any attempt to rate graduate programs.









Blackburn and Lingenfelter (1973) defended the ACE reputational rat-

ings on the following grounds:

(1) Panel bias has been largely eliminated by the careful se-
lection procedures of the ACE studies; (2) subjectivity cannot
be escaped in evaluation no matter what technique is used; (3)
professional peers are competent to evaluate scholarly work,
the central criterion in reputational studies; and (4) although
not a sufficient condition of general excellence, scholarly
ability is necessary for a good doctoral program. (p. 25)

Webster (1981) pointed out that the process usually produced results with

face validity in that those programs or institutions considered to be of

high quality by the educated general public were often rated highly.

Regardless of the criticisms or defenses of the reputational rating

approach, none of the studies that have been cited have investigated spe-

cifically what information was useful for assessing the quality of gradu-

ate programs. Only one study of graduate education quality was found

that investigated this topic. The Council of Graduate Schools (CGS) and

the Educational Testing Service (ETS) sponsored a study that involved 73

departments divided among three fields--psychology, chemistry, and his-

tory--that were surveyed with the purpose of determining what information

to use to assess quality (Clark, Hartnett, & Baird, 1976). Four major

conclusions resulted from this study. First, it was determined that

timely, relevant, and useful information (program characteristics) re-

lated to educational quality could be reasonably obtained. Second,

approximately 30 program characteristics were identified as especially

useful. Third, these program characteristics appeared to be applicable

across diverse program areas. Fourth, two clusters of program character-

istics were identified: research-oriented indicators and educational-

experience indicators. The research-oriented indicators included depart-

ment size, reputation, physical and financial resources, student ability, and faculty publications. The educational-experience indicators were

concerned with the educational process and academic climate, faculty in-

terpersonal relations, and alumni ratings of dissertation experiences.

The CGS-ETS study used faculty, students, and alumni input in a sep-

arate peer-rating component of the study similar in approach to the ACE

studies. One finding of this component of the study was that reputa-

tional ratings of graduate programs had little relationship to teaching

and educational effectiveness as measured by the input of students and

alumni. Clark et al. (1976) concluded that the peer ratings were based

primarily on scholarly publications with little or no emphasis on the

quality of instruction.

The CGS-ETS study demonstrated that information useful for determin-

ing educational quality could be identified. Furthermore, that study

demonstrated that the information identified as useful consisted of mul-

tiple indicators of quality that appeared to be applicable across program

areas. This is supportive of the view taken in this study that the mul-

tiple indicators of quality identified in the IRC project (Steuart &

Rathburn, 1982) were representative of some underlying structure of the

multiple indicators of quality, the dimensions of which should remain in-

variant across program areas. The Clark et al. (1976) study and the IRC

study (Steuart & Rathburn, 1982) defined several dimensions of quality

based upon the program characteristics identified in the respective stud-

ies as useful in assessing program quality. However, the dimensions were

defined in both studies on the basis of the perceived similarity of the

content of clusters of program characteristics and were not defined by the

utilization of the technique of factor analysis as was done in this study.









Undergraduate Education

Although considerably fewer studies have been conducted to assess

quality at the undergraduate level than at the graduate level, the stud-

ies rating undergraduate education have demonstrated that colleges differ

substantially in traditional measures of quality. Jordan (1963), in a

study involving undergraduate programs, found that those institutions

that spent more on salaries for library staff and had higher numbers of

library volumes per student tended to score higher on a quality index

based upon multiple weighted factors. Brown's (1967) study of undergrad-

uate education ranked colleges on the basis of eight criteria including

total current income per student, proportion of students entering gradu-

ate school, proportion of graduate students, number of library volumes

per student, total number of full-time faculty, faculty-student ratio,

proportion of faculty with doctorate, and average faculty compensation.

These two studies represented approaches to undergraduate quality assess-

ment similar to those utilized for graduate programs. Lawrence and Green

(1980) expressed the opinion that these and similar studies (Dube, 1974;

Krause & Krause, 1970; Tidball & Kistiakowski, 1976) that used quality

measures more typically associated with graduate quality assessment (e.g.,

publication record of students, percent of students who finish profes-

sional schools or terminal graduate degrees, etc.) failed in their pur-

pose because they did not take into account the "special nature of the

undergraduate experience" (p. 33).

Astin, through a series of studies (1965, 1971; Astin & Henson, 1977)

approached one specific aspect of undergraduate quality that he termed

the selectivity index. Astin (1971) defined the selectivity index as a

relative measure of the academic ability of a college's entering freshmen.









In another study involving the selectivity index, Astin and Henson (1977)

used ACT and SAT scores to approximate the selectivity of all accredited

two- and four-year institutions. Astin and Henson defended their approach

on the basis of its acceptance by the mainstream of faculty and administra-

tors in higher education (p. 2). The validity of the approach was sup-

ported by its positive correlations with selected institutional character-

istics such as student-faculty ratios (Astin & Solmon, 1979).

In a related study, Astin developed further the selectivity index by

examining the preferences of academically talented students for various

institutions (Astin & Solmon, 1979). Although they realized that this

measure was confounded by a number of variables such as institutional pop-

ularity and regionalism, Astin and Solmon maintained that a measure of an

institution's drawing power for highly able students was a valid quality

measure (p. 47).

In a later study of undergraduate education quality, Astin and Solmon

(Astin & Solmon, 1981; Solmon & Astin, 1981) expanded their view of qual-

ity. They utilized faculty members representing seven disciplines from

institutions in four states (California, Illinois, New York, and North

Carolina) to rate institutions from a national list and a state list.

The state list included those institutions in a rater's state that

awarded a minimum of five undergraduate degrees in a rater's field during

1977. The national list was composed of 100 of the "most visible insti-

tutions in the rater's field" (Astin & Solmon, p. 14). Each rater was

asked to evaluate each institution from both lists according to six qual-

ity criteria including overall quality of undergraduate education, prep-

aration of students for graduate and professional school, preparation of

students for employment after college, faculty commitment to undergraduate teaching, scholarly or professional accomplishments of faculty, and innovativeness of curriculum and pedagogy (p. 24).

Utilizing a factor analysis of the mean ratings on each of the qual-

ity criteria for each of the undergraduate disciplines, Astin and Solmon

(1981) concluded that

these ratings showed that the seven fields form a single "overall quality" dimension. In practical terms, this means that quality differences among fields at a given institution tend to be minimal, and that ratings of one department may suffice as an estimate of the quality in the other departments at the institution. (pp. 14-15)

Considering that only six quality criteria were used in the study, the

conclusion appeared warranted.

Probably the best known studies of undergraduate quality, the Gourman

studies (1967, 1977), provided little explanation of the procedures used

to arrive at the reported ratings. Scores on two sets of variables--

strength of an institution's academic departments and quality of nonde-

partmental areas--were used to produce an average academic department

rating, an average nondepartmental rating, and an overall "Gourman rating"

for each institution.

Although the Gourman ratings were accepted as a viable measure of un-

dergraduate quality, several of the assumptions used in the ratings were

questionable. Gourman assumed that, at minimum, 10 years were required

following graduation to produce an excellent classroom teacher and thus

rated older faculty higher. Gourman gave equal weight to faculty effec-

tiveness, public relations, library, a college's alumni association, and

the athletic-academic balance as measures of institutional quality.

Gourman held a bias toward larger institutions, consistently rating them

higher than smaller liberal arts colleges (Lawrence & Green, 1980). In










1977, Gourman changed the format of his ratings, making it similar to

that of the 1970 Roose-Andersen study. Gourman rated 68 undergraduate

programs in 1977, again providing no information on the procedures used

in developing the ratings.

Utilizing approaches such as those discussed, other researchers have

addressed the issue of quality in undergraduate education (Johnson, 1973;

Nichols, 1966; Solmon, 1975). Other, possibly less academic, attempts to

evaluate undergraduate quality included the popular college guides (e.g.,

Hawes Comprehensive Guide to Colleges, 1978). Webster (1981) criticized

many of these attempts on the basis of their limited view of the under-

graduate experience. Central to his criticism was the lack of emphasis

on undergraduate teaching in preparation for the job market and the over-

riding view of undergraduate programs serving primarily as preparatory

periods for graduate study.

Very little research has been conducted in the community/junior col-

lege setting in relation to the quality issue. In general, many of the

premises underlying traditional views of quality in higher education run

in opposition to the basic principles of the community college philosophy.

An example of this is the discrepancy between the selectivity index (Astin

& Solmon, 1979) and the open door admission policy of most community col-

leges.

One of the more quoted studies of educational quality in the community

college setting involved the identification of quality indicators from

peer opinions expressed in evaluations of selected junior colleges during

accreditation team visits (Walters, 1970). Walters identified 58 specific

indicators from a list of more than 500 recommendations made in accredita-

tion team reports on 126 public junior colleges from 1960 to 1968. Most of the indicators related to college procedures, the efficiency of oper-

ations, staffing levels, and organizational structure. Walters postu-

lated that the 58 indicators, taken collectively, described a quality

public junior college. Only two of the indicators were based on any

specific quantitative measures. Another study of educational quality

in the two-year college, the Pike study (1963), involved an analysis of

the relationship of current expenditures, enrollment, and expenditures

per student to certain variables associated with educational quality in

junior colleges in Texas.

The IRC project (Steuart & Rathburn, 1982), which generated the data

used in the present study, surveyed 631 administrators representing 24

of Florida's public community colleges to determine what information

was perceived as useful in making decisions about the quality of pro-

grams or services offered by their colleges. In that project, the ad-

ministrators rated 434 program characteristics for degree of usefulness

in quality-evaluation decision making. More than 100 program character-

istics were identified as highly useful. The program characteristics

identified as highly useful were organized on the basis of perceived

similarity of content into 12 types of information including the need

for and structure of a program, program size, program costs, program

utilization rates, support services related to a program, information

on students entering a program, information on students currently en-

rolled in a program, information on faculty or staff associated with a

program, information from external or internal evaluations of a program,

quantitative outputs of a program, ratings of a program by various types

of raters, and information on students transferring from a program to

upper division (pp. 68, 145-146).









Similar to most of the studies of quality in graduate education,

none of the studies of quality in undergraduate education except the

IRC project (Steuart & Rathburn, 1982) investigated the usefulness in

quality-evaluation decision making of the various quality indicators

used in the studies. Also, although multiple program characteristics

have been used as indicators of quality, no study has attempted to iden-

tify any underlying dimensions for the multiple indicators except Astin

and Solmon (1981). Although the indicators of quality in the Astin and

Solmon study were so broad and so few that the dimensions identified are

probably spurious, they did demonstrate the use of the factor-analytic

technique in identifying underlying dimensions of indicators of quality.

Quantifiable Approaches to Quality

In recent years, higher education researchers have explored numerous

ways of providing objective measures of educational quality. Many of

these attempts have involved correlating various quantifiable measures

with established rankings of institutional quality. These measures in-

cluded, among others, institutional size (Elton & Rose, 1972; Hagstrom,

1971), research productivity (Drew, 1975; Wispe, 1969), publication pro-

ductivity (Lewis, 1968), amount of money spent (Ousiew & Castetter,

1960), and number of library volumes (Lazarsfield & Thielens, 1958).

Many of these "correlates of prestige" (Lawrence & Green, 1980, p. 23)

used the popular ACE ratings as their basis for comparison. Cartter

(1966), anticipating the identification of quantifiable quality indica-

tors in his ratings, stated that such indicators "are for the most part

'subjective' measures once removed" (p. 4).

The list of factors that positively correlated with reputational quality ratings was lengthy. Blackburn and Lingenfelter (1973) listed the following items as being positively correlated with the 1966 ACE ratings:

1. Magnitude of the doctoral program.
2. Amount of federal funding for academic research and de-
velopment.
3. Non-federal current fund income for educational and gen-
eral purposes.
4. Baccalaureate origins of graduate fellowship recipients.
5. Baccalaureate origins of doctorates.
6. Freshman admissions selectivity.
7. Selection of institutions by recipients of graduate
fellowships.
8. Postdoctoral students in science and engineering.
9. Doctoral awards per faculty member.
10. Doctoral awards per graduate student.
11. Ratio of doctorate to baccalaureate degrees.
12. Compensation of full professors.
13. The proportion of full professors on a faculty.
14. Higher graduate student/faculty ratios.
15. Departmental size of seven faculty members or more.
(p. 11)

Fotheringham (1978) described traditional quality indicators as in-

cluding context, faculty input, faculty-student interaction, and student

input. Fotheringham defined context as "the setting for the educational

process" (p. 17). The context variables included number of library vol-

umes, administrative policies, physical facilities, and similar varia-

bles. Pike (1963), in his study of the relationship between 72 varia-

bles associated with educational quality including enrollment, current

expenditures, and expenditure per student, found expenditures to be the

most important measure of context. Banghart, Kraprayoon, and Clewell

(1978) identified other context variables including curriculum, admini-

strative practices, and amount of external funding.

Meder (1955) defined faculty input as including an instructor's

training, skill, ability, and morale. Blackburn and Lingenfelter (1973)

included degrees, awards, faculty compensation, and post-doctoral stud-

ies as indicators of faculty input. Other faculty input indicators included research productivity (Hagstrom, 1971), publication productivity (Somit & Tanenhaus, 1964), and faculty size (Balderston, 1970). The

faculty input indicators identified as most difficult to measure in-

cluded faculty morale, vigor, cohesion, and progressiveness that

Balderston (1974) suggested could only be measured subjectively.

Faculty-student interaction has been traditionally defined as the

faculty-student ratio (Meder, 1955). That definition has been expanded

to include the accessibility of the faculty (Roose & Andersen, 1970) as

well as the extent and nature of the faculty contact with students

(Fotheringham, 1978).

Student input indicators of quality have often been held as the most

valuable type of indicator. Fotheringham (1978) defined student input

as the characteristics of the student at the time of admission.

Blackburn and Lingenfelter (1973) proposed a more comprehensive defini-

tion simply as the students' quality. Many researchers concluded that

not enough has been done to control for variations in student input in-

dicators when measuring various outcome indicators of quality (Richards,

Holland, & Lutz, 1966; Rock, Centra, & Linn, 1969).

Fotheringham (1978) cited three more categories of quality indica-

tors that he labeled output, student change, and intellectual climate.

Output was described as including both faculty output (publications and

other productivity measures) and student output (accomplishments of stu-

dents following graduation). Variability in the specific measures used

to assess output indicators was reflected in the work of Keller (1969)

and Lawrence, Weathersby, and Patterson (1970).

The student change indicators related to the extent of learning that

took place during the students' enrollment (Turnball, 1971). Ostar (1973) described this as the value-added concept. It was his opinion

that in the assessment of the development of students, specific atten-

tion should be given to their initial abilities and their goals. Meas-

ures of student change included post-graduate employment, personal

achievements, motivation, and achievements in graduate school according

to Fotheringham (1978).

Fotheringham (1978) defined intellectual climate as "an attitude

toward learning and scholarship shared by students, faculty, and admin-

istration" (p. 26). Several researchers have expressed the opinion that

campus climate is of primary importance in assessing institutional qual-

ity (Astin, 1963; Boyer, 1964; Bowen, 1963). Indicators in this cate-

gory included both academic attributes, such as faculty concern for

scholarship, and non-academic attributes such as students' residential

experience, democratic participation of the students in campus affairs,

and counseling or other supplementary services.

Although multiple quantifiable indicators of quality have been iden-

tified in these studies, none of the studies investigated the possibil-

ity of identifying underlying dimensions of the multiple indicators to

facilitate providing information to decision makers in a format useful

in quality-evaluation decision making. The IRC study (Steuart &

Rathburn, 1982) included some program characteristics representative of

many of these quantifiable indicators of quality which is another rea-

son the data from that study provided an excellent opportunity for iden-

tifying underlying dimensions for information useful in program quality-

evaluation decision making. A discussion of the utility of the tech-

nique of factor analysis for identifying any underlying dimensions of a

multi-variate data set is presented in the next section of this chapter.









Determining Underlying Dimensions: Factor Analysis

In the decision-oriented model of evaluation as described by

Stufflebeam et al. (1971), once the information useful for making an

evaluation has been determined in interaction with the decision maker,

that information should be provided to the decision maker in a format

useful to the decision maker. If relatively few items of information

are involved, then the means of providing the information in a useful

format would appear relatively straightforward. However, from the re-

view of selected studies on quality evaluation in higher education, mul-

tiple indicators of quality have been identified. In the IRC study

(Steuart & Rathburn, 1982), more than 100 program characteristics were

identified as highly useful in making quality-evaluation decisions.

Providing such a wide array of information in a format useful to a de-

cision maker is a problem. Craven (1980) indicated that "providing the

desired information in an appropriate format" (p. 111) is a major con-

cern if evaluation processes are to effectively address the higher edu-

cation issues of the 1980s.

Applicability of Factor Analysis

The situation of administrators in higher education attempting to

use multiple indicators of quality when making quality judgments about

programs or services is similar to the situation psychologists faced

when evaluating human personality: interpreting multiple measures to

describe or evaluate a person (Harman, 1976, p. 4). This was the con-

text for the origin of factor analysis in psychology. It was developed

as a technique to determine dimensions of personality that would facili-

tate the evaluation of personality (Cattell, 1950, pp. 26-27). Although

it was developed within the field of psychology, the mathematical techniques involved are not limited to psychological applications

(Harman, 1976, p. 4). Cattell (1966) stated that the use of factor

analysis was particularly advantageous where "the number of variables

to be watched over and thought about is bewilderingly large . . . [and

where] there has been little success after several years in reaching

agreement on the major concepts [in the area of inquiry]" (p. 175).

Both of these criteria appear to apply to the field of quality evalua-

tion in higher education. Burt (in Cattell, 1966) has stated that the

primary aim of factor analysis is "to discover principles of classifica-

tion [of individuals or variables]" (p. 268).

Because the technique of factor analysis originated in the field of psychology, applications of factor analysis were primarily in that field until increasing accessibility to computers facilitated

the use of the technique (Harman, 1976, p. 7). Harman (1976) has col-

lected more than 200 studies using factor analysis in fields other than

psychology including such diverse fields as economics, medicine, the

physical sciences, political science, sociology, and regional science

(p. 7). Also, he cited a number of taxonomic applications in fields
other than psychology (pp. 7-8). Harman stated that

Unlike the field of psychology, in which theory has been pri-
mary and the factor-analytic model has been used to test and
modify such theory, the application of factor analysis in the
areas noted has been exploratory, almost exclusively, in the
hope of bringing order out of the relationships among the many
variables that could now be investigated with the aid of the
computer. (p. 8)

Guertin and Bailey (1970) suggested numerous applications for fac-

tor analysis in the field of educational psychology (Chapter 14). The

pervasiveness of its use in research in higher education is indicated

by the numerous entries under the subject heading "factor analysis" in









each issue of Resources in Higher Education published by the Educational

Resources Information Center (ERIC). The following recent studies in

higher education are cited because, as in this study, factor analysis

was used for discovering dimensions or categories among a set of varia-

bles.

Smart (1975) used the technique in a survey of students, faculty,

and administrators to determine salient dimensions of 47 institutional

goals rated by respondents for degree of importance to a college. In a

survey of a stratified random sample of 722 Minnesota citizens, Biggs,

Brown, and Kingston (1977) used factor analysis to determine "categories

of educational values" (p. 157) from respondents' ratings of the impor-

tance of various university goals and activities, the importance of var-

ious academic fields, and the importance of various reasons for students

attending the University of Minnesota. During the development of a

model for evaluating educational innovations, Bess and Hayes (1970)

used factor analysis as a means "of assembling meaningful clusters of

student characteristics into subcultures" (p. 44) from students' re-

sponses to a questionnaire that was devised to measure a combination of

student personality characteristics, value orientations, attitudes,

goals, perceptions, and behaviors. In a study to investigate the pos-

sibility of clustering academic departments on dimensions that could

provide an equitable basis for departmental funding, Dressel and Simon

(1976) used factor analysis on 35 descriptive variables representing

various characteristics of the instructional load and output of academic

departments to determine the dimensions for grouping the departments.

At the University of Toledo, a study was done with an objective

very similar to the objective in this study (Perry & Lind, 1976). In









the Perry and Lind study, factor analysis was used on the ratings by

140 department chairpersons and 272 program graduates of the importance

of 33 criteria in evaluating academic programs to determine "what latent

factors or dimensions were involved in the data" (p. 20). In their most

recent reputational study of undergraduate educational quality, Solmon

and Astin (1981) used factor analysis to determine patterns among the

ratings of seven discipline areas in selected American undergraduate in-

stitutions by faculty representing undergraduate institutions in four

selected states.

Each of these studies is illustrative of the use of factor analysis

for discovering categories or dimensions of an underlying pattern within

a set of variables. It appeared appropriate in this study to use fac-

tor analysis to determine the underlying dimensions of the multiple in-

dicators of quality identified in the IRC project (Steuart & Rathburn,

1982) to use in developing guidelines for organizing the identified in-

formation into a format useful to administrators in making quality-eval-

uation decisions about programs.

Definition of Factor Analysis

Spearman is generally credited with the origin of factor analysis

in his development of a psychological theory involving the specification

of a general factor and a number of specific factors related to describ-

ing general intelligence: the two-factor theory (Harman, 1976, p. 3).

Finding Spearman's theory insufficient to describe a battery of psycho-

logical tests, other psychologists explored the possibility of extract-

ing several general or common factors from a matrix of correlations

among tests. These explorations led to the development of multiple-

factor analysis (Harman, p. 4).









The principal concern of factor analysis is the resolution of a set

of variables into a smaller number of categories or "factors." The

resolution is accomplished by analysis of the correlations among the

variables within the set. A satisfactory resolution produces a set of

factors (or categories or dimensions or variables) smaller than the

original set of variables that conveys the essential information of the

original set of variables. Thus, "the chief aim [of factor analysis]

is to attain scientific parsimony or economy of description" (Harman, p.

4). Economy of description is precisely the goal in providing to deci-

sion makers in a useful format the information represented by multiple

indicators of quality. As Fox (1969) stated, factor analysis is a pro-

cedure for "identifying the underlying structure of the interrelation-

ships expressed in the correlational matrix [of a set of variables]"

(p. 216). The procedure estimates the minimum number of separate vari-

ables or dimensions, called factors, necessary to provide the informa-

tion contained in the correlation matrix (Fox, p. 216).

Steps in Factor Analysis

Fox (1969) described the procedure of factor analysis as typically

involving a five-step process (pp. 216-218). The first step is to iden-

tify the variables to be studied. The second step is to create a matrix

of correlations expressing the correlation between each pair of variables

in the set of variables being studied. The third step is "to put this

matrix through the first computational process of factor analysis that

produces what is called an unrotated matrix of principal components,

from which the minimum number of separate factors required to account

for the data can be identified" (p. 217). A full description of the

calculation procedures is presented in Harman (1976).









Harman (1976) described two basic approaches to the calculations in-

volved (pp. 14-15). Within the framework of the linear mathematical

model used in factor analysis, the calculations can either extract the

maximum variance or best reproduce the observed correlations (p. 14).

The method for the reduction of a large body of data so that the maxi-

mum variance is extracted was first proposed by Pearson and later de-

veloped as the method of principal components or component analysis

(p. 14). In contrast to the maximum variance approach is the classical

factor-analysis model developed to maximally reproduce the correlations.

It is generally called common-factor analysis because each of the ob-

served variables involved in the analysis is defined linearly in terms

of a number of common factors and a unique factor (p. 15). "The common

factors account for the correlations among the variables, while each

unique factor accounts for the remaining variance (including error) of

that variable" (p. 15). The common-factor analysis approach was used

in this study because the intent was to determine as clearly as possible

the dimensions (interrelationships) among the variables involved and

not to determine the amount of variance attributable to a variable or a

group of variables (See Guertin & Bailey, 1970, pp. 82-83).
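
Following the notation commonly used for this model (e.g., Harman, 1976), and given here only as a brief LaTeX sketch with z_j a standardized variable, F_p a common factor, U_j the unique factor, and a_{jp} and d_j their loadings, each observed variable is written as

    z_j = a_{j1} F_1 + a_{j2} F_2 + \cdots + a_{jm} F_m + d_j U_j , \qquad j = 1, \ldots, n ,

and, for orthogonal factors, the communality of variable j is the common-factor portion of its variance,

    h_j^2 = \sum_{p=1}^{m} a_{jp}^2 .

The common factors account for the correlations among the variables, while the unique factor absorbs the remaining variance of each variable, including error, as described above.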

The method of calculation generally used for common-factor analysis

was described by Thurstone and has been labeled the "principal axes so-

lution" (in Guertin & Bailey, 1970, p. 61). The essential difference

between the methods is whether in the mathematical computations unities

are inserted in the diagonal of the correlational matrix (component

analysis) or whether "communalities" are inserted (common-factor analy-

sis) (Harman, 1976, p. 70). According to Guertin and Bailey (1970),

the use of unities in the diagonal of the correlation matrix causes the intercorrelation matrix to take on a higher rank than it would with val-

ues less than unity in the diagonal (p. 33). Since the object in fac-

tor analysis is to find the minimum number of factors or dimensions or

variables necessary for economy of description of the total set of var-

iables, values less than one are desired in the diagonal (Guertin &

Bailey, p. 33). The values less than one in the diagonal are called

"communalities." The communalities express the amount of the common-

factor variance (the variance shared with all the other variables in

the analysis) (Guertin & Bailey, p. 33). The correlation matrix with

communalities rather than unities in the diagonal is called the reduced

intercorrelation matrix (Guertin & Bailey, p. 33).

One of the problems encountered in common-factor analysis is that

the appropriate communalities are not easily computed with precision

and various methods of estimating them have been developed. The best

estimate of the communalities appears to be the squared multiple corre-

lations of each variable with the remaining variables (Guertin, 1977, p.

21). On the other hand, Harman (1976) stated that "it matters little

what values are placed in the principal diagonal of the correlation ma-

trix when the number of variables is large (say, n > 20)" (p. 86), be-

cause the number of values in the diagonal is relatively small compared

to the many values off the diagonal so the factorial results are little

affected (p. 86). However, the use of communalities in the diagonal

prior to factor extraction makes possible the obtaining of the maximum

amount of common-factor variance, a chief emphasis of common-factor

analysis (Guertin, 1977, p. 22).
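
As a concrete illustration of the squared-multiple-correlation estimate, the following sketch (in Python with NumPy, offered only as an illustration since the computations in this study were done with BMDP and SAS) computes SMC communalities from the inverse of the correlation matrix and places them in the diagonal to form the reduced intercorrelation matrix; R is assumed to be any full-rank correlation matrix.

import numpy as np

def smc_communalities(R):
    # Squared multiple correlation of each variable with all the others:
    # SMC_j = 1 - 1 / (R^-1)_jj
    return 1.0 - 1.0 / np.diag(np.linalg.inv(R))

def reduced_matrix(R):
    # Replace the unit diagonal with the SMC communality estimates
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc_communalities(R))
    return R_reduced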

Once the principal axes factors have been extracted from the reduced

intercorrelation matrix through the processes involved in step three, they can then be rotated to gain the clearest view of the common-factor

space or configuration. This is step four of the factor-analysis pro-

cess described by Fox (1969, p. 217). Rotation is performed mathema-

tically, but the concept of rotation is based upon geometry. A clear

description of the relationship may be found in Guertin and Bailey

(1970, pp. 26-34 and Chapter 6). The reason for rotation is that al-

though the initial factors may be mathematically satisfactory solutions,

the factors themselves may have little meaning relative to determining

constructs or principles of concern to the investigator (Guertin &

Bailey, 1970, pp. 87-88).

Since the principal axes method extracts the maximum possible common

variance, the primary decision in rotation becomes that of determining

the number of principal axes to carry into rotation to gain the clearest

picture of the common factors (Guertin, 1977, p. 22). At this point in

the factor-analysis process, there is encountered another major problem:

what criterion or criteria to use to decide what number of factors to

carry into rotation (Guertin & Bailey, 1970, Chapter 7). Guertin (1977)

stated that the universally accepted criterion is Thurstone's principle of simple structure, which yields factors that are

relatively invariant across studies (p. 22). Guertin and Bailey (1970)

asserted that the simple structure criteria not only provide a unique

solution but at the same time assure meaningful factors (p. 42). In

simplest terms, the concept of simple structure dictates that both var-

iables and factors be described by a minimum number of sizable loadings

(Guertin, 1977, p. 22). In reference to the matrix representation of

factors (columns) and variables (rows), the concept of simple structure

specifies that the columns (factors) should have the largest possible number of zero or negligible loadings (values), the rows (variables)

should have the largest possible number of zero or negligible loadings

(values), and every pair of columns (factors) should have the largest

possible number of values approaching zero in one column (factor)

(Guertin & Bailey, 1970, p. 99). The ideal situation would be to have

each variable have a high loading on only one of the factors and for

each factor to have only a few variables with high loadings with all the

other variables having loadings approaching zero on that factor (Guertin

& Bailey, 1970, p. 98).

To approximate the ideal of simple structure for a given factor ma-

trix, the factors may be rotated in either an oblique or an orthogonal

fashion (Guertin, 1977, p. 22). As with the term rotation, these terms

reference a geometric perspective. Conceiving of the factors as dimen-

sions (vectors), an orthogonal rotation assumes that the factors are un-

related and places the factors (vectors) in relation to each other at

90° angles. An oblique rotation is not held to that criterion. Accord-

ing to Guertin and Bailey (1970), with the use of real data, true simple

structure must provide for correlated factors so an orthogonal represen-

tation of factor space is unsatisfactory (p. 100). They recommend the

use of the oblique rotation procedures and if that results in factors

that are only slightly correlated, then an orthogonal rotation may be

performed (p. 101). It is their opinion that it is necessary to use

oblique rotation procedures to properly represent underlying dimensions

or factors of a set of variables (p. 89).
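
To make the orthogonal case concrete, the following sketch implements a basic varimax rotation in NumPy. It is only an illustration of one widely used orthogonal criterion (varimax, which appears again in the methodology of this study); the study itself relied on packaged SAS and BMDP routines, and loadings is assumed to be a variables-by-factors array of unrotated loadings.

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Orthogonal varimax rotation of a variables-by-factors loading matrix
    A = np.asarray(loadings, dtype=float)
    n, m = A.shape
    T = np.eye(m)                      # accumulated orthogonal rotation
    obj_old = 0.0
    for _ in range(max_iter):
        L = A @ T
        # Gradient of the varimax criterion
        B = A.T @ (L ** 3 - (gamma / n) * L @ np.diag((L ** 2).sum(axis=0)))
        U, S, Vt = np.linalg.svd(B)
        T = U @ Vt                     # nearest orthogonal rotation to the gradient
        obj_new = S.sum()
        if obj_new < obj_old * (1.0 + tol):
            break
        obj_old = obj_new
    return A @ T                       # rotated loadings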

The utilization of rotation to identify simple structure completes

step four of the factor-analysis process as outlined by Fox (1969, p.

217). The resulting matrix is the factor pattern and the values forming this matrix are called the factor loadings (Fox, 1969, p. 217; Harman,

1976, p. 15). The loadings have the same characteristics as correla-

tion coefficients in that they are two-digit decimal numbers in the

range of +1.00 to -1.00 through a midpoint of zero. A variable can have

a positive or negative loading on a factor and the sign indicates

whether the factor operates to raise or lower the value of that particu-

lar variable (Fox, 1969, pp. 217-218). The magnitude of the loading in-

dicates the importance of the factor on each variable (p. 218).

The fifth and final step in the factor-analysis process as outlined

by Fox (1969) is for the researcher to label the factors (p. 218). Gen-

erally, this involves determining the variables that have relatively

high loadings on a factor and then abstracting a term or concept that

reflects the content of these variables (p. 218; see also Guertin &

Bailey, 1970, p. 87).

This description of factor analysis has presented only the salient

features of the process related to this study. A thorough discussion

of factor analysis may be found in Harman (1976). For the less mathe-

matically inclined person, Guertin and Bailey (1970) present an excel-

lent description of factor analysis.















CHAPTER III
METHODOLOGY


The Problem

The problem in this study was the identification of any underlying

dimensions within the multiple quality indicators rated by administra-

tors in Florida public community/junior colleges as highly useful in

making program quality-evaluation decisions. The research questions

were: (1) What is the "best" factor structure for the usefulness rat-

ings? (2) For the identified "best" factor structure, are there signif-

icant differences in the mean factor scores between classifications of

respondents by program area and between classifications of respondents

by administrative area?

Description of Data Used

The data used in this study were generated in the IRC project

(Steuart & Rathburn, 1982). A full description of the methodology used

in that project is in Appendix B.

The survey population consisted of all administrators in Florida

public community/junior colleges who were classified by their institu-

tions as executive, administrative, or managerial personnel under part

three of the "Personnel and Salary Report (SA-1)" as defined in the

Community College Management Information System Procedures Manual of the

State of Florida (Division of Community Colleges, 1980, pp. 10.1-10.2).

There were 631 administrators identified and 450 respondents represent-

ing 24 of Florida's 28 public community/junior colleges for a response

rate of 71.3% (Steuart & Rathburn, 1982, p. 49).









The responding administrators rated 434 program characteristics,

contained in a survey questionnaire (Appendix C), for degree of useful-

ness in program quality-evaluation decision making. The rating scale

was



1 = ESSENTIAL ("I do not see how I could make a judgment about
the quality of a program without considering this charac-
teristic.")

2 = VERY USEFUL ("I would feel hindered in making a judgment
about the quality of a program without considering this
characteristic, but I would make a judgment without it.")

3 = SOME USEFULNESS ("Although I would like to consider this
characteristic in making a judgment about the quality of
a program, I would not feel hindered in making a judgment
without it.")

4 = LITTLE OR NO USEFULNESS ("I probably would not consider
this characteristic in arriving at a judgment of the
quality of a program.") (Steuart & Rathburn, p. 144)

Also, any program characteristics that were considered "not applicable"

by the raters were rated with a "4" (Steuart & Rathburn, p. 144).

Each respondent was assigned a "position code" (p. 46) based upon a

self-reported position title on each questionnaire. The position codes,

a description of the position titles associated with each code, and fre-

quencies of respondents for each code are reported in Appendix D.

The program areas and administrative areas used to classify the re-

sponding administrators were defined as follows:

Program Areas
Advanced and Professional Program Area--commonly referred to as
university parallel, the first two years of a baccalaureate pro-
gram.
Occupational Program Area--or vocational-technical education,
terminal certificate or degree programs preparing students for
employment in a specific trade or field.
Community Instructional Services Program Area--programs of
short, credit or noncredit classes designed to provide enrich-
ment for students.









Developmental Program Area--or compensatory education, designed
to assist students in improving deficient basic skills necessary
for program-required work.
Student Services Program Area--various auxiliary services pro-
vided to students facilitating their progress through one of
the program areas including such services as counseling, student
activities, admissions, financial aid, etc.
Administrative Areas
General Administration--respondents with responsibilities of a
general nature in the operation of the college's programs or
services.
Academic Affairs--respondents with responsibilities of adminis-
tering one or more of the college's academic programs.
Student Affairs--respondents with responsibilities of adminis-
tering one or more of the college's student services programs.
Community Instructional Services--respondents with responsibil-
ities of administering the college's adult and continuing edu-
cation or community instructional services programs.
Business Affairs--respondents associated with the operation of
the business offices (budget, accounting, personnel, etc.) of
the college.
President--the chief executive officer of the college. (Steuart
& Rathburn, 1982, Appendix A)

Based upon their position titles, only respondents who were perceived as

having major responsibility in one of the five program areas were in-

cluded in the analysis by program areas. For example, presidents, vice

presidents, research and planning directors, and other administrators

with responsibilities across program areas were not included in the

analysis by program areas. All respondents were included in the anal-

ysis by administrative areas. Operational definitions for these classi-

fications are given in Appendix A.

Mean ratings were calculated for each program characteristic in the

questionnaire. Using these means, ranks were calculated for the program

characteristics based upon the responses of all respondents (N = 450).

When the ranks for two or more program characteristics were tied, the

tied values received the mean of the ranks that would have been assigned

had the ranks not tied (Steuart & Rathburn, 1982, p. 54).
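
For illustration, average ranks for tied mean ratings can be computed with scipy.stats.rankdata; the mean-rating values below are illustrative only, not the study's actual means.

import numpy as np
from scipy.stats import rankdata

mean_ratings = np.array([1.38, 1.52, 1.52, 1.60, 2.05])   # illustrative values only
ranks = rankdata(mean_ratings, method="average")           # tied values share the mean of their ranks
print(ranks)                                                # [1.  2.5 2.5 4.  5. ]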









Only those program characteristics that were in the top quarter of

the ranked mean ratings were discussed in the presentation of results

for all respondents (N = 450) in the IRC report (Steuart & Rathburn,

1982, p. 54). All 108 program characteristics in the top quarter had a

mean rating on the "essential" side of the rating scale (p. 62). The

mean ratings of these 108 program characteristics ranged from 1.38 to

2.05 (p. 54). The analyses in this study included only these 108 pro-

gram characteristics. The means for these program characteristics are

reported in Appendix E.

Analysis of the Data

Research Question One

To discover the best factor structures for the usefulness ratings,

two sets of data were used for analysis. An analysis was performed

based upon those respondents who rated all 108 program characteristics

(i.e., respondents with missing data were excluded). There were 315

such cases. The same analysis was performed using the ratings of all

450 respondents by changing any missing ratings for an item to the mean

ratings for respondents rating that item. The use of all 450 respond-

ents was desirable so that all respondents could be included in the com-

parisons of factor scores between program areas and between administra-

tive areas (research question two). The following procedures for obtain-

ing the best factor structure were performed on each of these sets of

data and the results compared through use of the coefficient of congru-

ence for matching factors, inspection of the difference in the root-

mean-square values, and the criteria for simple structure (Guertin &

Bailey, 1970, p. 99; Harman, 1976, pp. 343-344).
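
As a sketch of the item-mean substitution used for the second data set (written in Python with NumPy purely for illustration; the original analysis was carried out with BMDP and SAS), missing ratings coded as NaN in a respondents-by-items array are replaced by the mean of the observed ratings for that item:

import numpy as np

def impute_item_means(ratings):
    # ratings: respondents-by-items array with missing values coded as NaN
    X = np.asarray(ratings, dtype=float).copy()
    item_means = np.nanmean(X, axis=0)          # mean of observed ratings per item
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = item_means[cols]            # substitute the item mean
    return X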









The first step in the analysis was the production of the correlation

matrices representing the correlations between the ratings of all possi-

ble pairs of the 108 program characteristics. These correlation ma-

trices constituted the basis for what has been defined as an R analysis

(Cattell, 1950, p. 28). An R analysis consists of looking at the inter-

relationships of variables (program characteristics) rather than cases

(respondents)(Cattell, 1950, pp. 30-31). The correlation coefficients

represented the degree of similarity in the ratings by the respondents

of any pair of program characteristics.
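
A minimal sketch of this first step follows; the ratings array is randomly generated here only as a stand-in for the actual 450-by-108 data matrix.

import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(450, 108)).astype(float)   # stand-in for the usefulness ratings

# An R-type analysis correlates variables (program characteristics), not respondents,
# so the columns are treated as the variables when forming the 108 x 108 matrix.
R = np.corrcoef(ratings, rowvar=False)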

The correlation matrices were factor analyzed using the principal

axes method with iterations. It has been described as the most widely

used technique in determining the initial principal axes (Guertin &

Bailey, 1970, p. 62; Harman, 1976, p. 133). Following Guertin and

Bailey's (1970, p. 101) suggestion, the principal axes matrices were

submitted initially to an oblique rotation to determine whether the fac-

tors were essentially uncorrelated. The direct oblimin rotation proce-

dure (Jennrich & Sampson, 1966) was used with gamma equal to zero. Pro-

gram P4M was used in the BMDP Biomedical Computer Programs P-Series 1979

(Dixon & Brown, 1979). The squared multiple correlations were used as

the initial estimates of the communalities (Guertin & Bailey, 1970, pp.

147, 163). The number of factors to carry into successive rotations was

determined by inspecting the results for decrements in the latent roots,

the cumulative percentages of common variance for which successive fac-

tors accounted, and the criteria for simple structure (Guertin & Bailey,

1970, pp. 115-120). Since the factors proved to be essentially uncorre-

lated, the principal axes matrices were then submitted to an orthogonal

rotation. The varimax method for orthogonal rotations was used since









there appeared to be general agreement that this method was preferred

with regard to giving the closest approximation to simple structure

(Guertin & Bailey, 1970, pp. 98-99; Harman, 1976, Chapter 14). Again

the number of factors to carry into successive rotations was determined

by inspecting the results for decrements in the latent roots (eigenval-

ues), the size of the latent roots, the cumulative percentages of common

variance for which successive factors accounted, and the criteria for

simple structure (Guertin & Bailey, 1970, pp. 115-120). The factor pro-

cedure in the SAS computerized package was used for the orthogonal rota-

tions (SAS Institute, Inc., 1979, pp. 203-210).
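
As a minimal sketch of the extraction and orthogonal rotation steps just described
(it is not the BMDP P4M or SAS FACTOR code actually used, and the tolerances are
illustrative), an iterated principal axes solution with squared multiple correlations
as initial communality estimates, followed by a varimax rotation, can be written as:

    import numpy as np

    def smc(R):
        # Squared multiple correlations, SMC_i = 1 - 1 / (R^-1)_ii,
        # used as the initial communality estimates.
        return 1.0 - 1.0 / np.diag(np.linalg.inv(R))

    def principal_axes(R, k, max_iter=50, tol=1e-4):
        # Iterated principal axis factoring of the correlation matrix R
        # for k factors.
        h2 = smc(R)
        for _ in range(max_iter):
            Rr = R.copy()
            np.fill_diagonal(Rr, h2)              # reduced correlation matrix
            vals, vecs = np.linalg.eigh(Rr)
            order = np.argsort(vals)[::-1]
            vals, vecs = vals[order], vecs[:, order]
            L = vecs[:, :k] * np.sqrt(np.clip(vals[:k], 0.0, None))
            new_h2 = (L ** 2).sum(axis=1)         # updated communalities
            if np.abs(new_h2 - h2).max() < tol:
                h2 = new_h2
                break
            h2 = new_h2
        return L, h2, vals

    def varimax(L, max_iter=100, tol=1e-6):
        # Kaiser varimax rotation of an orthogonal loading matrix.
        p, k = L.shape
        T = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            Lr = L @ T
            B = L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
            u, s, vt = np.linalg.svd(B)
            T = u @ vt
            if s.sum() < d * (1 + tol):
                break
            d = s.sum()
        return L @ T

    # With R the 108 x 108 correlation matrix described above:
    #     L0, h2, roots = principal_axes(R, k=10)
    #     L_rotated = varimax(L0)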

The resulting factor solutions from both sets of respondents (N =

315 and N = 450) were compared for congruence using the coefficient of

congruence (Harman, 1976, pp. 343-346). If the coefficient of congru-

ence between any pair of factors was .90 or greater, the factors were

considered congruent (Mulaik, 1972, p. 355). Since the factor structures

were congruent, the factor structure based upon the set of 450 respond-

ents (with missing values set equal to the mean value for that variable)

was selected as the best representation of the underlying dimensions of

the 108 indicators of quality.
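
The matching criterion can be stated concisely. The following function is a sketch;
the vectors a and b stand for corresponding columns of the two loading matrices:

    import numpy as np

    def coefficient_of_congruence(a, b):
        # Tucker's coefficient of congruence:
        #   phi = sum(a_i * b_i) / sqrt(sum(a_i^2) * sum(b_i^2))
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        return (a @ b) / np.sqrt((a @ a) * (b @ b))

    # Factors from the N = 315 and N = 450 solutions were treated as
    # congruent when this coefficient was .90 or greater.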

The loadings of the variables on each factor in this factor struc-

ture were inspected. Any variable having a loading of .50 or greater

was considered in determining the meaning of a factor (Guertin & Bailey,

1970, pp. 78, 81). Based upon the nature of the program characteristics

with a .50 or greater loading, each factor was described. With the de-

scription of the factor structure, the methodology involved in the first

research question was completed.









Research Question Two

For the second research question, the best factor structure was used,

as determined by the methodology for the first research question, to cal-

culate factor scores for the respondents. The regression method was used

for the factor score computations (SAS Institute, Inc., 1979, p. 204).

The score procedure in the SAS computerized package was used (SAS Insti-

tute, Inc., 1979, pp. 371-372). Mean factor scores were determined for

the respondents classified by the described program and administrative

areas.
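
In present-day notation the regression (Thurstone) estimator for an orthogonal
solution may be sketched as follows; this is the textbook form of the method and may
differ in minor computational details from the SAS procedures cited:

    import numpy as np

    def regression_factor_scores(Z, R, L):
        # Z: standardized ratings (respondents x variables)
        # R: correlation matrix of the variables
        # L: rotated factor loadings
        # Scoring coefficients B satisfy R B = L; factor scores are F = Z B.
        B = np.linalg.solve(R, L)     # avoids forming R^-1 explicitly
        return Z @ B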

The differences in mean factor scores between the program areas and

between the administrative areas were tested for significance using the

t statistic at .10 level of significance. Since the variances of the

factor scores for some of the program areas and some of the administra-

tive areas were significantly unequal, as tested by use of the F statis-

tic at the .05 level of significance, it was inappropriate to perform

an analysis of variance prior to testing for significant differences be-

tween mean factor scores. Also, since the likelihood of a Type I error

increases as the number of contrasts tested increases, the Bonferroni

correction for the t statistic was used (Myers, 1979, pp. 298-300).

Essentially, this correction results in rejection of the null hypothesis

(i.e., there is no significant difference in the means) when the obtained

t exceeds the value of t in the standard t table at a level of signifi-

cance equal to the selected level of significance for the comparisons

(.10 in this study) divided by the number of comparisons. Since there

were 10 comparisons between the program areas and 15 comparisons between

the administrative areas, the obtained t for these comparisons had to

exceed the value in the t table at .01 (.10 divided by 10) and .007 (.10








divided by 15) levels of significance, respectively, for rejection of

the null hypothesis. Where the variances were significantly different

for the factor scores being compared, the t value was calculated on the

assumption of unequal variances (SAS Institute, Inc., 1979, p. 425).

The t-test procedure in the SAS computerized package was used (SAS In-

stitute, Inc., 1979, pp. 425-426).
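
The comparison logic can be sketched with modern tools (this is not the SAS code used
in the study; a and b stand for the factor scores of two groups, and m is the number
of comparisons in the family):

    from scipy import stats

    def bonferroni_welch(a, b, m, family_alpha=0.10):
        # Welch's t test makes no equal-variance assumption; the contrast is
        # judged at the Bonferroni-adjusted level family_alpha / m, e.g.
        # .10 / 10 = .01 for the program areas and .10 / 15 (about .007)
        # for the administrative areas.
        t, p = stats.ttest_ind(a, b, equal_var=False)
        return t, p, p < family_alpha / m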

Using the results of these analyses, guidelines were formulated for

organizing the multiple indicators of quality into a format useful to

administrators in Florida public community/junior colleges in making

quality-evaluation decisions about programs offered by their colleges.















CHAPTER IV
RESULTS AND DISCUSSION


There were 450 administrators representing 24 of Florida's 28 public

community colleges who rated 434 program characteristics, contained in

a survey questionnaire (Appendix C), for degree of usefulness in program

quality-evaluation decision making. The rating scale ranged from 1

(essential) to 4 (little or no usefulness). Only the 108 program char-

acteristics in the top quarter of ranked mean ratings were included in

the factor analysis. Based upon all 450 respondents, the mean ratings

for each of these 108 program characteristics are presented in Appendix

E. All of the 108 program characteristics in the top quarter of ranked

mean ratings had a mean rating on the "essential" side of the rating

scale. The mean ratings of the 108 program characteristics ranged from

1.38 to 2.05. The mean ratings of each of the 108 program characteris-

tics, based upon the 315 respondents who rated all of them, are reported

in Appendix E. These mean ratings ranged from 1.36 to 2.13.

The Pearson product-moment correlation coefficients for the inter-

correlations of the 108 program characteristics, based upon the ratings

by all respondents (N = 450) with missing values for any program charac-

teristic set equal to the mean rating for that program characteristic,

are presented in Appendix F. The Pearson product-moment correlation co-

efficients for the intercorrelations of the 108 program characteristics,

based upon the ratings by respondents with no missing responses (N = 315),









are presented in Appendix G. These two sets of correlation coefficients

were used in the factor analysis.

Factor Analysis Results

The iterated principal axes factor-analytic method as applied to

both sets of correlation coefficients resulted in a solution with 21

principal axes. The principal axes solution based upon N = 450 is pre-

sented in Appendix H with the final communality estimates and eigenval-

ues. The principal axes solution based upon N = 315 is presented in

Appendix I with the final communality estimates and eigenvalues.

For the principal axes solution based upon N = 315, the latent roots

(eigenvalues), differences in the latent roots, cumulative variance for

which successive axes accounted, and the percentage of common variance

for which successive axes accounted are presented in Table 1. These

were the values that were examined to determine the number of factors to

carry into the initial rotations. In factor analyses, the latent roots

generally fall off rapidly at first because systematic common variance

is being extracted. The roots start decreasing almost linearly as

mostly error variance is being extracted. It is generally accepted that

one criterion for the cutoff point for the number of factors to rotate

comes just before this linear descent (Guertin & Bailey, 1970, p. 117).

Although the differences in the latent roots decreased greatly after fac-

tor 5, they did not become linear until after factor 10 (Table 1). Us-

ing the differences in the latent roots, the rotation of 10 factors was

indicated. The rotation of 10 factors accounted for 80.7% of the common

variance compared to 64.5% accounted for by the rotation of five factors.

Following the suggestion of Guertin and Bailey (1970, p. 117), one more

and one less than the indicated number of factors were rotated with the










results compared according to the criteria for simple structure. According to
Guertin and Bailey (1970), "the factors are best located when the produced
structure is as simple as possible" (p. 98). The three general criteria for
simple structure are: (1) the factors should have the largest possible number
of loadings approaching zero; (2) the variables should have the largest
possible number of loadings on the factors approaching zero; and (3) every
pair of factors should have the largest possible number of loadings
approaching zero on one factor but not the other (Guertin & Bailey, 1970,
p. 99). For all program characteristics that had a factor loading of .50 or
greater, the loadings that resulted from rotating 9, 10, and 11 factors are
presented in Table 2. The complete factor structures for the three rotations
are presented in Appendix J.

Table 1

Variance Accounted for by Successive
Principal Axes for N=315

Principal                              Cumulative   Percentage of
  Axes      Eigenvalues   Differences   Variance    Common Variance

    1         26.616          --         26.616          36.3
    2          7.688        18.928       34.304          46.8
    3          5.985         1.703       40.289          54.9
    4          3.902         2.083       44.191          60.2
    5          3.112          .790       47.303          64.5
    6          2.995          .117       50.298          68.5
    7          2.642          .353       52.940          72.1
    8          2.332          .310       55.272          75.3
    9          2.113          .219       57.385          78.2
   10          1.863          .250       59.248          80.7
   11          1.822          .041       61.070          83.2
   12          1.704          .118       62.774          85.5
   13          1.553          .151       64.327          87.7
   14          1.469          .084       65.796          89.7
   15          1.277          .192       67.073          91.4
   16          1.208          .069       68.281          93.1
   17          1.155          .053       69.436          94.6
   18          1.082          .073       70.518          96.1
   19          1.016          .066       71.534          97.5
   20           .949          .067       72.483          98.8
   21           .894          .055       73.377         100.0
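
The decrement inspection described above can be reproduced directly from the latent
roots in Table 1; the short sketch below (Python, illustrative only) computes the
successive drops:

    import numpy as np

    # Latent roots (eigenvalues) from Table 1, N = 315.
    roots = np.array([26.616, 7.688, 5.985, 3.902, 3.112, 2.995, 2.642,
                      2.332, 2.113, 1.863, 1.822, 1.704, 1.553, 1.469,
                      1.277, 1.208, 1.155, 1.082, 1.016, 0.949, 0.894])
    decrements = -np.diff(roots)
    # The drops shrink sharply over the first several roots and become nearly
    # constant beyond the tenth, the pattern used to carry ten factors into
    # the initial rotations.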








Table 2

Program Characteristics With Factor Loadings
of .50 or Greater in the Three Rotations of
the Principal Axes Solution Based Upon N=315
Rotations
Factors Characteristics 9 10 11
7 .52 .55 .54
15 .56 .58 .88
41 .66 .67 .66
89 .65 .66 .65
69 .74 .75 .74
63 .73 .74 .74
101 .63 .63 .63
95 .62 .62 .63
17 .56 .58 .57
31 .60 .61 .61
48 .69 .69 .68
1 75 .79 .78 .78
79 .76 .76 .76
96 .71 .70 .70
99 .67 .67 .67
1 .55 .57 .56
6 .54 .55 .56
13 .65 .65 .65
44 .70 .69 .69
72 .74 .73 .73
29 .76 .75 .76
28 .74 .73 .73
73 .65 .64 .64
51 .70 .67 .68
37 .59 .58 .59
-----------------------------------------------------------------
25 .77 .77 .76
16 .81 .80 .80
2 20 .82 .81 .81
60 .60 .61 .61
39 .81 .81 .81
36 .84 .84 .84
50 .84 .84 .84
-----------------------------------------------------------------
87 .59 .58 .59
70 .72 .73 .73
56 .75 .77 .77
49 .73 .75 .75
3 46 .54 .56 .57
30 .60 .63 .63
80 .61 .65 .64
27 .70 .72 .72
22 .75 .78 .77
26 .73 .76 .75
.....................---------------------------------------------------







76 .71 .71 .72
78 .54 .55 .52
102 .54 .53 .54
74 .70 .70 .70
4 77 .57 .57 .57
98 .70 .70 .71
82 .58 .60 .57
92 .51 .52 .51
86 .66 .66 .66

59 .38a .52 .51
81 .43a .57 .54
66 .45a .59 .60
5 2 .48a .51 .52
19 .59 .60 .62
42 .60 .61 .61
34 .61 .62 .66
100 .57 .57 .60
-88 -- 2 74 ------- g2-52----- -----
4 .57 .57 .56
83 .82 .51 .50
6 40 .76 .76 .77
12 .72 .73 .75
35 .72 .73 .75
11 .50 .51 .49a
-----------------------------------------------------------------
90 .64 .68 .66
103 .60 .58 .60
7 33 .58 .55 .57
24 .72 .74 .74
32 .72 .73 .72
58 .64 .61 .63
9 .62 .56 .59
-----------------------------------------------------------------
18 .74 .73 .73
23 .72 .72 .74
8 65 .68 .68 .69
38 .69 .69 .69
53 .62 .62 .62
105 .49a .49a .50
-------- ------- ------ ---- ----------
45 .51 .60 .59
5 .46a .45a .46a
9 64 .62 .62 .63
84 .61 .66 .66
14 .65 .59 .59
8 .59 .51 .51
62 .50 .45a .45a
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .










10 None
-------------- --........................--------- -
11 91 .62
104 .68
aThe factor loading is included for comparison with the factor struc-
ture based upon N=450.


The most evident feature of the data represented in Table 2 was that,

regardless of the rotation examined, there was a relatively stable nine-

factor structure. For factors 1, 2, 3, 4, and 7, loadings of the varia-

bles on the factors were very similar within the three rotations. For

factor 5, the rotations of 10 or 11 factors produced a more clearly de-

fined structure. Although not evident in Table 2, from a comparison of

factors 5 and 9 for the three rotations in Appendix J, the 10 factor ro-

tation most closely approximated the criteria for simple structure be-

tween factors 5 and 9. Also, for factor 6 the 10-factor rotation pro-

duced the clearest structure. For factor 8, from Table 2 the rotation

of 11 factors was indicated as producing the clearest structure, but

from comparison of the loadings of other variables on factors 2, 5, and

8 for the three rotations presented in Appendix J, the 10-factor rota-

tion most closely approximated the criteria for simple structure. Also,

for factor 9, from a comparison of the loadings of other variables on

factors 5, 6, and 9 in the three rotations in Appendix J, the 10-factor

rotation most closely approximated the criteria for simple structure.

No variables had loadings of .50 or greater on factor 10 for either the

10- or 11-factor rotations. Three variables had a .50 or greater load-

ing on factor 11 for the 11-factor rotation.









For the three rotations, the rotation of 10 factors produced the

clearest common-factor structure. The rotation of 11 factors resulted

in the same nine interpretable factors as the 10-factor rotation but

with a slightly less clear structure. A trial rotation of 12 factors

confirmed this analysis. The 12-factor rotation resulted in the fis-

sion of factors 1 and 5 into more specific factors. Therefore, the 10-

factor rotation of the principal axes solution for N = 315 was chosen

as the rotation most closely approximating the criteria for simple

structure and producing the clearest picture of the common-factor struc-

ture for the ratings of the 108 program characteristics.

For the principal axes solution based upon N = 450, the latent roots,

differences in the latent roots, cumulative variance for which success-

ive axes accounted, and the percentage of common variance for which suc-

cessive axes accounted are presented in Table 3. Using the differences

in the latent roots, the rotation of 10, 11, and 12 factors was indi-

cated. For all the program characteristics that had factor loadings of

.50 or greater, the loadings that resulted from the three rotations are

presented in Table 4. The complete factor structures for the three ro-

tations are presented in Appendix K.

As in Table 2, the most evident feature of the data presented in

Table 4 was that, regardless of the rotation examined, there was a rela-

tively stable nine-factor structure. For factors 1, 2, 3, 7, 8, and 9,

the variables with loadings of .50 or greater on the factors were the

same for the three rotations. The variables loading .50 or greater on

factors 4 and 5 were the same for the three rotations with the exception

of one variable that loaded slightly less than .50 (.49) on factor 4 in

the 11-factor rotation and one variable that loaded less than .50 (.41)













on factor 5 in the 12-factor rotation. The variables with loadings of .50 or
greater on factor 6 were the same with the exception of one variable that
loaded .50 in the 11-factor rotation but less than .50 in the other rotations.
For factor 10, the 10-factor rotation produced no loadings of .50 or greater.
The 11-factor rotation had two variables loading above .50 on factor 10;
these two variables had higher loadings in the 12-factor rotation. Also, one
additional variable had a loading on factor 10 of .50 or greater in the
12-factor rotation. No variable had loadings of .50 or greater on factor 11
in the 11-factor and 12-factor rotations. Two variables had loadings of at
least .50 on factor 12 in the 12-factor rotation.

Table 3

Variance Accounted for by Successive
Principal Axes for N=450

Principal                              Cumulative    Percent of
  Axes      Eigenvalues   Differences   Variance    Common Variance

    1         26.356          --         26.356          36.7
    2          7.731        18.625       34.087          47.5
    3          5.522         2.209       39.609          54.4
    4          3.477         2.045       43.086          60.0
    5          3.038          .439       46.124          64.2
    6          2.946          .092       49.070          68.3
    7          2.561          .385       51.631          71.9
    8          2.303          .258       53.934          75.1
    9          2.246          .057       56.180          78.2
   10          1.922          .324       58.102          80.9
   11          1.718          .204       59.820          83.3
   12          1.684          .034       61.504          85.6
   13          1.514          .170       63.018          87.7
   14          1.396          .118       64.414          89.7
   15          1.314          .082       65.728          91.5
   16          1.123          .191       66.851          93.1
   17          1.101          .022       67.952          94.6
   18          1.050          .051       69.002          96.1
   19           .990          .060       69.992          97.5
   20           .920          .070       70.912          98.7
   21           .907          .013       71.819         100.0








Table 4

Program Characteristics With Factor Loadings of .50 or Greater
in the Three Rotations of the Principal Axes Solution Based Upon N=450

Rotations
Factors Characteristics 10 11 12

7 .51 .52 .53
15 .56 .57 .58
41 .65 .65 .65
89 .64 .64 .64
69 .73 .73 .73
63 .74 .75 .75
101 .61 .59 .58
95 .59 .58 .58
17 .58 .58 .59
31 .63 .64 .65
48 .71 .70 .70
1 75 .80 .79 .79
79 .77 .77 .78
96 .69 .67 .66
99 .64 .63 .63
1 .58 .59 .59
6 .58 .60 .61
13 .69 .70 .69
44 .71 .71 .70
72 .74 .74 .73
29 .77 .77 .77
28 .75 .76 .76
73 .64 .64 .63
51 .66 .65 .63
37 .56 .56 .55
-----------------------------------------------------------------
25 .76 .77 .79
16 .79 .80 .82
20 .79 .80 .82
2 60 .58 .58 .56
39 .81 .80 .78
36 .81 .80 .78
50 .80 .80 .78
-----------------------------------------------------------------
87 .56 .55 .56
70 .73 .73 .73
56 .75 .76 .76
49 .72 .72 .73
3 46 .55 .57 .56
30 .58 .59 .66
80 .60 .61 .63
27 .72 .73 .74
22 .76 .78 .79
26 .74 .75 .76
-----------------------------------------------------------------










76 .72 .72 .71
78 .50 .49a .51
102 .56 .56 .55
74 .64 .65 .64
4 77 .57 .60 .61
98 .73 .73 .73
82 .55 .54 .57
92 .51 .53 .54
86 .61 .62 .62
----------------------------------------------
59 .37a .37a .26a
81 .41a .42a .32a
66 .50 .50 .41a
5 2 .59 .59 .61
19 .68 .68 .69
42 .66 .66 .66
34 .71 .71 .72
100 .65 .64 .65
88 .49a .50 .47a
4 .59 .60 .54
83 .53 .55 .51
6 40 .77 .78 .82
12 .73 .74 .80
35 .73 .74 .79
11 .48a .49 .46
85 .62 .60 .65
90 .61 .60 .64
103 .56 .61 .59
7 33 .53 .55 .53
24 .71 .70 .72
32 .71 .70 .72
58 .63 .66 .63
9 .61 .64 .59
52 .72 .72 .72
18 .75 .75 .75
23 .77 .77 .77
8 65 .73 .73 .73
38 .72 .72 .72
53 .62 .61 .61
105 .47a .47a .47a
54 .55 .57 .60
45 .58 .60 .64
5 .51 .52 .51
9 64 .64 .64 .62
84 .64 .64 .66
14 .64 .62 .63
8 .61 .60 .60
62 .50 .50 .50
---------------------------------------------------










10 .28a .46a .59a
10 91 .36a .60 .70
104 .41a .67 .79
- ---------------------------------------------------
-----------------------------------------------------------------
94 .51
12 54 .53

aThe factor loading is included for comparison with the factor structure
based upon N=315.
bThis factor most closely corresponds with factor 11 in the 11-factor
rotation based upon N=315.
cThis factor most closely corresponds with factor 10 in the 10-factor
rotation based upon N=315.


Although not entirely evident from the data presented in Table 4, it

was evident from the entire factor structure presented in Appendix K

that the rotation of 10 factors produced a factor structure more closely

approximating the criteria for simple structure. The 11-factor rotation

resulted in the beginning of fission for factor 1 (factor 11). The 12-

factor rotation clarified the structure for the specific factor (factor

11), continued the fission of factor 1, and resulted in the fission of

factor 5, producing another specific factor (factor 12). Therefore, for

the analysis based upon N= 450, the factor structure that resulted from

the rotation of 10 factors was selected as the best representation of

the common-factor structure for the ratings of the 108 program character-

istics based upon N = 450.

To determine the intercorrelation of the factors for both factor

structures, the two principal axes solutions (N = 315 and N = 450) were

submitted to an oblique rotation using the direct oblimin rotation pro-

cedure as described by Jennrich and Sampson (1966). The correlation









coefficients for the intercorrelation of the factors for both N = 315

and N = 450 are presented in Table 5. Since the factors were essen-

tially uncorrelated with no correlation coefficient exceeding .42, then

the orthogonal rotation was accepted as producing the best solution for

the common-factor structure.

The next task was to determine whether the 10-factor structure from

the analysis based upon N = 315 was congruent with the 10-factor struc-

ture from the analysis based upon N = 450. The coefficients of congru-

ence between comparable factors in the two factor structures are pre-

sented in Table 6. Since all the coefficients were at least .90, the


Table 5

Intercorrelations of the Factors for the 10-Factor Rotation
of the Principal Axes Solutions for N=315 and N=450


Factors
1 2 3 4 5 6 7 8 9 10

1 1.00
2 .37 1.00
3 .17 .14 1.00
4 .28 .41 .06 1.00
N=315 5 .18 .24 .13 .24 1.00
6 .16 .33 .19 .24 .24 1.00
7 .20 .25 .30 .17 .15 .12 1.00
8 .23 .25 .18 .21 .23 .25 .21 1.00
9 .05 .18 .22 .11 .15 .13 .27 .23 1.00
10 .05 .08 .08 -.02 .04 .02 .03 .07 .01 1.00

1 1.00
2 .19 1.00
3 .16 .28 1.00
4 .18 .21 .18 1.00
5 .13 .27 .22 .18 1.00
N=450 6 .32 .14 .09 .27 .14 1.00
7 .22 .30 .23 .28 .29 .23 1.00
8 .38 .23 .13 .28 .24 .42 .29 1.00
9 .15 .07 .15 .22 .14 .24 .22 .28 1.00
10 .03 .04 .05 .03 -.02 -.01 .01 .02 .05 1.00









Table 6

Coefficients of Congruence Between Comparable
Factors for the 10-Factor Structures for N=315 and N=450


Factors 1 2 3 4 5 6 7 8 9 10

Coefficients .98 .97 .97 .96 .94 .96 .96 .95 .95 .93


factor structures were considered congruent. Therefore, the factor

structure based upon N = 450 was taken as the factor structure best rep-

resenting the common factor space for the ratings of the 108 program

characteristics.

Interpretation of the Factors

In the selected rotated 10-factor structure, there were nine inter-

pretable factors. The 10-factor structure represented the common-factor

space of the ratings of the 108 program characteristics. The nine inter-

pretable factors represented nine common dimensions underlying these rat-

ings. Each of these common dimensions is defined in the following discus-

sion.

The program characteristics with loadings of .50 or greater on fac-

tor 1 are listed in Table 7. These program characteristics concerned

total cost of a program, costs of various aspects of a program, usage of

equipment and space, and number of support staff. In addition to total

cost, other costs included cost of instructional personnel, program ad-

ministration, support services, materials, equipment maintenance, and

space utilized. For the majority of these program characteristics, the

administrators indicated that the information was desired per total pro-

gram, per number of program full-time equivalent (FTE) students, and per

program unduplicated headcount of students.









Table 7

Program Characteristics With .50 or
Greater Loadings on Factor 1


Number Loading Program Characteristics

1 .58 Total cost of a program
7 .51 Total cost of a program per FTE
17 .58 Total cost of program per unduplicated headcount
6 .69 Cost of instructional personnel per total program
15 .56 Cost of instructional personnel per program FTE
31 .63 Cost of instructional personnel per program undupli-
cated headcount
28 .75 Cost of program administration per total program
63 .74 Cost of program administration per program FTE
79 .77 Cost of program administration per program undupli-
cated headcount
29 .77 Cost of support services per total program
69 .73 Cost of support services per program FTE
75 .80 Cost of support services per program unduplicated
headcount
37 .56 Number of support staff per total program
95 .59 Number of support staff per program FTE
99 .64 Number of support staff per program unduplicated head-
count
41 .65 Cost of materials per program FTE
48 .71 Cost of materials per program unduplicated headcount
51 .66 Equipment utilization per total program
101 .61 Equipment utilization per program FTE
96 .69 Equipment utilization per program unduplicated head-
count
44 .71 Cost of equipment maintenance per total program
89 .64 Cost of equipment maintenance per program FTE
73 .64 Space utilization per total program
72 .74 Cost of space utilized per total program



Based upon the content of these program characteristics, factor 1

was interpreted as involving resources used in a program. Fiscal, physi-

cal (equipment and space), and human resources (support staff) were in-

cluded. Factor 1 was identified as one common dimension underlying the

ratings of the 108 program characteristics and was labeled the "Resources

Usage" dimension.









The program characteristics with loadings of .50 or greater on fac-

tor 2 are listed in Table 8. These program characteristics concerned

ratings of program support services and student services by students en-

rolled in a program and students who have completed a program. The rat-

ings of student services included ratings of the usefulness, accessibil-

ity, and ease of use of the services. Based upon the content of these

program characteristics, factor 2 was interpreted as involving student

ratings of support services, including student services. Factor 2 was

identified as another common dimension underlying the ratings of the 108

program characteristics and was labeled the "Student Ratings of Support

Services" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 3 are listed in Table 9. These program characteristics involved

Table 8

Program Characteristics With .50 or
Greater Loadings on Factor 2


Number Loading Program Characteristics

61 .56 Ratings of support services by currently enrolled
students
60 .58 Ratings of support services by program completers
25 .76 Ratings of usefulness of student services by cur-
rently enrolled students
39 .81 Ratings of usefulness of student services by program
completers
16 .79 Ratings of accessibility of student services by cur-
rently enrolled students
36 .81 Ratings of accessibility of student services by pro-
gram completers
20 .79 Ratings of ease of use of student services by cur-
rently enrolled students
50 .80 Ratings of ease of use of student services by program
completers









Table 9

Program Characteristics With .50 or
Greater Loadings on Factor 3


Number Loading Program Characteristics

46 .55 Number or percent of full-time faculty/staff by a
productivity ratio
71 .52 Number or percent of part-time faculty/staff by a
productivity ratio
30 .58 Number or percent of full-time faculty/staff by num-
ber of course hours taught per term
87 .56 Number or percent of part-time faculty/staff by num-
ber of course hours taught per term
27 .72 Number or percent of full-time faculty/staff by num-
ber of student contact hours per term
70 .73 Number or percent of part-time faculty/staff by num-
ber of student contact hours per term
22 .76 Number or percent of full-time faculty/staff by num-
ber of students per term
56 .75 Number or percent of part-time faculty/staff by num-
ber of students per term
26 .74 Number or percent of full-time faculty/staff by
average class size
49 .72 Number or percent of part-time faculty/staff by
average class size
80 .60 Number or percent of full-time faculty/staff by num-
ber of FTE students per term



information about full-time and part-time faculty or staff in a program.

The information included the number or percent of full-time and part-time

faculty or staff by (1) their rating on some productivity ratio, (2) the

number of course hours they taught per term, (3) the number of student

contact hours they had per term, (4) the number of students they had per

term, and (5) their average class size. Additionally, but for full-time

faculty or staff only, the information included the number of FTE students

they taught per term. Based upon the content of these program character-

istics, factor 3 was interpreted as involving information on the produc-

tivity of faculty or staff in a program. Factor 3 was identified as









another common dimension underlying the ratings of the 108 program char-

acteristics and was labeled the "Faculty/Staff Instructional Productiv-

ity" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 4 are listed in Table 10. These program characteristics involved

information about students entering a program and students currently en-

rolled in a program. For both entering and currently enrolled students,

the information included the number or percent of students by major area

of study, by type of handicap, and by types of developmental or remedial

assistance desired. For entering students only, the information included

the number or percent of students by level of previous academic

Table 10

Program Characteristics With .50 or
Greater Loadings on Factor 4


Number Loadings Program Characteristics

102 .56 Number or percent of entering students by level of
previous academic achievement
77 .57 Number or percent of entering students by academic
skills level as measured by local instruments
78 .50 Number or percent of entering students by major area
of study
82 .55 Number or percent of currently enrolled students by
major area of study
76 .72 Number or percent of entering students by type of
handicap
98 .73 Number or percent of currently enrolled students by
type of handicap
74 .64 Number or percent of entering students by types of
developmental or remedial assistance desired
86 .61 Number or percent of currently enrolled students by
types of developmental or remedial assistance
desired
92 .51 Number or percent of currently enrolled students by
number of hours with failing grade









achievement and by academic skills level as measured by local instru-

ments. For currently enrolled students only, the information included

the number or percent of students by number of hours with failing grade.

Based upon the content of these program characteristics, factor 4 was

interpreted as involving the identification of any physical or cognitive

needs of students relevant to their performance in their selected pro-

grams. Factor 4 was identified as another common dimension underlying

the ratings of the 108 program characteristics and was labeled the "Physical
and Academic Skills Needs Assessment of Enrolled Students" dimension.

The program characteristics with a loading of .50 or greater on fac-

tor 5 are listed in Table 11. These program characteristics involved

ratings of various aspects of a program by students who have completed

a program or who are currently enrolled in a program. The aspects of a

program to be rated by program completers included program staff, pro-

gram facilities and equipment, program instructional strategies, program

administration, and program curriculum. Also included were ratings of a


Table 11

Program Characteristics With .50 or
Greater Loadings on Factor 5


Number Loadings Program Characteristics

34 .71 Ratings of program staff by program completers
19 .68 Ratings of program facilities/equipment by program
completers
42 .66 Ratings of program instructional strategies by pro-
gram completers
100 .65 Ratings of program administrators by program com-
pleters
2 .59 Ratings of program curriculum by program completers
66 .50 Ratings of program staff by currently enrolled stu-
dents









program's staff by currently enrolled students. Based upon the content

of these program characteristics, factor 5 was interpreted as involving

student ratings, primarily ratings by program completers, of various as-

pects of a program. Factor 5 was identified as another common dimension

underlying the ratings of the 108 program characteristics and was la-

beled the "Student Ratings of Program" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 6 are listed in Table 12. These program characteristics concerned

information on the quantity of students completing a program and the

average time taken for completion, the number or percent of those com-

pleting a program who take state board or licensure exams, the number

passing those exams, and the type of license, certificate, or registra-

tion received. Based upon the content of these program characteristics,

factor 6 was interpreted as involving measures of the quantitative output

of a program and certain student follow-up information. Factor 6 was

identified as another common dimension underlying the ratings of the 108

Table 12

Program Characteristics With .50 or
Greater Loadings on Factor 6


Number Loadings Program Characteristics

4 .59 Number or percent of students completing a program
83 .53 Number or percent of program completers by average
time taken for completion of a program
40 .77 Number or percent of program completers taking state
board or licensure exams
12 .73 Number or percent of program completers passing state
board or licensure exams
35 .73 Number or percent of program completers by type of
license, certificate, or registration received









program characteristics and was labeled the "Program Student Output"

dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 7 are listed in Table 13. These program characteristics concerned

various attributes of both the full-time and part-time faculty or staff

in a program. The attributes included degrees held, total years taught

or served, years taught or served in a specific program, and type of

certification or rank held. Based upon the content of these program

characteristics, factor 7 was interpreted as involving indicators of the

level of preparedness of faculty or staff serving in a program. Factor

7 was identified as another common dimension underlying the ratings of

the 108 program characteristics and was labeled the "Faculty/Staff Pre-

paredness" dimension.


Table 13

Program Characteristics With .50 or
Greater Loadings on Factor 7


Number Loadings Program Characteristics

9 .61 Number or percent of full-time faculty/staff by de-
grees held
33 .51 Number or percent of part-time faculty/staff by de-
grees held
24 .71 Number or percent of full-time faculty/staff by years
taught or served
85 .62 Number or percent of part-time faculty/staff by years
taught or served
32 .71 Number or percent of full-time faculty/staff by
length of service in a program
90 .61 Number or percent of part-time faculty/staff by
length of service in a program
58 .66 Number or percent of full-time faculty/staff by cer-
tification or rank held
103 .57 Number or percent of part-time faculty/staff by cer-
tification or rank held









The program characteristics with loadings of .50 or greater on fac-

tor 8 are listed in Table 14. These program characteristics involved

ratings of various aspects of a program by a program's faculty or staff.

The aspects of a program to be rated included instructional strategies,

facilities and equipment, staff, curriculum, administration, and support

services. Based upon the content of these program characteristics, fac-

tor 8 was interpreted as involving ratings of a program by a program's

faculty or staff. Factor 8 was identified as another common dimension

underlying the ratings of the 108 program characteristics and was la-

beled the "Faculty/Staff Program Ratings" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 9 are listed in Table 15. These program characteristics included

the number or types of changes in a program as a result of program eval-

uations or accreditation studies; ratings of a program by certification

boards or accreditation agencies; level of demand for a program in the

college's service area, by students, and in the college's state; and

clearly stated objectives for a program. Based upon the content of

Table 14

Program Characteristics With .50 or
Greater Loadings on Factor 8


Number Loadings Program Characteristics

23 .77 Ratings of program instructional strategies by fac-
ulty/staff
18 .75 Ratings of program facilities/equipment by faculty/
staff
65 .73 Ratings of program staff by faculty/staff
52 .72 Ratings of a program curriculum by faculty/staff
38 .72 Ratings of program administration by faculty/staff
53 .62 Ratings of support services by faculty/staff









Table 15

Program Characteristics With .50 or
Greater Loadings on Factor 9


Number Loadings Program Characteristics

64 .64 Number/types of changes as a result of program evalu-
ations
84 .64 Number/types of changes as a result of accreditation
studies
54 .55 Ratings by certification boards
45 .58 Ratings by accreditation agencies
8 .61 Level of demand for program or service in a college's
service area
14 .64 Level of demand for program or service by students
62 .50 Level of demand for program or service in college's
state
5 .51 Clearly stated program objectives



these program characteristics, factor 9 was interpreted as involving the

responsiveness of a program to program evaluations, certification boards,

accreditation agencies, the community it serves, the students it serves,

and the state it serves. Although not an object of a program's respon-

siveness, program objectives clearly related to assessing that respon-

siveness. Factor 9 was identified as another common dimension underly-

ing the ratings of the 108 program characteristics and was labeled the

"Program Responsiveness" dimension.

The factor analysis has resulted in the identification of a 10-fac-

tor structure with nine interpretable factors that remained relatively

stable across several rotations and for the two groups of respondents

(N = 315 and N = 450). The identified factor structure has been inter-

preted as representing the underlying dimensions common to the ratings

of the 108 program characteristics. Using the content of the program

characteristics that loaded .50 or greater on the factors, each of the









nine dimensions has been described and labeled. The labels have been

created to reflect the content of the program characteristics loading

.50 or greater on the factor representing a dimension. The following

labels have been created for the nine dimensions:

Resources Usage
Student Ratings of Support Services
Faculty/Staff Instructional Productivity
Physical and Academic Skills Needs Assessment
of Enrolled Students
Student Ratings of Program
Program Student Output
Faculty/Staff Preparedness
Faculty/Staff Program Ratings
Program Responsiveness.

In accordance with the evaluation theory developed by Stufflebeam et

al. (1971), the program characteristics that have been identified as in-

cluded in these dimensions were delineated in interaction with the ad-

ministrators making program quality-evaluation decisions in Florida pub-

lic community/junior colleges. These program characteristics were rated

by the administrators as the ones most highly useful in making program

quality-evaluation decisions. According to the results of this study,

the data represented by these program characteristics are those data

that should be collected, organized, and analyzed for the purpose of pro-

viding information useful to the administrators in program quality-eval-

uation decision making in Florida public community/junior colleges.

The results of the factor analysis performed in this study have dem-

onstrated that there are nine common dimensions that should be used to

organize those data for presentation of information to administrators in-

volved in program quality-evaluation decision making. As developed in

the theoretical rationale for this study, based on the theory of evalua-

tion developed by Stufflebeam et al. (1971), the items of information









identified reflected those aspects of the aggregate value system of these

administrators that are relevant to program quality-evaluation decision

making and the underlying dimensions of those items reflect the dimen-

sions of the aggregate value system that are relevant to this decision

situation. Therefore, the utilization of the nine identified common di-

mensions to organize the relevant data should result in an information

format that these administrators should find most useful, since the for-

mat should approximate the dimensions of those aspects of the aggregate

value system that are common to these administrators and that are being

used in making program quality-evaluation decisions. Any individual ad-

ministrator should find such a format more or less useful to the degree

that the relevant dimensions of his value system are reflected in the

aggregate value system represented in the nine dimensions.

It should be noted that these nine dimensions are dimensions repre-

senting the parameters of the information an administrator is most

likely to find useful in making program quality-evaluation decisions. It

should be understood that the information these dimensions reflect might

be positively or negatively valued by an administrator and in varying de-

grees in relation to assessing a program. Since quality is a value judg-

ment and not an attribute or characteristic of a program, these nine com-

mon dimensions are the dimensions of an aggregate value system used by ad-

ministrators in making program quality-evaluation decisions. They should

not be interpreted as dimensions of quality.

The identification of these nine dimensions completed the analysis

required for resolving the first aspect of the problem with which this

study was concerned: to determine any underlying dimensions of the mul-

tiple items of information rated as highly useful in program quality-









evaluation decision making by administrators involved in such decision

making in Florida public community/junior colleges. In the next section

of this chapter, the results are presented of a comparison of the mean

factor scores of the administrators classified first by program areas in

relation to which they had major administrative responsibilities and

then by administrative areas as reflected in their position titles. The

following results reflect an attempt to determine any significant differ-

ences between program areas or between administrative areas in emphasis

on any of the nine dimensions in order to refine the description for for-

matting by program or administrative area the information included in

these dimensions.

Factor Score Comparisons

Program Areas

Using the selected factor structure, factor scores were computed for

the 450 respondents using the regression method in the SAS factor proce-

dure and the SAS score procedure. Mean factor scores were calculated

for the respondents grouped according to program area. Included in this

analysis were those respondents whose position title indicated that they

had major responsibility in one of the five program areas common to most

community colleges in Florida: the Advanced and Professional, Occupa-

tional, Developmental, Community Instructional Services, and Student Ser-

vices program areas. Not all the administrators who participated in this

study had major responsibility in a specific program area. The position

codes used to classify the administrators included in each program area

are listed in Appendix A. Position codes, associated titles, and fre-

quency of the position codes are in Appendix D. The program areas, the

number of respondents classified in each program area, and the percentage

of all respondents that this represents are given in Table 16.









Table 16

Number of Respondents Per Program Area and
Corresponding Percentages of All Respondents (N=450)


Program Areas Number of Respondents Percentage of N

Advanced and Professional 65 14.4

Occupational 83 18.4

Developmental 5 1.1

Community Instructional 21 4.7
Services

Student Services 88 19.6

TOTAL 262 58.2
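
The group means underlying the comparisons that follow amount to a simple aggregation
of the factor-score file by program area; a hedged sketch, with a tiny stand-in for
the real 450-respondent file, is:

    import pandas as pd

    # Illustrative stand-in: the actual file had 450 rows, ten factor scores,
    # and a program-area label derived from the position codes in Appendix A.
    scores = pd.DataFrame({
        "program_area": ["Occupational", "Occupational",
                         "Student Services", "Student Services"],
        "F1": [0.21, -0.10, 0.05, -0.30],
        "F2": [0.33, 0.12, -0.48, -0.52],
    })
    by_area = scores.groupby("program_area")[["F1", "F2"]]
    means, sds = by_area.mean(), by_area.std()   # as reported in Table 17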



Although the number of administrators with primary responsibility in

the Developmental Program Area was small (N = 5), they represented five

different colleges. According to the list of administrators with respon-

sibility for compensatory/developmental education in the 1981-82 Direct-

ory of Florida Community Colleges (Division of Community Colleges, 1981a,

p. 71), there were very few position titles reflecting primary responsi-

bility in this program area.

The mean factor scores and standard deviations for the program areas

are presented in Table 17. It should be recalled that all of the program

characteristics included in this study were rated as highly useful in pro-

gram quality-evaluation decision making. Therefore, the factor scores in-

dicated the relative emphasis placed upon the program characteristics with

relatively greater loadings on a factor by the administrators classified

in a program area.

Table 17

Mean Factor Scores and Standard Deviations for the Program Areas

Since the rating scale was 1 (program characteristics









essential to quality-evaluation decision making) to 4 (program charac-

teristics of little or no use in quality-evaluation decision making), a

low factor score indicated that administrators included in the program

area rated the program characteristics with relatively greater loadings

on a factor as relatively more highly useful in program quality-evalua-

tion decision making and a high factor score indicated that they rated

them relatively less highly useful.

The results of testing for significant differences in mean factor

scores between program areas for all factors are presented in Appendix

L. As indicated in the description of the methodology for this study,

an analysis of variance prior to performing the t tests was inappropri-

ate due to unequal variances among some of the program area classifica-

tions. The Bonferroni correction for multiple t tests was applied to

the obtained t statistics.

For factors 1, 3, and 9, there were no significant differences in

mean factor scores between any of the program areas (Appendix L). For

factor 1, the Resources Usage dimension, the mean factor scores ranged

from -.072 for the Developmental Program Area to .487 for Community In-

structional Services (Table 17). For factor 3, the Faculty/Staff In-

structional Productivity dimension, the mean factor scores ranged from

-.341 for the Developmental Program Area to .231 for Community Instruc-

tional Services. For factor 9, the Program Responsiveness dimension,

the mean factor scores ranged from -.142 for the Occupational Program

Area to .500 for the Developmental Program Area. These results indi-

cated that the administrators classified into the five program areas did

not differ significantly in their emphasis on these three dimensions:

Resources Usage, Faculty/Staff Instructional Productivity, and Program

Responsiveness.









For factors 2 and 8, there were significant differences in mean fac-

tor scores between Student Services and all other program areas except

the Developmental Program Area (Appendix L). For factor 2, the Student

Ratings of Support Services dimension, the mean factor scores ranged

from -.480 for Student Services to .487 for Community Instructional Ser-

vices (Table 17). For factor 8, the Faculty/Staff Program Ratings di-

mension, the mean factor scores ranged from .410 for Student Services to

-.350 for the Developmental Program Area (Table 17). These results in-

dicated that the administrators classified in Student Services empha-

sized the Student Ratings of Support Services dimension significantly

more than did all other program areas except the Developmental Program

Area and emphasized the Faculty/Staff Program Ratings dimension signifi-

cantly less than did all other program areas except the Developmental

Program Area. Also, the results indicated that the other program areas

did not differ significantly in their emphasis on these dimensions. It

should be recalled that the number of administrators classified in the

Developmental Program Area was relatively small (N = 5), which influenced
the tests for significant differences in mean factor scores.

For factor 4, there were significant differences in mean factor

scores between Community Instructional Services and all other program

areas except the Developmental Program Area (Appendix L). For factor 4,

the Physical and Academic Skills Needs Assessment of Enrolled Students

dimension, the mean factor scores ranged from -.133 for the Advanced and

Professional Program Area to .952 for Community Instructional Services

(Table 17). These results indicated that the administrators classified

in Community Instructional Services emphasized the Physical and Academic

Skills Needs Assessment of Enrolled Students dimension significantly









less than did all other program areas except the Developmental Program

Area. Also, the results indicated that the other program areas did not

differ significantly in their emphasis on this dimension.

There were significant differences in mean factor scores between the

Occupational Program Area and the Advanced and Professional and the Stu-

dent Services program areas on factor 5 (Appendix L). For this factor,

the Student Ratings of Program dimension, the mean factor scores ranged

from -.435 for the Developmental Program Area to .219 for Community In-

structional Services (Table 17). The mean factor score for the Occupa-

tional Program Area was .143 (Table 17). These results indicated that

the administrators classified in the Occupational Program Area empha-

sized the Student Ratings of Program dimension significantly less than

did the Advanced and Professional or the Student Services program areas.

Also, the results indicated that the Occupational Program Area did not

differ significantly from Community Instructional Services and the De-

velopmental Program Area in emphasis on this dimension and that program

areas other than the Occupational Program Area did not differ signifi-

cantly in their emphasis on this dimension.

For factor 6, there were significant differences in mean factor

scores between the Developmental Program Area and all other program areas

except Community Instructional Services (Appendix L). For this factor,

the Program Student Output dimension, the mean factor scores ranged from

-.579 for the Occupational Program Area to 1.619 for the Developmental

Program Area (Table 17). These results indicated that the administra-

tors classified in the Developmental Program Area emphasized the Program

Student Output dimension significantly less than did all other program

areas except Community Instructional Services. Also, the results









indicated that the other program areas did not differ significantly in

their emphasis on this dimension.

For the remaining factor, factor 7, there were significant differ-

ences in the mean factor scores between the Advanced and Professional

Program Area and the Occupational and Community Instructional Services

program areas (Appendix L). For this factor, the Faculty/Staff Pre-

paredness dimension, the mean factor scores ranged from -.390 for the

Advanced and Professional Program Area to .387 for Community Instruc-

tional Services (Table 17). These results indicated that the adminis-

trators classified in the Advanced and Professional Program Area empha-

sized the Faculty/Staff Preparedness dimension significantly more than

did the Occupational and the Community Instructional Services program

areas. Also, the results indicated that the Advanced and Professional

Program Area did not differ significantly from the other two program

areas in their emphasis on this dimension and that the program areas

other than the Advanced and Professional Program Area did not differ

significantly in their emphasis on this dimension.

As indicated in the preceding section of this chapter, the utiliza-

tion of the nine identified common dimensions to organize the 108 pro-

gram characteristics identified as most useful in program quality-evalu-

ation decision making should result in increasing the probability that

the format of the presented information will be perceived as credible

and useful by the administrator involved in the decision situation.

Examination of the differences in mean factor scores for the five pro-

gram areas was done to determine if there were any statistically signif-

icant differences that might be useful in tailoring by program area the

format of information presented to administrators in the five program

areas for use in program quality-evaluation decision making.









The results presented in this section indicated that the administra-

tors classified into the five program areas did not differ significantly

in their emphasis on three dimensions: Resources Usage, Faculty/Staff

Instructional Productivity, and Program Responsiveness. For the Student

Ratings of Support Services dimension, the results indicated that Stu-

dent Services emphasized this dimension significantly more than did all

other program areas except the Developmental Program Area. Community

Instructional Services emphasized the Physical and Academic Skills Needs

Assessment of Enrolled Students dimension significantly less than did

all other program areas except the Developmental Program Area. The Stu-

dent Ratings of Program dimension was emphasized significantly less by

the Occupational Program Area than by the Advanced and Professional and

Student Services program areas. The Developmental Program Area placed

significantly less emphasis on the Program Student Output dimension than

did all other program areas except Community Instructional Services.

For the Faculty/Staff Preparedness dimension, the Advanced and Profes-

sional Program Area emphasized this dimension significantly more than

did the Occupational and Community Instructional Services program areas.

The Faculty/Staff Program Ratings dimension received significantly less

emphasis by Student Services than by all other program areas except the

Developmental Program Area.

These results should be useful in tailoring by program area the or-

ganization of information for presentation to administrators involved in

quality-evaluation decision making in a specific program area. For

example, the results indicated that the information included in the Fac-

ulty/Staff Preparedness dimension should be emphasized when presenting

information to administrators with major responsibilities in the Advanced









and Professional Program Area to increase the probability that adminis-

trators in that program area will find the information credible and use-

ful in program quality-evaluation decision making. The nature of this

emphasis, although not an objective of this study, might include the

presentation of more information or more detailed information or some

type of weighting of the information related to this dimension. Simi-

larly, these results may be used to tailor the presentation of informa-

tion for program quality-evaluation decision making to administrators in

other specific program areas.

The results presented in this section applied only to significant

differences among the mean factor scores for administrators classified

within the specified program areas. In the next section of this chapter,

the results are presented for comparison of the mean factor scores be-

tween administrators classified within six administrative areas as de-

fined in Chapter III.

Administrative Areas

Using the selected factor structure, mean factor scores were calculated for the administrators classified within the six administrative areas defined in Chapter III. The six administrative areas were General Administration, Academic Affairs, Student Affairs, Community Instructional Services, Business Affairs, and Presidents. A description of the administrative areas and the position codes used to classify the administrators included in each administrative area are given in Appendix A. The administrative areas, the number of respondents in each administrative area, and the percentage of all respondents that each area represents are given in Table 18.
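
The calculations described in this paragraph can be pictured with the brief sketch below, which tabulates the number and percentage of respondents per administrative area and the mean factor score on each dimension by area. The data frame, column names, and simulated scores are hypothetical stand-ins for the study's data (cf. Tables 18 and 19).

    # Illustrative sketch only: respondent counts and percentages per
    # administrative area, and mean factor scores by area, computed on
    # simulated data with hypothetical column names.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    administrative_areas = ["General Administration", "Academic Affairs",
                            "Student Affairs", "Community Instructional Services",
                            "Business Affairs", "Presidents"]

    respondents = pd.DataFrame({
        "administrative_area": rng.choice(administrative_areas, size=450),
        "factor_1": rng.normal(size=450),  # e.g., Resources Usage
        "factor_2": rng.normal(size=450),  # e.g., Student Ratings of Support Services
    })

    # Number of respondents per administrative area and the percentage of
    # all respondents that each area represents.
    counts = respondents["administrative_area"].value_counts()
    summary = pd.DataFrame({"n": counts,
                            "percent": (counts / len(respondents) * 100).round(1)})
    print(summary)

    # Mean factor score on each dimension for each administrative area.
    print(respondents.groupby("administrative_area")[["factor_1", "factor_2"]].mean())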




PAGE 1

A MULTIPLE-FACTOR ANALYSIS TO IDENTIFY UNDERLYING DIMENSIONS OF MULTIPLE INDICATORS OF QUALITY RATED AS USEFUL IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS BY ADMINISTRATORS IN FLORIDA'S COMMUNITY COLLEGES BY THOMAS ALBERT STEUART A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1983

PAGE 2

ACKNOWLEDGEMENTS During the past two years, many persons have assisted and encouraged me while I have been engaged in the research that has culminated in this dissertation. Regretfully, only a few can be mentioned here. I would like to thank Dr. John M. Nickens, my committee chairman, and the many members of the Florida Community/Junior College Inter-Institutional Research Council, who, through a research assistantship, supplied most of my financial support. I would like to express my gratitude to the other members of my committee, Dr. James L. Wattenbarger and Dr. Robert S. Soar, whose patience with me has been unending. I express a great debt to C.B. "Bix" Rathburn, III, who, as a fellow research assistant, provided me with constant feedback and invaluable emotional support. I wish to thank Dr. Wilson Guertin for his consultations regarding the factor analysis procedures used in this study. Teresa Agrillo, who typed and edited this dissertation, deserves more than I can give her. Finally, I wish to acknowledge James D. Cook for his continuing emotional and financial support, without which this dissertation would never have been completed. ii

PAGE 3

TABLE OF CONTENTS Pao£ ACKNOWLEDGEMENTS 11 LIST OF TABLES vi ABSTRACT ix CHAPTER I INTRODUCTION 1 Rationale 3 Theoretical Rationale 3 Operational Rationale 7 The Problem 9 Need for the Study 10 Delimitations and Limitations 11 Definition of Terms 12 Organization of the Research Report 13 II REVIEW OF RELATED LITERATURE 14 Educational Evaluation 14 Toward a Definition of Educational Evaluation 15 Contemporary Models of Educational Evaluation 17 Decision-Oriented Model of Educational Evaluation. . .20 Quality Assessment in Higher Education 23 Graduate Education 25 Undergraduate Education 31 Quantifiable Approaches to Quality 36 Determining Underlying Dimensions: Factor Analysis 40 in

PAGE 4

TABLE OF CONTENTS (continued) Page Applicability of Factor Analysis 40 Definition of Factor Analysis 43 Steps in Factor Analysis 44 I I I METHODOLOGY 50 Description of Data Used 50 Analysis of the Data 53 Research Question One 53 Research Questi on Two 56 IV RESULTS AND DISCUSSION 58 Factor Analysis Results 59 Interpretation of the Factors 70 Factor Score Compari sons 82 Program Areas 82 Administrative Areas 90 Summary 97 V SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FURTHER STUDY 99 Summary 99 Conclusions 102 Recommendations for Further Study 109 APPENDICES A CLASSIFICATIONS OF RESPONDENTS USED IN DATA ANALYSIS Ill B DESCRIPTION OF IRC PROJECT METHODS AND PROCEDURES 113 C PROGRAM QUALITY INDICATORS PROJECT QUESTIONNAIRE 123 D POSITION CODES USED IN THE CATEGORIZATION OF RESPONDENTS BY ADMINISTRATIVE AREA AND PROGRAM AREA WITH FREQUENCIES. .136 E MEAN RATINGS FOR PROGRAM CHARACTERISTICS FOR N=450 AMD N=315 139 IV

PAGE 5

TABLE OF CONTENTS (continued) Page F CORRELATION COEFFICIENTS FOR INTERCORRELATIONS OF PROGRAM CHARACTERISTICS FOR N=450 143 G CORRELATION COEFFICIENTS FOR INTERCORRELATIONS OF PROGRAM CHARACTERISTICS FOR N=315 151 H PRINCIPAL AXES SOLUTION BASED UPON N=450 WITH FINAL COMMUNALITY ESTIMATES AND EIGENVALUES 1 59 I PRINCIPAL AXES SOLUTION BASED UPON M=315 WITH FINAL COMMUNALITY ESTIMATES AND EIGENVALUES 155 J FACTOR STRUCTURES FOR THE THREE ROTATIONS OF THE PRINCIPAL AXES BASED UPON N=315 171 K FACTOR STRUCTURES FOR THE THREE ROTATIONS OF THE PRINCIPAL AXES BASED UPON N=450 181 L t STATISTICS FOR MEAN FACTOR SCORE COMPARISONS BETWEEN PROGRAM AREAS BASED ON ASSUMPTION OF EQUAL VARIANCES 191 M t STATISTICS FOR MEAN FACTOR SCORE COMPARISONS BETWEEN ADMINISTRATIVE AREAS BASED ON ASSUMPTION OF EQUAL VARIANCES 194 REFERENCES 199 BIOGRAPHICAL SKETCH 207

PAGE 6

LIST OF TABLES Table £Mi 1 Variance Accounted for by Successive Principal Axes for N=315 60 2 Program Characteristics With Factor Loadings of .50 or Greater in the Three Rotations of the Principal Axes Solution Based Upon N=315 61) 3 Variance Accounted for by Successive Principal Axes for N=450 69 4 Program Characteristics With Factor Loadings of .50 or Greater in the Three Rotations of the Principal Axes Solution Based Upon N=450 66 5 Intercorrelations of the Factors for the 10-Factor Rotation of the Principal Axes Solutions for N=315 and N=450 69 6 Coefficients of Congruence Between the Comparable Factors for the 10-Factor Structures for M=315 and N=450 70 7 Program Characteristics With .50 or Greater Loadings on Factor 1 71 8 Program Characteristics With .50 or Greater Loadings on Factor 2 72 9 Program Characteristics With .50 or Greater Loadings on Factor 3 73 10 Program Characteristics With .50 or Greater Loadings on Factor 4 75 11 Program Characteristics With .50 or Greater Loadings on Factor 5 75 12 Program Characteristics With .50 or Greater Loadings on Factor 6 76 13 Program Characteristics With .50 or Greater Loadings on Factor 7 77 VI

PAGE 7

LIST OF TABLES (continued) Table Page 14 Program Characteristics With .50 or Greater Loadings on Factor 8 78 15 Program Characteristics With .50 or Greater Loadings on Factor 9 79 16 Number of Respondents Per Program Area and Corresponding Percentages of Al 1 Respondents (N=450) 83 17 Mean Factor Scores and Standard Deviations. for Respondents Grouped by Program Area 84 18 Number of Respondents Per Administrative Area and Corresponding Percentages of All Respondents (N=450) 91 19 Mean Factor Scores and Standard Deviations for Respondents Grouped by Administrative Area 93 vn

PAGE 8

LIST OF FIGURES Figure Page 1 Sample Format for Program Quality-Evaluation Information Report 1 06 2 Sample Format for Program Quality-Evaluation Information Profile 108 vm

PAGE 9

Abstract of Dissertation Presented to the Graduate Council of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy A MULTIPLEFACTOR ANALYSIS TO IDENTIFY UNDERLYING DIMENSIONS OF MULTIPLE INDICATORS OF QUALITY RATED AS USEFUL IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS BY ADMINISTRATORS IN FLORIDA'S COMMUNITY COLLEGES BY Thomas Albert Steuart April 1983 Chairman: John M. Nickens Major Department: Educational Administration and Supervision The purpose of this study was the identification of any underlying dimensions within multiple quality indicators rated by administrators in Florida public community/junior colleges as highly useful in making program quality-evaluation decisions. It was theorized that utilization of such dimensions to organize and provide information to administrators should result in a format that they would find most useful since it should reflect those aspects of their value systems relevant to the defined decision situations. Of 631 administrators identified to participate in the study, 450 responded by rating 454 items on a survey questionnaire for degree of usefulness in program quality-evaluation decision making. The correlation matrix of the intercorrelations of the mean responses of the 108 most IX

PAGE 10

highly rated items were factor analyzed using the iterated principal axes method and an orthogonal rotation to the varimax criterion with an oblique rotation to determine intercorrelation of factors. This analysis resulted in the identification of a factor structure accounting for 80.5% of the common variance that contained nine interpretable factors. The nine dimensions involved information relating to(1) fiscal, physical, and human resources; (2) student ratings of support services; (3) instructional productivity of faculty; (4) assessments of any physical or cognitive needs of students relevant to their performance in their selected programs; (5) ratings of selected aspects of programs by students; (6) indicators of the quantitative output of a program; (7) selected attributes of full-time and part-time faculty; (8) ratings of selected aspects of programs by faculty; and (9) indicators of the responsiveness of a program to certification and accreditation agencies, the local community, the students, and the state. The recommendation was made that further research in program qualityevaluation involve more direct investigation of the attitudes of the decision maker involved and the development of instruments that will facilitate the identification of attitudinal dimensions relevant to the defined decision situation.

PAGE 11

CHAPTER I INTRODUCTION During the 1970s when public confidence in higher education waned and financial resources became less abundant, there was an emphasis on accountability. This resulted in a rapid increase of evaluation activities related to higher education. A major focus of these activities was the maintenance or improvement of the quality of programs offered by higher education institutions within the context of a broadening of student access in a time of fiscal constraint (Craven, 1980, p. vii). The conditions of fiscal austerity and the demands for accountability within the context of broadening student access to higher education have continued into the 1980s (Craven, 1980). There has been an increasing concern for maintaining or improving the quality of programs offered by higher education institutions. The concern is shared by persons within higher education institutions, state level coordinating or governing boards, other state executives, and state legislators (Bowen, 1974; Craven, 1980; "Legislators stress quality improvement," 1980). As Finn (1980) correctly perceived, quality has emerged as the premier concern in higher education for the 1980s. Although it is the premier concern, quality in higher education is a concept that can mean all things to all people (King, 1981). If used too loosely with little or no definition, the concept provides little guidance. If defined too strictly, the concept is of limited use for a diverse system of higher education (Finn, 1980, p. 2). 1

PAGE 12

Traditionally, the quality of a program or institution in higher education has been determined by subjective evaluations of experts. One criticism of this approach has been that 20 to 30 higher education institutions have been identified consistently as institutions of quality, with all other institutions of higher education virtually ignored (Lawrence & Green, 1980, p. 1). Another criticism has been that the bases of these evaluations have been related to the missions and goals of the institutions identified and institutions with other missions and goals, such as community colleges, have been excluded (Bowen, 1974; Fotheringham, 1978). Usually researchers in higher education have tried to avoid constitutively defining quality, but have operationally defined it through their choices of research designs and evaluative criteria (Astin & Henson, 1977; Blackburn & Lingenfelter, 1973; Cartter, 1966; Krause, 1970). However quality is defined, the determination of educational quality involves decision making by program administrators, which requires the use of some information about the program being evaluated. This is consonant with the theory of evaluation developed by Stufflebeam, Foley, Gephart, Guba, Hammond, Merriman, and Provus (1971). They defined evaluation as "the process of delineating, obtaining, and providing useful information for judging decision alternatives" (p. 4). Thus, the making of quality evaluations about educational programs may be described as a process of delineating what information about programs is useful to administrators making quality-evaluation decisions, obtaining that information, and providing it in a format useful to those administrators. This definition of program quality evaluation formed the basis of the rationale for this study.

PAGE 13

Rationale Theoretical Rationale Delineation, the first operational step in program quality-evaluation decision making, involves "the identification of the most useful information" (Stufflebeam et al . , 1971, p. 41). Although Stufflebeam et al . did not specify a methodology for accomplishing this step, they did specify that it could be accomplished successfully "by the evaluator only in interaction with his client [the decision maker]" (p. 41). The second step, obtaining, was described as "the more technical aspect of evaluation" (p. 42) and consists of "collecting, organizing, and analyzing [the data delineated as most useful]" (p. 42). The providing phase of evaluation involves reporting the delineated and obtained data to the decision maker "in ways that he finds credible and helpful" (p. 17). According to Stufflebeam et al . (1971), although there existed much knowledge and many methodologies for collecting data, "the interface role of delineating information needs with the decision makers and the similar interface role of providing information to audiences are not so well developed in theory or practice" (pp. 139-140). Furthermore, they stated that "a most glaring and conspicuous omission in this [their] book is the failure to provide operational guidance for the evaluator as he plays this interface role [of providing information]" (p. 336). It was the theory and methodology of the providing phase of evaluation as defined by Stufflebeam et al . (1971) with which this study was concerned. Craven (1980) indicated that evaluation processes for the 1980s must be capable of "providing the desired information in an appropriate format" (p. 111). How might an evaluator determine an appropriate format for providing the desired information to decision makers when multiple

PAGE 14

items of information have been identified as highly useful in a particular decision situation? A theoretical basis for resolving this problem was suggested, but not developed, by Stufflebeam et al . (1971) in their discussion of the relationship between the items of information identified as most useful in a defined decision situation and the values of the decision maker in interaction with whom the items have been delineated. They stated that it is the value system of the decision maker, especially those aspects of his value system related to a particular decision situation, that determines whether an item of information is relevant to that decision situation (pp. 108-109). The items of information or variables identified as most useful in a defined decision situation are not themselves the criteria used to assess the decision situation, but they are the variables to which the decision maker applies his criteria. On the one hand, the criteria are statements of the means of measuring the variables and, on the other hand, they are "yardsticks for values" (p. 109). Values were defined as "predefined states of certain variables" (p. 108). Presumably, when translated into a means of assessment, "predefined states" equal "criteria" and "certain variables" equal the information identified as most useful in the defined decision situation in interaction with the relevant decision maker. For the purpose of this study, the important point was that the items of information (variables) identified as most useful in interaction with the relevant decision maker reflect those aspects of his value system that are related to the defined decision situation. If this is true, as theorized by Stufflebeam et al . (1971), it forms a basis for an approach that an evaluator may use in determining how to provide multiple items of information in a format that a decision maker should find "credible and helpful" (p. 17).

PAGE 15

The problem is similar to that encountered by psychologists in attempting to describe human personality (Cattell , 1950). With hundreds of terms defining traits by which persons could be described, there was a search for "dimensions of personality" (p. 26) that would facilitate the description of personality (pp. 26-27). Cattell theorized that the multiple descriptors of personality, which he labeled "surface traits" (pp. 21-22), could be accounted for by considerably fewer dimensions, which he labeled "source traits" (p. 27). Additionally, he theorized that the source traits were "the real structural influences underlying personality" (p. 27). Similarly, it was theorized in this study that for a set of multiple items of information identified in the delineation phase of an evaluation process, based on the theory of evaluation developed by Stuff! ebeam et al. (1971), there are considerably fewer underlying dimensions that may be identified and used in developing guidelines for providing information in a format that decision makers should find useful in a defined decision situation. If it is true that the items of information identified in the delineation phase reflect those aspects of a decision maker's value system relevant to a defined decision situation, then the underlying dimensions of those items should reflect the dimensions of a decision maker's value system relevant to that decision situation. If the latter is true, then utilizing those underlying dimensions to organize those items should result in providing information in a format that a decision maker should find credible and helpful, since that format should approximate closely the dimensions of those aspects of his value system being used in the decision-making process in the defined decision situation.

PAGE 16

This theory may be extended to a decision situation where multiple decision makers are involved. The identified items of information in such a decision situation would reflect a hypothetical value system of "aggregate values" (Stufflebeam et al., 1971, p. 113) of the relevant decision makers. In such a decision situation, the underlying dimensions of the identified items of information should reflect the dimensions of the hypothetical aggregate value system. They should reflect the dimensions of the relevant aspects of an individual decision maker's value system only to the degree that these dimensions are reflected in the aggregate value system. Therefore, it may be expected that utilizing those underlying dimensions to organize the identified items should result in providing information to the decision makers in a format more or less credible and helpful to an individual decision maker to the degree that relevant dimensions of his value system are reflected in the aggregate value system. Based upon this theory, an appropriate methodology for determining the underlying dimensions of a set of multiple items of information identified as most useful in a defined decision situation would be the same as that used by Cattell (1950) for identifying the underlying dimensions of personality: the multi-variate technique of factor analysis. For a set of variables that individuals can rate or in some manner assess, the technique of multi -factor analysis can be used to determine the dimensions of any underlying pattern of the ratings or other measurements of that set of variables. For example, multiple items of information identified as useful in a defined decision situation may be rated by the relevant decision makers for varying degrees of usefulness. Subsequently, these ratings can be factor analyzed to identify underlying dimensions of

PAGE 17

the degree of usefulness of the items. The results of such an analysis should provide the evaluator with some guidelines for organizing the items to increase the probability that the decision makers will find the format of the provided information credible and helpful, i.e., useful in the decision-making process in the defined decision situation. This extension of the theory of evaluation proposed by Stufflebeam et al . (1971) and the suggested methodology should supply evaluators the needed guidance in their role of providing information in a format useful to decision makers in a defined decision situation. Operational Rationale This study involved the application of this theory and methodology to an appropriate set of items of information identified as useful in a defined decision situation in order to identify the underlying dimensions of those items and to utilize the identified dimensions to develop guidelines for organizing these items into a format that should be useful to the relevant decision makers in the defined decision situation. Since the quality of programs has been cited as the premier concern in higher education for the 1980s, the decision situation selected for this study was the making of quality-evaluation decisions about programs in Florida's public community/ junior colleges. In Florida, Governor Graham's program for education contained a commitment to assure the citizens of Florida the opportunity to obtain a quality education at every level of public education including higher education. This commitment was reflected in a resolution adopted by the Florida State Board of Education in January, 1981, that included the following statement: On a statewide average, educational achievement in the state of Florida will equal that of the upper quartile of states within_ five years, as indicated by commonly accepted criteria of attainment. (State Board of Education, 1981)

PAGE 18

8 The Division of Community Colleges in Florida is under a mandate from the State Department of Education to identify "certain indicators of quality which can be used system-wide to give evidence of quality improvement" (Division of Community Colleges, 1982, p. 1). The members of the Florida Community/Junior College Inter-Institutional Research Council (IRC), a research consortium of Florida public community/junior colleges, conducted a project that addressed the problem of identifying indicators of quality useful in program quality-evaluation decision-making in Florida public community/junior colleges (Florida Community/Junior College Inter-Institutional Research Council, 1981). This project was based upon the theory of evaluation developed by Stufflebeam et al. (1971). In interaction with the relevant administrators, the project identified more than 100 indicators of quality as highly useful in making program quality-evaluation decisions. The indicators of quality identified were representative of many of those identified in other studies. A large number of administrators (450 respondents) were involved in this project, representing almost all of the public community/ junior colleges in Florida. Although multiple indicators of quality were identified as highly useful, there was no attempt in this project to identify any underlying dimensions of these multiple indicators to utilize in developing guidelines for providing the desired information to the relevant administrators in a useful format. All of the aspects of the IRC project described previously supported the use of the data from that project to test the theory that for a set of multiple items of information identified in the delineation phase of an evaluation process, there are considerably fewer underlying dimensions that may be identified and used in developing guidelines for providing

PAGE 19

information in a format that decision makers should find useful in a defined decision situation. Also, because that project found considerable variability in the information rated as highly useful by respondents classified in various program and administrative areas, there was an opportunity to investigate whether there were any significant differences between these classifications within any identified underlying dimension of the multiple indicators of quality. The Problem Based on the theory of evaluation developed by Stufflebeam et al . (1971) and extended in this study, it was expected that multiple items of information identified by the relevant decision makers as useful in a defined decision situation would contain underlying dimensions that could be identified through the use of the technique of factor analysis. Specifically, this study proposed: 1. To determine any underlying dimensions of the multiple items of information rated as highly useful in program quality-evaluation decision making by administrators involved in such decision making in Florida public community/junior colleges. 2. To determine if there were any significant differences in the degree of emphasis within any identified underlying dimension between the administrators classified within the Advanced and Professional, Occupational, Developmental, Community Instructional Services, and Student Services program areas. 3. To determine if there were any significant differences in the degree of emphasis within any identified underlying dimension between the administrators classified within the administrative areas of General Administration, Academic Affairs, Business

PAGE 20

10 Affairs, Student Affairs, Community Instructional Services, and Presidents. 4. From the results of these analyses, to develop guidelines for organizing the identified multiple indicators of quality into a format that should be useful to the administrators involved in making program quality-evaluation decisions in Florida public community/ junior colleges. Need for the Study There was a need to develop further that aspect of the theory of evaluation proposed by Stufflebeam et al . (1971) that related to an evaluator's role of providing information (pp. 139-HO, 336). In relation to the developed theory, there was a need "to provide operational guidance for the evaluator" (p. 336) in the role of providing information in a format that a decision maker should find "credible and helpful" (p. 17). Craven (1980) stated that to address effectively the higher education issues of the 1980s, there was a need for evaluation processes to provide the desired information in an appropriate format (p. 111). Since only one study relating to quality evaluation in higher education was found that used the technique of factor analysis to determine underlying dimensions (Astin & Solmon, 1981), there appeared to be a need for studies to demonstrate the methodology for determining guidelines for organizing the considerable amount of information desired by administrators for evaluating program quality into formats useful in the decision-making process. Also, due to the large amount of information identified as useful in program quality-evaluation decision making by administrators in Florida public community/ junior colleges, there was a need to determine guidelines for organizing that specific information into a format that should be

PAGE 21

11 useful to the administrators involved in the quality-evaluation decision process in Florida public community/junior colleges (Steuart & Rathburn, 1982, p. 185). Delimitations and Limitations This study was confined to administrators in Florida public community/junior colleges who were classified by their institutions as executive, administrative, or managerial personnel under part three of the "Personnel and Salary Report (SA-1)" as defined in the Community College Management Information Systems Procedures Manual of the State of Florida (Division of Community Colleges, 1980, pp. 10.1-10.2). Of the 631 administrators identified and surveyed, 450 responded for a response rate of 71.3% (Steuart & Rathburn, 1982, p. 45). Although a response rate of this magnitude is generally considered acceptable, it may still be assumed that the respondents may have been different from the nonrespondents in ways that affected their responses. Thus, the responses might not be representative of the identified population. Since the study was confined to administrators in community colleges, the results are generalizable to administrators in other types of colleges only to the extent that they share attitudes toward program quality evaluation similar to the respondents in this study. The results are not general izable to administrators in community college systems in other states except to the degree that they share attitudes toward program quality evaluation similar to respondents in this study. The data used in this study were collected by means of a survey questionnaire. Although face validity was established for the questionnaire through the use of a review panel, reliability of the questionnaire was not established. Therefore, it is not known if similar results would be

PAGE 22

12 obtained from the same respondents if they were surveyed again. The results can be taken only as descriptive of the opinions of the administrators at the time the questionnaire was administered. Also, although the questionnaire was designed to be comprehensive in relation to the descriptive information it contained about programs offered by the community colleges, some information that might be related to quality-evaluation decision making might have been excluded. The analytic technique of factor analysis used in this study has several limitations associated with it. There are no hard and fast guidelines for determining the number of factors to rotate in attempting to achieve a simple factor pattern. Another researcher might choose different criteria and rotate a different number of factors and would, therefore, obtain different results. Also, factor analysis assumes a linear relationship between the variables involved in the analysis. Any other relationship would be inaccurately represented by a factor-analytic pattern. Definition of Terms Administrative Areas . The basic divisions of responsibility for administrators in a comprehensive community college in Florida including General Administration, Academic Affairs, Business Affairs, Student Affairs, Community Instructional Services, and Presidents. Each of these areas is operationally defined in Appendix A. Dimension . A cluster of program characteristics the ratings of which by the respondents tend to vary in similar ways. Each factor identified from the factor analysis in this study represents a dimension of the underlying interrelationships of the ratings of the program characteristics.

PAGE 23

13 Evaluation . The process of delineating, obtaining, and providing useful information for decision making in a defined decision situation. Program Areas . The five basic operational areas of a comprehensive community college in Florida including the Advanced and Professional, Occupational, Developmental, Community Instructional Services, and Student Services areas (Division of Community Colleges, 1981, p. 6). Each of these areas is operationally defined in Appendix A. Program Characteristics . Any information relating to or describing a program offered by a college. Program Quality-Evaluation Decision Making . The evaluation process, involving the use of relevant information, leading to a judgment by the responsible administrators of the quality of a program. Underlying Pattern . The interrelationships of the correlations of the ratings by respondents among the program characteristics identified as highly useful in quality-evaluation decision making. Usefulness . The determination of the serviceability or utility of a program characteristic in making judgments about the quality of a program. Organization of the Research Report The chapters in the remainder of this report are organized as follows. Chapter II presents a review of selected literature relevant to this study. Chapter III describes the methodology used in this study. Chapter IV presents the results of this study. Chapter V summarizes and discusses the results with conclusions and recommendations drawn from the results.

PAGE 24

CHAPTER II REVIEW OF RELATED LITERATURE Since the evaluation of the quality of programs or services offered by higher education institutions occurs within the general framework of educational evaluation, the first section of this chapter is a discussion of concepts of educational evaluation. The decision-oriented approach to educational evaluation is emphasized because it was the theoretical basis of this study. The second section of this chapter reviews selected attempts in higher education to address the issue of quality. The third section of this chapter is a discussion of factor analysis related to discovering underlying dimensions in multi-variate assessments. Educational Evaluation During the past decade, evaluation in education has become a topic wide in scope. It has been the failure of many educators to recognize that evaluation is a complex process requiring a broad perspective (Alkin, 1969). Pyatte (1970) emphasized the importance of evaluators in education looking beyond the immediate problems and contemplating the intricate meanings and legitimate functions embodied in evaluation theory. The dynamics of evaluation compel attention from many perspectives. This section of the literature review is presented in three parts. The initial part introduces the concept of educational evaluation through a discussion of various definitions of educational evaluation. The second part provides a brief review of educational evaluation with emphasis on contemporary models of educational evaluation. The final part discusses 14

PAGE 25

15 the decision-oriented model of educational evaluation—the basis for this study's approach to the quality issue in higher education. Toward a Definition of Educational Evaluation Many definitions of educational evaluation have been proposed stemming from the fact that three different schools of thought regarding educational evaluation have coexisted for more than 30 years (Worthen & Sanders, 1973). Stufflebeam et al . (1971) provided an excellent review of the three basic approaches to educational evaluation from which most of the definitions have developed. The first approach was an early one equating evaluation with measurement (p. 10). The second approach involved the determination of the congruence between performance and objectives, especially behavioral objectives (p. 11). The third approach was the process commonly referred to as professional judgment (p. 13). From these basic approaches, various definitions of educational evaluation have emerged. These definitions differ in level of abstraction and often reflect the specific concerns of the persons who formulated them. At a basic level, evaluation has been defined as "an assessment of worth" (Popham, 1975, p. 8). Wolf (1979) found this definition needing clarification regarding the meaning of the terms "assessment" and "worth." A more descriptive definition was offered by Cronbach (1963), who defined evaluation as "the collection and use of information to make decisions about an educational program" (p. 675). This definition was proposed initially during the curriculum development era of the late fifties Cronbach's studies suggested various kinds of information that could be examined within the evaluation framework and later analyzed and used in decision making designed for course improvement (Wolf, 1979).

PAGE 26

16 Doll (1970) defined educational evaluation as "a broad and continuous effort to inquire into the effects of utilizing educational content and process according to clearly defined goals" (p. 379). In terms of this definition, educational evaluation must transcend the levels of simple measurement techniques or the primary application of the evaluator's values and beliefs. If evaluation is to be a vast and continuous effort, it must depend on "a variety of instruments which are used according to carefully ascribed purposes" (Doll, 1970, p. 380). Beeby proposed an extended definition of evaluation as "the systematic collection and interpretation of evidence, leading, as a part of the process, to a judgment of value with a view to action (in Wolf, 1979, p. 117). Wolf (1979) developed the important elements of the definition. First, the term systematic implied that the information needed would be defined with precision and obtained in an organized fashion. The second element, the interpretation of evidence, emphasized the role of critical judgment in the evaluation process. Wolf stated that this element was often neglected in evaluation activities. The third element of Beeby' s definition involved the judgment of value. This required the evaluator to be responsible for making judgments from his evaluative work about the worth of an educational endeavor. The last element, with a view to action, introduced the notion that an evaluative undertaking should be designed for the sake of future action (pp. 117-124). Pyatte (1970) emphasized the importance of a rational plan element in the definition of educational evaluation. He stated that "evaluation is the deliberate act of gathering and processing information according to some rational plan the purpose of which is to render, at some point in time, a judgment about the worth of that on which the information is

PAGE 27

17 gathered" (p. 306). According to Pyatte, six elements are included: the agent, the object, the inputs, the plan, the time, and the product. Bloom, Hastings, and Madaus (1971) defined educational evaluation as: 1. A method of acquiring and processing the evidence needed to improve the student's learning and the teaching; 2. Including a great variety of evidence beyond the usual final paper and pencil examination; 3. An aid in clarifying the significant goals and objectives of education and as a process for determining the extent to which students are developing in these desired ways; 4. A system of quality control in which it may be determined at each step in the teaching-learning process whether the process is effective or not, and if not, what changes must be made to ensure its effectiveness before it is too late; 5. A tool in educational practice for ascertaining whether alternative procedures are equally effective or not in achieving a set of educational ends. (p. 8) In recent years, the most popular definitions have viewed evaluation as "a process of identifying and collecting information to assist decision makers in choosing among available decision alternatives" (Worthen & Sanders, 1973, p. 20). Since this perspective of evaluation was the one used in this study, an expanded discussion of it is presented in the final part of this section of the literature review. Contemporary Models of Educational Evaluation With the increased call for accountability in educational institutions, the body of literature on educational evaluation has expanded rapidly in recent years. Many models of educational evaluation have emerged. There have been numerous attempts to categorize the array of models, the most comprehensive of which were done by Anderson, Ball, and Murphy (1975), Gardner (1977), Stufflebeam et al . (1971), and Worthen and Sanders (1973). The more prominent educational evaluation models included the measurement model, the congruence model, the professional judgment model, the goal-free model, and the decision-oriented model (Gardner, 1977).

PAGE 28

18 The measurement model of evaluation, as described by Gardner (1977), equated evaluation with measurement (p. 575). In this model, evaluation is viewed as the science of instrument development and interpretation (p. 576). The use of measurement instruments results in scores or other indices which are mathematically and statistically manipulated so masses of data can be handled and an individual's or a group's score can be compared with established norms (Stufflebeam et al . , pp. 10-11). The model has been widely used and is illustrated by the use of SAT and GRE scores. Gardner (1977) stated that the model was based on the assumptions that the phenomena to be evaluated have significant measurable attributes and that instruments can be designed which are capable of measuring these attributes. Perhaps no other model has received more attention in recent evaluation literature, especially in its application to the classroom, than the congruence model. The origin of this model is most closely associated with the work of Tyler (1950). Tyler stated that educational objectives were essentially defined in terms of expected changes in human behavior. It followed that evaluation is the process for determining the degree to which changes in behavior actually take place. Gardner (1977) described this model as the process of specifying or identifying goals, objectives or standards of performance; identifying or developing tools to measure performance; and comparing the measurement data collected with the previously identified objectives or standards to determine the degree of discrepancy or congruence which exists, (p. 577) Probably the most widely used but least discussed model of evaluation is the professional judgment model (Stufflebeam et al . , 1971, p. 3). In this model, evaluation js_ professional judgment. Values or criteria that

PAGE 29

19 form the basis of the judgment may or may not be explicitly stated. Often a commonly shared value system is assumed (Gardner, 1977, p. 574). Examples of the uses of this model include the judgments of visiting teams of professionals in the accreditation process, the use of peer review panels for evaluating various programs, and faculty committees passing judgments on promotion or tenure (Worthen & Sanders, 1973, pp. 126127). A recent addition to the models of educational evaluation is the goalfree model. Originally proposed by Scriven (1972, 1973), this model is based on the argument that if the main objective of evaluation is to assess the worth of outcomes, then no distinction should be made between intended versus unintended outcomes and that an evaluation should be conducted without reference to a program's goals or objectives (Gardner, 1977, p. 583). In this model, evaluation is not totally goal free, but standards for comparison can be chosen from a wider range of possibilities than those that might be prescribed by a program's objectives (p. 584). The final outcome of the evaluation "should be accurate, descriptive, and interpretative information relative to the most important aspects of the actual performance, effects, and attainments of the program being evaluated" (p. 585). All of the previously discussed models are similar in that they include reference to the use of some information in making some judgment. The models vary in the degree to which the role of information or the role of judgment is emphasized. In the next model to be discussed, where evaluation is defined as "the process of delineating, obtaining, and providing useful information for judging decision alternatives" (Stufflebeam et al., 1971, p. 4), the emphasis is on the role of information.

PAGE 30

20 Decision-Oriented Model of Educational Evaluation Stufflebeam and the Phi Delta Kappa National Study Committee have been credited with the refinement of what Gardner (1977) referred to as the decision-oriented model of educational evaluation. According to this model, "evaluators collect information and communicate this information to someone else" (Alkin & Fitz-Gibbon, 1975, p. 1). The process by which this information is collected is systematic and deliberate, an attempt to obtain an unbiased assessment upon which to base an evaluation (Alkin & Fitz-Gibbon, 1975; Guba, 1975; Stufflebeam, 1969). In this model, the results of evaluation are directed toward those individuals who are "intimately connected with the program being evaluated" (Alkin & Fitz-Gibbon, 1975, p. 1) or the administrative decision makers (Gardner, 1977; Guba, 1975; Stufflebeam, 1969; Stufflebeam et al . , 1971). The model was designed to benefit decision makers. In this context, the role of the evaluator is to collect and present summary information to decision makers (Alkin & Fitz-Gibbon, 1975, p. 5). The evaluators collect and present the information needed by someone else who determines its worth. "Decision-facilitation evaluators view the final determination of merit as the decision maker's province, not theirs" (Popham, 1975, p. 25). In contrast, Alkin and Fitz-Gibbon (1975) suggested that the information from a well-designed evaluation would pass judgment, not a person (p. 5). Stufflebeam (1969) viewed evaluation as the science of providing information for decision making. The assumption was made that the ultimate goal of the decision-making process was educational improvement. Educational improvement implied changes resulting from choices selected by decision makers from various alternatives. The process of decision making

PAGE 31

21 or choosing among options is firmly rooted in the decision maker's and the organization's value systems. In this framework, valid and reliable information is necessary to facilitate the decision maker's judgment of the degree to which various options measure up against a personal or organizational value system (Stufflebeam et al . , 1971, p. 38). Stufflebeam (1968) summarized the rationale for the model in the following statements: 1. The quality of programs depends upon the quality of decisions in and about the program. 2. The quality of decisions depends upon the decision maker's abilities to identify the alternatives which comprise decision situations and to make sound judgments about these alternatives. 3. Making sound judgments requires timely access to valid and reliable information pertaining to the alternatives. 4. The availability of such information requires systematic means to provide it. 5. The processes necessary for providing this information for decision making collectively comprise the concept of evaluation, (p. 6) Alkin (1969) expressed a similar view of evaluation. He stated that the steps in the process of evaluation included determining the areas of concern for possible decisions, determining the appropriate data, collecting and analyzing the data, and reporting the summary information in a form useful for the decision makers. These steps were condensed and described by Stufflebeam et al . (1971) in their definition of educational evaluation as "the (process) of (delineating), (obtaining), and (providing)(useful)(information) for (judging)(decision alternatives)" (p. 40). Each of the eight elements, set off by parentheses in the definition, has significant implications for the process and techniques of evaluation. These elements of evaluation were defined as follows: 1. Process. A particular and continuing activity subsuming many methods and involving a number of steps and operations.

PAGE 32

22 2. Decision alternatives. Two or more different actions that might be taken in response to some situation requiring altered action. 3. Information. Descriptive or interpretive data about entities (tangible or intangible) and their relationships, in terms of some purpose. 4. Delineating. Identifying evaluative information required through an inventory of the decision alternatives to be weighed and the criteria to be applied in weighing them. 5. Obtaining. Making information available through such processes as collecting, organizing, and analyzing and through such formal means as measurement, data processing, and statistical analysis. 6. Providing. Fitting information together into systems or subsystems that best serve the purposes of the evaluation, and reporting the information to the decision maker. 7. Useful. Satisfying the scientific, practical, and prudential criteria of Chapter I [internal validity, external validity, reliability, objectivity, relevance, importance, scope, credibility, timeliness, pervasiveness, and efficiency] and pertaining to the judgmental criteria to be employed in choosing among the decision alternatives. 8. Judging. The act of choosing among the several decision alternatives; the act of decision making. (Stufflebeam et al., 1971, pp. 40-43) Stufflebeam et al . (1971) contended that evaluation is an extension of the decision-making process. In this process, the evaluator assists the decision maker by helping to delineate, in interaction with the decision maker, the information which is needed; by providing that information in a useful format to the decision maker; and by assisting the decision maker in the interpretation of the information. This conceptualization of evaluation was used in this study where the making of quality evaluations about educational programs was defined as the process of identifying what information about programs is useful to administrators in making that type of evaluation decision and providing that information to administrators in a format that facilitates the interpretation of the information by administrators making such decisions. While identifying what information is useful for making quality-evaluations may be difficult, the presentation of the identified information

PAGE 33

23 in a useful format is equally difficult when multiple items of information are involved. This task requires the aggregation of the identified information into profiles or indices or similar formats useful to administrators involved in quality-evaluation decision making. Stufflebeam et al . (1971) pointed out that their theory offered little guidance for the evaluator in deciding how to provide information in a useful format (p. 336). Craven (1975) emphasized the information-providing role of an evaluator in his description of information systems as "any method that provides the right decision maker with the right information in the right form at the right time so as to facilitate the decision-making process" (p. 127). Craven (1975) summarized the importance of an evaluator 's information-providing role with the following statement: Information that responds to those decision-making needs in a valid, reliable, and timely manner will assist higher educational institutions during this period in making decisions that will maintain and strengthen the quality of its programs and faculty and will enable them to meet the future educational needs of students, society, and scholarship, (p. 138) Selected studies illustrative of these major approaches to evaluation, including decision-oriented approaches, that have been used in the assessment of quality in higher education are reviewed in the next section of this chapter. Quality Assessment in Higher Education An appropriate summary of a basic problem in assessing quality in higher education or any other field is provided by the following statement from Pirsig (1974): Quality . . . you know what it is, yet you don't know what it is. But that's self -contradictory. But some things are better than others, that is, they have more quality. But when you try to say what the quality is, apart from the things that have it, it all goes poof ! There's nothing to talk about. But if you can't say what Quality is, how do you know what it

PAGE 34

24 is, or how do you know that it even exists? If no one knows what it is, then for all practical purposes it doesn't exist at all. But for all practical purposes it really does exist. What else are the grades based on? Why else would people pay fortunes for some things and throw others in the trash pile? Obviously some things are better than others . . . but what's the "betterness?" So round and round you go, spinning mental wheels and nowhere finding anyplace to get traction. What the hell is Quality? What j_s it? (p. 184) During a recent Southern Regional Educational Board Symposium, SREB President Godwin addressed the problem of defining quality as follows: Part of our problem in higher education is that too often we have confused quality with prestige. We need to increase the understanding that quality education is not a monopoly of a few dozen major universities in the nation, but is attainable by all types of higher education institutions. (Legislators stress quality improvements, 1980, p. 3) The president of Brevard Community College in Florida, in a recent message to his faculty, had the following comments on educational quality: Quality in education is not an absolute. It can only be evaluated in terms of arbitrarily determined standards, and these in turn depend partly on subjectively formulated aims and partly on objective statistical procedures. . . . Education is quality education to the extent that it meets the needs of the people being served. (King, 1981, p. 1) These two quotes are representative of the general view of quality in higher education. That view is vague, subjective, and broad. On one hand, such a view has limited use in that it provides little guidance for educational improvement. On the other hand, it is a workable approach to the quality issue, maintaining maximum flexibility to serve the diversity found in higher education. If by no other means, educators intuitively recognize a substantial variance in program and institutional quality among the diverse institutions that comprise the American system of higher education. Various studies conducted by different researchers for different reasons in different settings using different methodologies have resulted in a variety of quality attributes that provide little assistance in defining quality (Lawrence & Green, 1980).

PAGE 35

25 Selected studies illustrative of the major approaches to quality assessment in higher education are reviewed in this section of the literature review. This section is presented in three parts. First, the major reputational assessments of graduate programs are reviewed. These studies have formed the basis of attempts to investigate the quality issue in higher education. Second, an overview is presented of quality assessment at the undergraduate and two-year college level. Third, selected studies designed to identify quantifiable indicators of quality are reviewed. Graduate Education Beginning with Hughes (1925) and continuing through the prestigious American Council on Education (ACE) sponsored studies (Cartter, 1966; Roose & Andersen, 1970), reputational ratings of graduate programs have constituted the basis of attempts to address the issue of quality in higher education. The methodology incorporated in a majority of these studies involved a peer review, in which programs were rated by eminent faculty in the same discipline. Their ratings reflected the quality of graduate education and research in the system. These studies attempted to identify the outstanding research and teaching institutions by program and they have consistently identified 20 or 30 institutions, virtually ignoring the balance of the system (Lawrence & Green, 1980, p. 2). Using a panel of distinguished scholars from each field, Hughes (1925) conducted the first comprehensive reputational study of graduate programs in American higher education. At the time of his study, only 65 American universities awarded the doctoral degree. Hughes ranked 38 of these universities in 20 disciplines according to the number of outstanding scholars each employed. During the next decade, the number of American

PAGE 36

26 universities awarding the doctoral degree nearly doubled. This prompted a second study by Hughes (1934) in which 59 universities were ranked in 35 disciplines according to the quality of facilities and staff for the preparation of doctoral candidates. The stated purpose of both of Hughes' studies was to educate undergraduate students about various graduate programs. These studies went well beyond this purpose in establishing procedures for quality ratings of the nation's leading institutions through numerical ranks based upon the informal opinions of academicians. For the next 20 years, the Hughes studies were regarded as authoritative. It was not until Keniston's (1959) work that an attempt was made to update the Hughes studies. Using department chairmen selected from the institutional members of the American Association of Universities as raters, Keniston ranked 24 graduate programs based upon a combined measure of doctoral program quality and faculty quality. These rankings were used to produce a rank-ordered list of the top 20 institutions which were compared with Hughes' results. The major weakness of the Hughes and Keniston studies, according to Cartter (1966), was the uncontrolled geographical and rater biases. Other flaws in these studies noted by Cartter included the failure to distinguish measures of faculty quality from measures of educational quality, the failure to account for the biases of raters toward their alma maters, and the choice of department chairmen as raters. It was Cartter's opinion that the department chairmen were not necessarily the most distinguished scholars ncr typical of their peers in age, specialization, or rank. They tended to be more conservative and thus to favor the traditional institutions.


27 Cartter's design of the ACE studies accounted for these criticisms. He took great care to assure the representation of various institutions and raters from all geographic areas. Cartter surveyed 106 institutions representing more than 1,000 graduate programs in 29 disciplines. The more than 4,000 survey respondents included senior and junior scholars as well as department chairmen. From a list of the institutions in alphabetical order, the respondents were requested to rate each doctoral program in their area of study on two components: quality of graduate faculty and effectiveness of the doctoral program. To support the representativeness of the raters, the respondents were requested to supply basic biographical information. The leading departments were ranked separately on the basis of the raters' responses on each of the components. In most disciplines, the rankings by each component were very similar. Where the discipline areas overlapped, Cartter compared his rankings with those of Hughes (1925) and Keniston (1959). Cartter found a high correlation between his rankings and objective institutional measures such as faculty salaries, library resources, and publication indices. His rankings correlated highly with Bowker's (1964), who used enrollment of graduate award recipients in institutional programs as a criterion. Cartter used these relationships as a primary point in his support of peer ratings for quality assessment. The 1970 ACE-sponsored Roose-Andersen study essentially replicated Cartter's study. The Roose-Andersen study included 130 institutions across 29 disciplines. The ratings were based upon the same two components Cartter used in 1966: quality of graduate faculty and effectiveness of the doctoral program. The Roose-Andersen report presented ranges of raters' scores rather than absolute raw departmental ratings and ranges


28 of quality instead of specific institutional rankings. Even with these changes, the results of the Roose-Andersen study were very similar to those of the Cartter study (1966). Using the reputational rating procedures refined by the ACE studies, other researchers produced similar program or institutional rankings based on the two ACE criteria or similar criteria (Carpenter & Carpenter, 1970; Cartter & Solmon, 1977; Cole & Lipton, 1977; Cox & Catt, 1977; Gregg & Sims, 1972; Margulies & Blau, 1973; Munson & Nelson, 1977). Lawrence and Green (1980) discussed the weaknesses in reputational ratings, the most apparent being their lack of agreement on the meaning of quality. The definition of quality varied according to disciplines, program areas, and individual raters. The lack of agreement on a definition of quality made program or institutional comparisons nonsensical. Lawrence and Green expressed the opinion that higher education was far too complex to rate on the basis of one or two dimensions. They stated that the ratings represent the subjective judgments of faculty and that they probably reflect prestige rather than quality. . . . and high prestige is translated to mean educational excellence. As a result, research and scholarly productivity are emphasized to the exclusion of teaching effectiveness, community service, and other possible functions; undergraduate education is denigrated; and the vast number of institutions lower down in the pyramid are treated as mediocrities, whatever their actual strengths and weaknesses, (pp. 15-16) Dolan (1976) criticized the reputational approach because it tended to maintain the status quo. Dolan expressed the opinion that subjective ratings of program quality reflected elitist and traditionalist views of higher education that stifled or restricted change and innovation. Dolan believed that increasing consumer awareness in higher education demanded student involvement in any attempt to rate graduate programs.


Blackburn and Lingenfelter (1973) defended the ACE reputational ratings on the following grounds: (1) Panel bias has been largely eliminated by the careful selection procedures of the ACE studies; (2) subjectivity cannot be escaped in evaluation no matter what technique is used; (3) professional peers are competent to evaluate scholarly work, the central criterion in reputational studies; and (4) although not a sufficient condition of general excellence, scholarly ability is necessary for a good doctoral program (p. 25). Webster (1981) pointed out that the process usually produced results with face validity in that those programs or institutions considered to be of high quality by the educated general public were often rated highly. Regardless of the criticisms or defenses of the reputational rating approach, none of the studies that have been cited have investigated specifically what information was useful for assessing the quality of graduate programs. Only one study of graduate education quality was found that investigated this topic. The Council of Graduate Schools (CGS) and the Educational Testing Service (ETS) sponsored a study in which 73 departments, divided among three fields—psychology, chemistry, and history—were surveyed to determine what information to use to assess quality (Clark, Hartnett, & Baird, 1976). Four major conclusions resulted from this study. First, it was determined that timely, relevant, and useful information (program characteristics) related to educational quality could be reasonably obtained. Second, approximately 30 program characteristics were identified as especially useful. Third, these program characteristics appeared to be applicable across diverse program areas. Fourth, two clusters of program characteristics were identified: research-oriented indicators and educational-experience indicators. The research-oriented indicators included department size, reputation, physical and financial resources, student ability,


and faculty publications. The educational-experience indicators were concerned with the educational process and academic climate, faculty interpersonal relations, and alumni ratings of dissertation experiences. The CGS-ETS study used faculty, students, and alumni input in a separate peer-rating component of the study similar in approach to the ACE studies. One finding of this component of the study was that reputational ratings of graduate programs had little relationship to teaching and educational effectiveness as measured by the input of students and alumni. Clark et al. (1976) concluded that the peer ratings were based primarily on scholarly publications with little or no emphasis on the quality of instruction. The CGS-ETS study demonstrated that information useful for determining educational quality could be identified. Furthermore, that study demonstrated that the information identified as useful consisted of multiple indicators of quality that appeared to be applicable across program areas. This is supportive of the view taken in this study that the multiple indicators of quality identified in the IRC project (Steuart & Rathburn, 1982) were representative of some underlying structure of the multiple indicators of quality, the dimensions of which should remain invariant across program areas. The Clark et al. (1976) study and the IRC study (Steuart & Rathburn, 1982) defined several dimensions of quality based upon the program characteristics identified in the respective studies as useful in assessing program quality. However, the dimensions were defined in both studies on the basis of the perceived similarity of the content of clusters of program characteristics and were not defined by the utilization of the technique of factor analysis as was done in this study.


31 Undergraduate Education Although considerably fewer studies have been conducted to assess quality at the undergraduate level than at the graduate level, the studies rating undergraduate education have demonstrated that colleges differ substantially in traditional measures of quality. Jordan (1963), in a study involving undergraduate programs, found that those institutions that spent more on salaries for library staff and had higher numbers of library volumes per student tended to score higher on a quality index based upon multiple weighted factors. Brown's (1967) study of undergraduate education ranked colleges on the basis of eight criteria including total current income per student, proportion of students entering graduate school, proportion of graduate students, number of library volumes per student, total number of full-time faculty, faculty-student ratio, proportion of faculty with doctorate, and average faculty compensation. These two studies represented approaches to undergraduate quality assessment similar to those utilized for graduate programs. Lawrence and Green (1980) expressed the opinion that these and similar studies (Dube, 1974; Krause & Krause, 1970; Tidball & Kistiakowski , 1976) that used quality measures more typically associated with graduate quality assessment (e.g., publication record of students, percent of students who finish professional schools or terminal graduate degrees, etc.) failed in their purpose because they did not take into account the "special nature of the undergraduate experience" (p. 33). Astin, through a series of studies (1965, 1971; Astin & Henson, 1977) approached one specific aspect of undergraduate quality that he termed the selectivity index. Astin (1971) defined the selectivity index as a relative measure of the academic ability of a college's entering freshmen.


In another study involving the selectivity index, Astin and Henson (1977) used ACT and SAT scores to approximate the selectivity of all accredited two- and four-year institutions. Astin and Henson defended their approach on the basis of its acceptance by the mainstream of faculty and administrators in higher education (p. 2). The validity of the approach was supported by its positive correlations with selected institutional characteristics such as student-faculty ratios (Astin & Solmon, 1979). In a related study, Astin developed the selectivity index further by examining the preferences of academically talented students for various institutions (Astin & Solmon, 1979). Although they realized that this measure was confounded by a number of variables such as institutional popularity and regionalism, Astin and Solmon maintained that a measure of an institution's drawing power for highly able students was a valid quality measure (p. 47). In a later study of undergraduate education quality, Astin and Solmon (Astin & Solmon, 1981; Solmon & Astin, 1981) expanded their view of quality. They utilized faculty members representing seven disciplines from institutions in four states (California, Illinois, New York, and North Carolina) to rate institutions from a national list and a state list. The state list included those institutions in a rater's state that awarded a minimum of five undergraduate degrees in a rater's field during 1977. The national list was composed of 100 of the "most visible institutions in the rater's field" (Astin & Solmon, p. 14). Each rater was asked to evaluate each institution from both lists according to six quality criteria including overall quality of undergraduate education, preparation of students for graduate and professional school, preparation of students for employment after college, faculty commitment to undergraduate


teaching, scholarly or professional accomplishments of faculty, and innovativeness of curriculum and pedagogy (p. 24). Utilizing a factor analysis of the mean ratings on each of the quality criteria for each of the undergraduate disciplines, Astin and Solmon (1981) concluded that these ratings showed that the seven fields form a single "overall quality" dimension. In practical terms, this means that quality differences among fields at a given institution tend to be minimal, and that ratings of one department may suffice as an estimate of the quality in the other departments at the institution (pp. 14-15). Considering that only six quality criteria were used in the study, the conclusion appeared warranted. Probably the best known studies of undergraduate quality, the Gourman studies (1967, 1977), provided little explanation of the procedures used to arrive at the reported ratings. Scores on two sets of variables—strength of an institution's academic departments and quality of nondepartmental areas—were used to produce an average academic department rating, an average nondepartmental rating, and an overall "Gourman rating" for each institution. Although the Gourman ratings were accepted as a viable measure of undergraduate quality, several of the assumptions used in the ratings were questionable. Gourman assumed that, at minimum, 10 years were required following graduation to produce an excellent classroom teacher and thus rated older faculty higher. Gourman gave equal weight to faculty effectiveness, public relations, library, a college's alumni association, and the athletic-academic balance as measures of institutional quality. Gourman held a bias toward larger institutions, consistently rating them higher than smaller liberal arts colleges (Lawrence & Green, 1980). In


1977, Gourman changed the format of his ratings, making it similar to that of the 1970 Roose-Andersen study. Gourman rated 68 undergraduate programs in 1977, again providing no information on the procedures used in developing the ratings. Utilizing approaches such as those discussed, other researchers have addressed the issue of quality in undergraduate education (Johnson, 1973; Nichols, 1966; Solmon, 1975). Other, possibly less academic, attempts to evaluate undergraduate quality included the popular college guides (e.g., Hawes Comprehensive Guide to Colleges, 1978). Webster (1981) criticized many of these attempts on the basis of their limited view of the undergraduate experience. Central to his criticism was the lack of emphasis on undergraduate teaching in preparation for the job market and the overriding view of undergraduate programs serving primarily as preparatory periods for graduate study. Very little research has been conducted in the community/junior college setting in relation to the quality issue. In general, many of the premises underlying traditional views of quality in higher education run in opposition to the basic principles of the community college philosophy. An example of this is the discrepancy between the selectivity index (Astin & Solmon, 1979) and the open door admission policy of most community colleges. One of the more quoted studies of educational quality in the community college setting involved the identification of quality indicators from peer opinions expressed in evaluations of selected junior colleges during accreditation team visits (Walters, 1970). Walters identified 58 specific indicators from a list of more than 500 recommendations made in accreditation team reports on 126 public junior colleges from 1960 to 1968. Most


35 of the indicators related to college procedures, the efficiency of operations, staffing levels, and organizational structure. Walters postulated that the 58 indicators, taken collectively, described a quality public junior college. Only two of the indicators were based on any specific quantitative measures. Another study of educational quality in the two-year college, the Pike study (1963), involved an analysis of the relationship of current expenditures, enrollment, and expenditures per student to certain variables associated with educational quality in junior colleges in Texas. The IRC project (Steuart & Rathburn, 1982), which generated the data used in the present study, surveyed 631 administrators representing 24 of Florida's public community colleges to determine what information was perceived as useful in making decisions about the quality of programs or services offered by their colleges. In that project, the administrators rated 434 program characteristics for degree of usefulness in quality-evaluation decision making. More than 100 program characteristics were identified as highly useful. The program characteristics identified as highly useful were organized on the basis of perceived similarity of content into 12 types of information including the need for and structure of a program, program size, program costs, program utilization rates, support services related to a program, information on students entering a program, information on students currently enrolled in a program, information on faculty or staff associated with a program, information from external or internal evaluations of a program, quantitative outputs of a program, ratings of a program by various types of raters, and information on students transferring from a program to upper division (pp. 63, 145-146).


36 Similar to most of the studies of quality in graduate education, none of the studies of quality in undergraduate education except the IRC project (Steuart & Rathburn, 1982) investigated the usefulness in quality-evaluation decision making of the various quality indicators used in the studies. Also, although multiple program characteristics have been used as indicators of quality, no study has attempted to identify any underlying dimensions for the multiple indicators except Astin and Solmon (1981). Although the indicators of quality in the Astin and Solmon study were so broad and so few that the dimensions identified are probably spurious, they did demonstrate the use of the factor-analytic technique in identifying underlying dimensions of indicators of quality. Quantifiable Approaches to Quality In recent years, higher education researchers have explored numerous ways of providing objective measures of educational quality. Many of these attempts have involved correlating various quantifiable measures with established rankings of institutional quality. These measures included, among others, institutional size (Elton & Rose, 1972; Hagstrom, 1971), research productivity (Drew, 1975; Wispe, 1969), publication productivity (Lewis, 1968), amount of money spent (Ousiew & Castetter, 1960), and number of library volumes (Lazarsfield & Thielens, 1958). Many of these "correlates of prestige" (Lawrence & Green, 1980, p. 23) used the popular ACE ratings as their basis for comparison. Cartter (1966), anticipating the identification of quantifiable quality indicators in his ratings, stated that such indicators "are for the most part 'subjective' measures once removed" (p. 4). The list of factors was lengthy that positively correlated with reputational quality ratings. Blackburn and Lingenfelter (1973) listed the


following items as being positively correlated with the 1966 ACE ratings:
1. Magnitude of the doctoral program.
2. Amount of federal funding for academic research and development.
3. Non-federal current fund income for educational and general purposes.
4. Baccalaureate origins of graduate fellowship recipients.
5. Baccalaureate origins of doctorates.
6. Freshman admissions selectivity.
7. Selection of institutions by recipients of graduate fellowships.
8. Postdoctoral students in science and engineering.
9. Doctoral awards per faculty member.
10. Doctoral awards per graduate student.
11. Ratio of doctorate to baccalaureate degrees.
12. Compensation of full professors.
13. The proportion of full professors on a faculty.
14. Higher graduate student/faculty ratios.
15. Departmental size of seven faculty members or more. (p. 11)
Fotheringham (1978) described traditional quality indicators as including context, faculty input, faculty-student interaction, and student input. Fotheringham defined context as "the setting for the educational process" (p. 17). The context variables included number of library volumes, administrative policies, physical facilities, and similar variables. Pike (1963), in his study of the relationships among 72 variables associated with educational quality, including enrollment, current expenditures, and expenditure per student, found expenditures to be the most important measure of context. Banghart, Kraprayoon, and Clewell (1978) identified other context variables including curriculum, administrative practices, and amount of external funding. Meder (1955) defined faculty input as including an instructor's training, skill, ability, and morale. Blackburn and Lingenfelter (1973) included degrees, awards, faculty compensation, and post-doctoral studies as indicators of faculty input. Other faculty input indicators


included research productivity (Hagstrom, 1971), publication productivity (Somit & Tanenhaus, 1964), and faculty size (Balderston, 1970). The faculty input indicators identified as most difficult to measure included faculty morale, vigor, cohesion, and progressiveness, which Balderston (1974) suggested could only be measured subjectively. Faculty-student interaction has been traditionally defined as the faculty-student ratio (Meder, 1955). That definition has been expanded to include the accessibility of the faculty (Roose & Andersen, 1970) as well as the extent and nature of the faculty contact with students (Fotheringham, 1978). Student input indicators of quality have often been held as the most valuable type of indicator. Fotheringham (1978) defined student input as the characteristics of the student at the time of admission. Blackburn and Lingenfelter (1973) proposed a more comprehensive definition simply as the students' quality. Many researchers concluded that not enough has been done to control for variations in student input indicators when measuring various outcome indicators of quality (Richards, Holland, & Lutz, 1966; Rock, Centra, & Linn, 1969). Fotheringham (1978) cited three more categories of quality indicators that he labeled output, student change, and intellectual climate. Output was described as including both faculty output (publications and other productivity measures) and student output (accomplishments of students following graduation). Variability in the specific measures used to assess output indicators was reflected in the work of Keller (1969) and Lawrence, Weathersby, and Patterson (1970). The student change indicators related to the extent of learning that took place during the students' enrollment (Turnball, 1971). Ostar


39 (1973) described this as the value-added concept. It was his opinion that in the assessment of the development of students, specific attention should be given to their initial abilities and their goals. Measures of student change included post-graduate employment, personal achievements, motivation, and achievements in graduate school according to Fotheringham (1978). Fotheringham (1978) defined intellectual climate as "an attitude toward learning and scholarship shared by students, faculty, and administration" (p. 26). Several researchers have expressed the opinion that campus climate is of primary importance in assessing institutional quality (Astin, 1963; Boyer, 1964; Bowen, 1963). Indicators in this category included both academic attributes, such as faculty concern for scholarship, and non-academic attributes such as students' residential experience, democratic participation of the students in campus affairs, and counseling or other supplementary services. Although multiple quantifiable indicators of quality have been identified in these studies, none of the studies investigated the possibility of identifying underlying dimensions of the multiple indicators to facilitate providing information to decision makers in a format useful in quality-evaluation decision making. The IRC study (Steuart & Rathburn, 1982) included some program characteristics representative of many of these quantifiable indicators of quality which is another reason the data from that study provided an excellent opportunity for identifying underlying dimensions for information useful in program qualityevaluation decision making. A discussion of the utility of the technique of factor analysis for identifying any underlying dimensions of a multi-variate data set is presented in the next section of this chapter.


Determining Underlying Dimensions: Factor Analysis

In the decision-oriented model of evaluation as described by Stufflebeam et al. (1971), once the information useful for making an evaluation has been determined in interaction with the decision maker, that information should be provided in a format useful to the decision maker. If relatively few items of information are involved, then the means of providing the information in a useful format would appear relatively straightforward. However, from the review of selected studies on quality evaluation in higher education, multiple indicators of quality have been identified. In the IRC study (Steuart & Rathburn, 1982), more than 100 program characteristics were identified as highly useful in making quality-evaluation decisions. Providing such a wide array of information in a format useful to a decision maker is a problem. Craven (1980) indicated that "providing the desired information in an appropriate format" (p. 111) is a major concern if evaluation processes are to effectively address the higher education issues of the 1980s.

Applicability of Factor Analysis

The situation of administrators in higher education attempting to use multiple indicators of quality when making quality judgments about programs or services is similar to the situation psychologists faced when evaluating human personality: interpreting multiple measures to describe or evaluate a person (Harman, 1976, p. 4). This was the context for the origin of factor analysis in psychology. It was developed as a technique to determine dimensions of personality that would facilitate the evaluation of personality (Cattell, 1950, pp. 26-27). Although it was developed within the field of psychology, the mathematical


41 techniques involved are not limited to psychological applications (Harman, 1976, p. 4). Cattell (1966) stated that the use of factor analysis was particularly advantageous where "the number of variables to be watched over and thought about is bewilderingly large . . . [and where] there has been little success after several years in reaching agreement on the major concepts [in the area of inquiry]" (p. 175). Both of these criteria appear to apply to the field of quality evaluation in higher education. Burt (in Cattell, 1966) has stated that the primary aim of factor analysis is "to discover principles of classification [of individuals or variables]" (p. 268). Simply because the technique of factor analysis originated in the field of psychology, applications of factor analysis were primarily in that field up until increasing accessibility to computers facilitated the use of the technique (Harman, 1976, p. 7). Harman (1976) has collected more than 200 studies using factor analysis in fields other than psychology including such diverse fields as economics, medicine, the physical sciences, political science, sociology, and regional science (p. 7). Also, he cited a number of taxonomic applications in fields other than psychology (pp. 7-8). Harman stated that Unlike the field of psychology, in which theory has been primary and the factor-analytic model has been used to test and modify such theory, the application of factor analysis in the areas noted has been exploratory, almost exclusively, in the hope of bringing order out of the relationships among the many variables that could now be investigated with the aid of the computer, (p. 8) Guertin and Bailey (1970) suggested numerous applications for factor analysis in the field of educational psychology (Chapter 14). The pervasiveness of its use in research in higher education is indicated by the numerous entries under the subject heading "factor analysis" in


42 each issue of Resources in Higher Education published by the Educational Resources Information Center (ERIC). The following recent studies in higher education are cited because, as in this study, factor analysis was used for discovering dimensions or categories among a set of variables. Smart (1975) used the technique in a survey of students, faculty, and administrators to determine salient dimensions of 47 institutional goals rated by respondents for degree of importance to a college. In a survey of a stratified random sample of 722 Minnesota citizens, Biggs, Brown, and Kingston (1977) used factor analysis to determine "categories of educational values" (p. 157) from respondents' ratings of the importance of various university goals and activities, the importance of various academic fields, and the importance of various reasons for students attending the University of Minnesota. During the development of a model for evaluating educational innovations, Bess and Hayes (1970) used factor analysis as a means "of assembling meaningful clusters of student characteristics into subcultures" (p. 44) from students' responses to a questionnaire that was devised to measure a combination of student personality characteristics, value orientations, attitudes, goals, perceptions, and behaviors. In a study to investigate the possibility of clustering academic departments on dimensions that could provide an equitable basis for departmental funding, Dressel and Simon (1976) used factor analysis on 35 descriptive variables representing various characteristics of the instructional load and output of academic departments to determine the dimensions for grouping the departments. At the University of Toledo, a study was done with an objective very similar to the objective in this study (Perry & Lind, 1976). In


the Perry and Lind study, factor analysis was used on the ratings by 140 department chairpersons and 272 program graduates of the importance of 33 criteria in evaluating academic programs to determine "what latent factors or dimensions were involved in the data" (p. 20). In their most recent reputational study of undergraduate educational quality, Solmon and Astin (1981) used factor analysis to determine patterns among the ratings of seven discipline areas in selected American undergraduate institutions by faculty representing undergraduate institutions in four selected states. Each of these studies is illustrative of the use of factor analysis for discovering categories or dimensions of an underlying pattern within a set of variables. It appeared appropriate in this study to use factor analysis to determine the underlying dimensions of the multiple indicators of quality identified in the IRC project (Steuart & Rathburn, 1982) and to use those dimensions in developing guidelines for organizing the identified information into a format useful to administrators in making quality-evaluation decisions about programs.

Definition of Factor Analysis

Spearman is generally credited with the origin of factor analysis in his development of a psychological theory involving the specification of a general factor and a number of specific factors related to describing general intelligence: the two-factor theory (Harman, 1976, p. 3). Finding Spearman's theory insufficient to describe a battery of psychological tests, other psychologists explored the possibility of extracting several general or common factors from a matrix of correlations among tests. These explorations led to the development of multiple-factor analysis (Harman, p. 4).


The principal concern of factor analysis is the resolution of a set of variables into a smaller number of categories or "factors." The resolution is accomplished by analysis of the correlations among the variables within the set. A satisfactory resolution produces a set of factors (or categories or dimensions or variables) smaller than the original set of variables that conveys the essential information of the original set of variables. Thus, "the chief aim [of factor analysis] is to attain scientific parsimony or economy of description" (Harman, p. 4). Economy of description is precisely the goal in providing to decision makers, in a useful format, the information represented by multiple indicators of quality. As Fox (1969) stated, factor analysis is a procedure for "identifying the underlying structure of the interrelationships expressed in the correlational matrix [of a set of variables]" (p. 216). The procedure estimates the minimum number of separate variables or dimensions, called factors, necessary to provide the information contained in the correlation matrix (Fox, p. 216).

Steps in Factor Analysis

Fox (1969) described the procedure of factor analysis as typically involving a five-step process (pp. 216-218). The first step is to identify the variables to be studied. The second step is to create a matrix of correlations expressing the correlation between each pair of variables in the set of variables being studied. The third step is "to put this matrix through the first computational process of factor analysis that produces what is called an unrotated matrix of principal components, from which the minimum number of separate factors required to account for the data can be identified" (p. 217). A full description of the calculation procedures is presented in Harman (1976).
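To make the second and third steps concrete, a minimal Python sketch is given below. It is only an illustration of the general computation, not a reproduction of any routine used in the original analysis; the ratings array, its dimensions, and the variable names are hypothetical placeholders.

import numpy as np

# Hypothetical ratings matrix: rows are respondents, columns are the rated
# program characteristics (the variables to be factor analyzed).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(450, 108)).astype(float)

# Step two: the matrix of correlations between every pair of variables.
R = np.corrcoef(ratings, rowvar=False)        # 108 x 108

# Step three: the unrotated principal components of the correlation matrix.
roots, vectors = np.linalg.eigh(R)            # eigh returns ascending order
order = np.argsort(roots)[::-1]
roots, vectors = roots[order], vectors[:, order]
loadings = vectors * np.sqrt(roots)           # unrotated component loadings

print(roots[:5])                              # the largest latent roots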


Harman (1976) described two basic approaches to the calculations involved (pp. 14-15). Within the framework of the linear mathematical model used in factor analysis, the calculations can either extract the maximum variance or best reproduce the observed correlations (p. 14). The method for the reduction of a large body of data so that the maximum variance is extracted was first proposed by Pearson and later developed as the method of principal components or component analysis (p. 14). In contrast to the maximum variance approach is the classical factor-analysis model developed to maximally reproduce the correlations. It is generally called common-factor analysis because each of the observed variables involved in the analysis is defined linearly in terms of a number of common factors and a unique factor (p. 15). "The common factors account for the correlations among the variables, while each unique factor accounts for the remaining variance (including error) of that variable" (p. 15). The common-factor analysis approach was used in this study because the intent was to determine as clearly as possible the dimensions (interrelationships) among the variables involved and not to determine the amount of variance attributable to a variable or a group of variables (see Guertin & Bailey, 1970, pp. 82-83). The method of calculation generally used for common-factor analysis was described by Thurstone and has been labeled the "principal axes solution" (in Guertin & Bailey, 1970, p. 61). The essential difference between the methods is whether in the mathematical computations unities are inserted in the diagonal of the correlational matrix (component analysis) or whether "communalities" are inserted (common-factor analysis) (Harman, 1976, p. 70). According to Guertin and Bailey (1970), the use of unities in the diagonal of the correlation matrix causes the


intercorrelation matrix to take on a higher rank than it would with values less than unity in the diagonal (p. 33). Since the object in factor analysis is to find the minimum number of factors or dimensions or variables necessary for economy of description of the total set of variables, values less than one are desired in the diagonal (Guertin & Bailey, p. 33). The values less than one in the diagonal are called "communalities." The communalities express the amount of the common-factor variance (the variance shared with all the other variables in the analysis) (Guertin & Bailey, p. 33). The correlation matrix with communalities rather than unities in the diagonal is called the reduced intercorrelation matrix (Guertin & Bailey, p. 33). One of the problems encountered in common-factor analysis is that the appropriate communalities are not easily computed with precision, and various methods of estimating them have been developed. The best estimate of the communalities appears to be the squared multiple correlations of each variable with the remaining variables (Guertin, 1977, p. 21). On the other hand, Harman (1976) stated that "it matters little what values are placed in the principal diagonal of the correlation matrix when the number of variables is large (say, n > 20)" (p. 86), because the number of values in the diagonal is relatively small compared to the many values off the diagonal, so the factorial results are little affected (p. 86). However, the use of communalities in the diagonal prior to factor extraction makes it possible to obtain the maximum amount of common-factor variance, a chief emphasis of common-factor analysis (Guertin, 1977, p. 22). Once the principal axes factors have been extracted from the reduced intercorrelation matrix through the processes involved in step three,


they can then be rotated to gain the clearest view of the common-factor space or configuration. This is step four of the factor-analysis process described by Fox (1969, p. 217). Rotation is performed mathematically, but the concept of rotation is based upon geometry. A clear description of the relationship may be found in Guertin and Bailey (1970, pp. 26-34 and Chapter 6). The reason for rotation is that although the initial factors may be mathematically satisfactory solutions, the factors themselves may have little meaning relative to determining constructs or principles of concern to the investigator (Guertin & Bailey, 1970, pp. 87-88). Since the principal axes method extracts the maximum possible common variance, the primary decision in rotation becomes that of determining the number of principal axes to carry into rotation to gain the clearest picture of the common factors (Guertin, 1977, p. 22). At this point in the factor-analysis process, another major problem is encountered: what criterion or criteria to use to decide what number of factors to carry into rotation (Guertin & Bailey, 1970, Chapter 7). Guertin (1977) stated that the universally accepted criterion that is followed is Thurstone's principle of simple structure, which yields factors that are relatively invariant across studies (p. 22). Guertin and Bailey (1970) asserted that the simple structure criteria not only provide a unique solution but at the same time assure meaningful factors (p. 42). In simplest terms, the concept of simple structure dictates that both variables and factors be described by a minimum number of sizable loadings (Guertin, 1977, p. 22). In reference to the matrix representation of factors (columns) and variables (rows), the concept of simple structure specifies that the columns (factors) should have the largest possible


number of zero or negligible loadings (values), the rows (variables) should have the largest possible number of zero or negligible loadings (values), and every pair of columns (factors) should have the largest possible number of values approaching zero in one column (factor) (Guertin & Bailey, 1970, p. 99). The ideal situation would be to have each variable have a high loading on only one of the factors and for each factor to have only a few variables with high loadings with all the other variables having loadings approaching zero on that factor (Guertin & Bailey, 1970, p. 98). To approximate the ideal of simple structure for a given factor matrix, the factors may be rotated in either an oblique or an orthogonal fashion (Guertin, 1977, p. 22). As with the term rotation, these terms reference a geometric perspective. Conceiving of the factors as dimensions (vectors), an orthogonal rotation assumes that the factors are unrelated and places the factors (vectors) in relation to each other at 90° angles. An oblique rotation is not held to that criterion. According to Guertin and Bailey (1970), with the use of real data, true simple structure must provide for correlated factors so an orthogonal representation of factor space is unsatisfactory (p. 100). They recommend the use of the oblique rotation procedures and if that results in factors that are only slightly correlated, then an orthogonal rotation may be performed (p. 101). It is their opinion that it is necessary to use oblique rotation procedures to properly represent underlying dimensions or factors of a set of variables (p. 89). The utilization of rotation to identify simple structure completes step four of the factor-analysis process as outlined by Fox (1969, p. 217). The resulting matrix is the factor pattern and the values forming


this matrix are called the factor loadings (Fox, 1969, p. 217; Harman, 1976, p. 113). The loadings have the same characteristics as correlation coefficients in that they are two-digit decimal numbers in the range of +1.00 to -1.00 through a midpoint of zero. A variable can have a positive or negative loading on a factor, and the sign indicates whether the factor operates to raise or lower the value of that particular variable (Fox, 1969, pp. 217-218). The magnitude of the loading indicates the importance of the factor on each variable (p. 218). The fifth and final step in the factor-analysis process as outlined by Fox (1969) is for the researcher to label the factors (p. 218). Generally, this involves determining the variables that have relatively high loadings on a factor and then abstracting a term or concept that reflects the content of these variables (p. 218; see also Guertin & Bailey, 1970, p. 87). This description of factor analysis has presented only the salient features of the process related to this study. A thorough discussion of factor analysis may be found in Harman (1976). For the less mathematically inclined person, Guertin and Bailey (1970) present an excellent description of factor analysis.
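The fifth step can be illustrated with a short helper of the kind assumed here; the function name is ad hoc, and the .50 cutoff is simply the threshold adopted later in this study. The helper lists, for each factor, the variables whose loadings are large enough to consider when abstracting a label.

import numpy as np

def salient_variables(loadings, threshold=0.50):
    """For each factor (column), return the variables (rows) whose absolute
    loadings meet the threshold, as an aid in labeling the factor."""
    salient = {}
    for j in range(loadings.shape[1]):
        rows = np.flatnonzero(np.abs(loadings[:, j]) >= threshold)
        salient[j + 1] = [(int(i) + 1, round(float(loadings[i, j]), 2))
                          for i in rows]
    return salient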


CHAPTER III
METHODOLOGY

The Problem

The problem in this study was the identification of any underlying dimensions within the multiple quality indicators rated by administrators in Florida public community/junior colleges as highly useful in making program quality-evaluation decisions. The research questions were: (1) What is the "best" factor structure for the usefulness ratings? (2) For the identified "best" factor structure, are there significant differences in the mean factor scores between classifications of respondents by program area and between classifications of respondents by administrative area?

Description of Data Used

The data used in this study were generated in the IRC project (Steuart & Rathburn, 1982). A full description of the methodology used in that project is in Appendix B. The survey population consisted of all administrators in Florida public community/junior colleges who were classified by their institutions as executive, administrative, or managerial personnel under part three of the "Personnel and Salary Report (SA-1)" as defined in the Community College Management Information System Procedures Manual of the State of Florida (Division of Community Colleges, 1980, pp. 10.1-10.2). There were 631 administrators identified and 450 respondents representing 24 of Florida's 28 public community/junior colleges for a response rate of 71.3% (Steuart & Rathburn, 1982, p. 49).


The responding administrators rated 434 program characteristics, contained in a survey questionnaire (Appendix C), for degree of usefulness in program quality-evaluation decision making. The rating scale was
1 = ESSENTIAL ("I do not see how I could make a judgment about the quality of a program without considering this characteristic.")
2 = VERY USEFUL ("I would feel hindered in making a judgment about the quality of a program without considering this characteristic, but I would make a judgment without it.")
3 = SOME USEFULNESS ("Although I would like to consider this characteristic in making a judgment about the quality of a program, I would not feel hindered in making a judgment without it.")
4 = LITTLE OR NO USEFULNESS ("I probably would not consider this characteristic in arriving at a judgment of the quality of a program.") (Steuart & Rathburn, p. 144)
Also, any program characteristics that were considered "not applicable" by the raters were rated with a "4" (Steuart & Rathburn, p. 144). Each respondent was assigned a "position code" (p. 46) based upon a self-reported position title on each questionnaire. The position codes, a description of the position titles associated with each code, and frequencies of respondents for each code are reported in Appendix D. The program areas and administrative areas used to classify the responding administrators were defined as follows:
Program Areas
Advanced and Professional Program Area--commonly referred to as university parallel, the first two years of a baccalaureate program.
Occupational Program Area--or vocational-technical education, terminal certificate or degree programs preparing students for employment in a specific trade or field.
Community Instructional Services Program Area--programs of short, credit or noncredit classes designed to provide enrichment for students.


Developmental Program Area--or compensatory education, designed to assist students in improving deficient basic skills necessary for program-required work.
Student Services Program Area--various auxiliary services provided to students facilitating their progress through one of the program areas, including such services as counseling, student activities, admissions, financial aid, etc.
Administrative Areas
General Administration--respondents with responsibilities of a general nature in the operation of the college's programs or services.
Academic Affairs--respondents with responsibilities of administering one or more of the college's academic programs.
Student Affairs--respondents with responsibilities of administering one or more of the college's student services programs.
Community Instructional Services--respondents with responsibilities of administering the college's adult and continuing education or community instructional services programs.
Business Affairs--respondents associated with the operation of the business offices (budget, accounting, personnel, etc.) of the college.
President--the chief executive officer of the college. (Steuart & Rathburn, 1982, Appendix A)
Based upon their position titles, only respondents who were perceived as having major responsibility in one of the five program areas were included in the analysis by program areas. For example, presidents, vice presidents, research and planning directors, and other administrators with responsibilities across program areas were not included in the analysis by program areas. All respondents were included in the analysis by administrative areas. Operational definitions for these classifications are given in Appendix A. Mean ratings were calculated for each program characteristic in the questionnaire. Using these means, ranks were calculated for the program characteristics based upon the responses of all respondents (N = 450). When the ranks for two or more program characteristics were tied, the tied values received the mean of the ranks that would have been assigned had the ranks not tied (Steuart & Rathburn, 1982, p. 54).
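A brief sketch of this ranking convention follows; the array of mean ratings is hypothetical, and the sketch simply shows that average-rank handling of ties corresponds to assigning tied characteristics the mean of the ranks they would otherwise have received.

import numpy as np
from scipy.stats import rankdata

# Hypothetical column means of the usefulness ratings
# (1 = essential ... 4 = little or no usefulness).
mean_ratings = np.array([1.38, 1.52, 1.52, 2.05, 1.90])

# Lower means indicate greater usefulness, so rank 1 is the most useful item;
# tied values receive the mean of the ranks they would otherwise occupy.
ranks = rankdata(mean_ratings)   # -> [1. , 2.5, 2.5, 5. , 4. ]
print(ranks)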


Only those program characteristics that were in the top quarter of the ranked mean ratings were discussed in the presentation of results for all respondents (N = 450) in the IRC report (Steuart & Rathburn, 1982, p. 54). All 108 program characteristics in the top quarter had a mean rating on the "essential" side of the rating scale (p. 62). The mean ratings of these 108 program characteristics ranged from 1.38 to 2.05 (p. 54). The analyses in this study included only these 108 program characteristics. The means for these program characteristics are reported in Appendix E.

Analysis of the Data

Research Question One

To discover the best factor structures for the usefulness ratings, two sets of data were used for analysis. An analysis was performed based upon those respondents who rated all 108 program characteristics (i.e., respondents with missing data were excluded). There were 315 such cases. The same analysis was performed using the ratings of all 450 respondents by changing any missing rating for an item to the mean rating for respondents rating that item. The use of all 450 respondents was desirable so that all respondents could be included in the comparisons of factor scores between program areas and between administrative areas (research question two). The following procedures for obtaining the best factor structure were performed on each of these sets of data, and the results were compared through use of the coefficient of congruence for matching factors, inspection of the difference in the root-mean-square values, and the criteria for simple structure (Guertin & Bailey, 1970, p. 99; Harman, 1976, pp. 343-344).
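For the congruence comparison, the coefficient of congruence between two loading vectors a and b is the sum of the products of their elements divided by the square root of the product of their sums of squares. A minimal sketch, assuming two loading matrices with the same variables in the same order and hypothetical function names, is:

import numpy as np

def congruence(a, b):
    """Coefficient of congruence between two factor-loading vectors."""
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def congruence_table(L1, L2):
    """Coefficients of congruence between every pair of factors (columns)
    in two loading matrices describing the same variables."""
    return np.array([[congruence(L1[:, i], L2[:, j])
                      for j in range(L2.shape[1])]
                     for i in range(L1.shape[1])])

# Matched factors with coefficients of .90 or greater would be treated as
# congruent, following the criterion adopted in this study.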


The first step in the analysis was the production of the correlation matrices representing the correlations between the ratings of all possible pairs of the 108 program characteristics. These correlation matrices constituted the basis for what has been defined as an R analysis (Cattell, 1950, p. 28). An R analysis consists of looking at the interrelationships of variables (program characteristics) rather than cases (respondents) (Cattell, 1950, pp. 30-31). The correlation coefficients represented the degree of similarity in the ratings by the respondents of any pair of program characteristics. The correlation matrices were factor analyzed using the principal axes method with iterations. It has been described as the most widely used technique in determining the initial principal axes (Guertin & Bailey, 1970, p. 62; Harman, 1976, p. 133). Following Guertin and Bailey's (1970, p. 101) suggestion, the principal axes matrices were submitted initially to an oblique rotation to determine whether the factors were essentially uncorrelated. The direct oblimin rotation procedure (Jennrich & Sampson, 1966) was used with gamma equal to zero. Program P4M was used in the BMDP Biomedical Computer Programs P-Series 1979 (Dixon & Brown, 1979). The squared multiple correlations were used as the initial estimates of the communalities (Guertin & Bailey, 1970, pp. 147, 163). The number of factors to carry into successive rotations was determined by inspecting the results for decrements in the latent roots, the cumulative percentages of common variance for which successive factors accounted, and the criteria for simple structure (Guertin & Bailey, 1970, pp. 115-120). Since the factors proved to be essentially uncorrelated, the principal axes matrices were then submitted to an orthogonal rotation. The varimax method for orthogonal rotations was used since


55 there appeared to be general agreement that this method was preferred with regard to giving the closest approximation to simple structure (Guertin & Bailey, 1970, pp. 98-99; Harman, 1976, Chapter 14). Again the number of factors to carry into successive rotations was determined by inspecting the results for decrements in the latent roots (eigenvalues), the size of the latent roots, the cumulative percentages of common variance for which successive factors accounted, and the criteria for simple structure (Guertin & Bailey, 1970, pp. 115-120). The factor procedure in the SAS computerized package was used for the orthogonal rotations (SAS Institute, Inc., 1979, pp. 203-210). The resulting factor solutions from both sets of respondents (N = 315 and N = 450) were compared for congruence using the coefficient of congruence (Harman, 1976, pp. 343-346). If the coefficient of congruence between any pair of factors was .90 or greater, the factors were considered congruent (Mulaik, 1972, p. 355). Since the factor structures were congruent, the factor structure based upon the set of 450 respondents (with missing values set equal to the mean value for that variable) was selected as the best representation of the underlying dimensions of the 108 indicators of quality. The loadings of the variables on each factor in this factor structure were inspected. Any variable having a loading of .50 or greater was considered in determining the meaning of a factor (Guertin & Bailey, 1970, pp. 78, 81). Based upon the nature of the program characteristics with a .50 or greater loading, each factor was described. With the description of the factor structure, the methodology involved in the first research question was completed.
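The extraction-and-rotation sequence just described can be sketched in Python as follows. This is an illustration of principal axes factoring with iterated communalities (started from the squared multiple correlations) followed by a varimax rotation; it is not a reproduction of the BMDP P4M or SAS FACTOR routines actually used in the study, the oblimin step is omitted, and the function names are ad hoc.

import numpy as np

def smc(R):
    """Squared multiple correlation of each variable with all the others,
    used as the initial communality estimates."""
    return 1.0 - 1.0 / np.diag(np.linalg.inv(R))

def principal_axes(R, n_factors, n_iter=25):
    """Iterated principal axes factoring of a correlation matrix R."""
    h2 = smc(R)
    for _ in range(n_iter):
        R_reduced = R.copy()
        np.fill_diagonal(R_reduced, h2)          # reduced correlation matrix
        roots, vectors = np.linalg.eigh(R_reduced)
        keep = np.argsort(roots)[::-1][:n_factors]
        loadings = vectors[:, keep] * np.sqrt(np.clip(roots[keep], 0.0, None))
        h2 = (loadings ** 2).sum(axis=1)         # re-estimated communalities
    return loadings, h2

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (variables x factors)."""
    n, k = loadings.shape
    T = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ T
        B = loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / n)
        U, s, Vt = np.linalg.svd(B)
        T = U @ Vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1.0 + tol):
            break
        criterion = new_criterion
    return loadings @ T

# For example, a 10-factor solution followed by a varimax rotation:
# L, h2 = principal_axes(R, n_factors=10)
# L_rotated = varimax(L)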


Research Question Two

For the second research question, the best factor structure, as determined by the methodology for the first research question, was used to calculate factor scores for the respondents. The regression method was used for the factor score computations (SAS Institute, Inc., 1979, p. 204). The score procedure in the SAS computerized package was used (SAS Institute, Inc., 1979, pp. 371-372). Mean factor scores were determined for the respondents classified by the described program and administrative areas. The differences in mean factor scores between the program areas and between the administrative areas were tested for significance using the t statistic at the .10 level of significance. Since the variances of the factor scores for some of the program areas and some of the administrative areas were significantly unequal, as tested by use of the F statistic at the .05 level of significance, it was inappropriate to perform an analysis of variance prior to testing for significant differences between mean factor scores. Also, since the likelihood of a Type I error increases as the number of contrasts tested increases, the Bonferroni correction for the t statistic was used (Myers, 1979, pp. 298-300). Essentially, this correction results in rejection of the null hypothesis (i.e., there is no significant difference in the means) when the obtained t exceeds the value of t in the standard t table at a level of significance equal to the selected level of significance for the comparisons (.10 in this study) divided by the number of comparisons. Since there were 10 comparisons between the program areas and 15 comparisons between the administrative areas, the obtained t for these comparisons had to exceed the value in the t table at .01 (.10 divided by 10) and .007 (.10


divided by 15) levels of significance, respectively, for rejection of the null hypothesis. Where the variances were significantly different for the factor scores being compared, the t value was calculated on the assumption of unequal variances (SAS Institute, Inc., 1979, p. 425). The t-test procedure in the SAS computerized package was used (SAS Institute, Inc., 1979, pp. 425-426). Using the results of these analyses, guidelines were formulated for organizing the multiple indicators of quality into a format useful to administrators in Florida public community/junior colleges in making quality-evaluation decisions about programs offered by their colleges.
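A hedged sketch of this comparison step follows. The scores_by_group mapping, group labels, and function name are hypothetical, and the factor scores for each group are assumed to have been computed already (the regression method in SAS PROC SCORE was used in the study itself); the sketch simply pairs an F test for equality of variances with pooled or unequal-variance t tests evaluated against the Bonferroni-adjusted significance level.

from itertools import combinations
import numpy as np
from scipy import stats

def bonferroni_t_tests(scores_by_group, alpha=0.10, var_alpha=0.05):
    """Pairwise t tests on one factor's scores across groups, with the
    per-comparison significance level divided by the number of comparisons."""
    pairs = list(combinations(scores_by_group, 2))
    per_test_alpha = alpha / len(pairs)            # e.g., .10/10 or .10/15
    results = {}
    for g1, g2 in pairs:
        a = np.asarray(scores_by_group[g1], dtype=float)
        b = np.asarray(scores_by_group[g2], dtype=float)
        # F test for equality of variances decides which t test to apply.
        f = np.var(a, ddof=1) / np.var(b, ddof=1)
        p_f = 2 * min(stats.f.cdf(f, len(a) - 1, len(b) - 1),
                      stats.f.sf(f, len(a) - 1, len(b) - 1))
        equal_var = p_f >= var_alpha
        t, p = stats.ttest_ind(a, b, equal_var=equal_var)
        results[(g1, g2)] = {"t": float(t), "p": float(p),
                             "reject": p < per_test_alpha}
    return results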


CHAPTER IV
RESULTS AND DISCUSSION

There were 450 administrators representing 24 of Florida's 28 public community colleges who rated 434 program characteristics, contained in a survey questionnaire (Appendix C), for degree of usefulness in program quality-evaluation decision making. The rating scale ranged from 1 (essential) to 4 (little or no usefulness). Only the 108 program characteristics in the top quarter of ranked mean ratings were included in the factor analysis. Based upon all 450 respondents, the mean ratings for each of these 108 program characteristics are presented in Appendix E. All of the 108 program characteristics in the top quarter of ranked mean ratings had a mean rating on the "essential" side of the rating scale. The mean ratings of the 108 program characteristics ranged from 1.38 to 2.05. The mean ratings of each of the 108 program characteristics, based upon the 315 respondents who rated all of them, are reported in Appendix E. These mean ratings ranged from 1.36 to 2.13. The Pearson product-moment correlation coefficients for the intercorrelations of the 108 program characteristics, based upon the ratings by all respondents (N = 450) with missing values for any program characteristic set equal to the mean rating for that program characteristic, are presented in Appendix F. The Pearson product-moment correlation coefficients for the intercorrelations of the 108 program characteristics, based upon the ratings by respondents with no missing responses (N = 315),


are presented in Appendix G. These two sets of correlation coefficients were used in the factor analysis.

Factor Analysis Results

The iterated principal axes factor-analytic method as applied to both sets of correlation coefficients resulted in a solution with 21 principal axes. The principal axes solution based upon N = 450 is presented in Appendix H with the final communality estimates and eigenvalues. The principal axes solution based upon N = 315 is presented in Appendix I with the final communality estimates and eigenvalues. For the principal axes solution based upon N = 315, the latent roots (eigenvalues), differences in the latent roots, cumulative variance for which successive axes accounted, and the percentage of common variance for which successive axes accounted are presented in Table 1. These were the values that were examined to determine the number of factors to carry into the initial rotations. In factor analyses, the latent roots generally fall off rapidly at first because systematic common variance is being extracted. The roots then start decreasing almost linearly as mostly error variance is being extracted. It is generally accepted that one criterion for the cutoff point for the number of factors to rotate comes just before this linear descent (Guertin & Bailey, 1970, p. 117). Although the differences in the latent roots decreased greatly after factor 5, they did not become linear until after factor 10 (Table 1). Using the differences in the latent roots, the rotation of 10 factors was indicated. The rotation of 10 factors accounted for 80.7% of the common variance compared to 64.5% accounted for by the rotation of five factors. Following the suggestion of Guertin and Bailey (1970, p. 117), one more and one less than the indicated number of factors were rotated with the

PAGE 70

Table 1
Variance Accounted for by Successive Principal Axes for N=315
[Tabular values (latent roots, differences, and cumulative and common variance percentages) are not legible in the source.]
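A brief sketch of how the quantities summarized in Table 1 arise may be helpful. The study used the iterated principal axes method; for simplicity, the code below substitutes one-shot squared-multiple-correlation communality estimates rather than iterating them, and it operates on a randomly generated stand-in correlation matrix, so its output is purely illustrative.

# A minimal sketch of the Table 1 quantities: latent roots of the reduced
# correlation matrix, the differences between successive roots, and the
# cumulative percentage of variance among the retained axes.
import numpy as np

def latent_root_table(corr, n_axes=21):
    R = np.asarray(corr, dtype=float)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # squared multiple correlations as communality estimates
    reduced = R.copy()
    np.fill_diagonal(reduced, smc)                # communalities replace the unit diagonal
    roots = np.sort(np.linalg.eigvalsh(reduced))[::-1][:n_axes]
    diffs = -np.diff(roots)                       # drop from each latent root to the next
    cum_pct = np.cumsum(roots / roots.sum() * 100.0)
    return roots, diffs, cum_pct

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.normal(size=(315, 108)), rowvar=False)   # stand-in for the observed intercorrelations

roots, diffs, cum_pct = latent_root_table(corr)
for i, (root, pct) in enumerate(zip(roots, cum_pct), start=1):
    print(f"axis {i:2d}: latent root = {root:6.3f}, cumulative percentage = {pct:5.1f}")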

PAGE 71

Table 2
Program Characteristics With Factor Loadings of .50 or Greater in the Three Rotations of the Principal Axes Solution Based Upon N=315
[Only scattered characteristic numbers survive legibly in the source; the factor loadings for the three rotations are not recoverable.]

PAGE 72

62 Table 2 (continued)

PAGE 73

63 Table 2 (continued)
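The step reflected in Table 2, listing the program characteristics whose rotated loadings reach .50 on each factor, can be sketched as follows. The excerpt does not name the specific orthogonal rotation criterion that was applied, so varimax is used below only as a common representative, and the unrotated loading matrix is randomly generated for illustration.

# A minimal sketch of rotating a loading matrix and listing, for each factor,
# the variables with rotated loadings of .50 or greater.  Varimax is used only
# as a representative orthogonal rotation; the loadings below are invented.
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a p x k loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        )
        rotation = u @ vt
        new_objective = s.sum()
        if new_objective < objective * (1.0 + tol):
            break
        objective = new_objective
    return loadings @ rotation

rng = np.random.default_rng(1)
unrotated = rng.normal(scale=0.4, size=(108, 10))     # hypothetical principal-axes loadings
rotated = varimax(unrotated)

for factor in range(rotated.shape[1]):
    salient = np.flatnonzero(np.abs(rotated[:, factor]) >= 0.50) + 1   # 1-based characteristic numbers
    print(f"factor {factor + 1}: characteristics {salient.tolist()}")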

PAGE 74

For the three rotations, the rotation of 10 factors produced the clearest common-factor structure. The rotation of 11 factors resulted in the same nine interpretable factors as the 10-factor rotation but with a slightly less clear structure. A trial rotation of 12 factors confirmed this analysis. The 12-factor rotation resulted in the fission of factors 1 and 5 into more specific factors. Therefore, the 10-factor rotation of the principal axes solution for N = 315 was chosen as the rotation most closely approximating the criteria for simple structure and producing the clearest picture of the common-factor structure for the ratings of the 108 program characteristics.

For the principal axes solution based upon N = 450, the latent roots, differences in the latent roots, cumulative variance for which successive axes accounted, and the percentage of common variance for which successive axes accounted are presented in Table 3. Using the differences in the latent roots, the rotation of 10, 11, and 12 factors was indicated. For all the program characteristics that had factor loadings of .50 or greater, the loadings that resulted from the three rotations are presented in Table 4. The complete factor structures for the three rotations are presented in Appendix K. As in Table 2, the most evident feature of the data presented in Table 4 was that, regardless of the rotation examined, there was a relatively stable nine-factor structure. For factors 1, 2, 3, 7, 8, and 9, the variables with loadings of .50 or greater on the factors were the same for the three rotations. The variables loading .50 or greater on factors 4 and 5 were the same for the three rotations with the exception of one variable that loaded slightly less than .50 (.49) on factor 4 in the 11-factor rotation and one variable that loaded less than .50 (.41)

PAGE 75

Table 3
Variance Accounted for by Successive Principal Axes for N=450
[Tabular values are not legible in the source.]

PAGE 76

Table 4
Program Characteristics With Factor Loadings of .50 or Greater in the Three Rotations of the Principal Axes Solution Based Upon N=450

PAGE 77

Table 4 (continued)
[Loadings of the program characteristics on factors 4 through 9 in the 10-, 11-, and 12-factor rotations; the tabular values are not fully legible in the source.]

PAGE 78

68 Table 4 (continued)

PAGE 79

coefficients for the intercorrelation of the factors for both N = 315 and N = 450 are presented in Table 5. Since the factors were essentially uncorrelated, with no correlation coefficient exceeding .42, the orthogonal rotation was accepted as producing the best solution for the common-factor structure.

The next task was to determine whether the 10-factor structure from the analysis based upon N = 315 was congruent with the 10-factor structure from the analysis based upon N = 450. The coefficients of congruence between comparable factors in the two factor structures are presented in Table 6. Since all the coefficients were at least .90, the two 10-factor structures were judged to be congruent.

Table 5
Intercorrelations of the Factors for the 10-Factor Rotation of the Principal Axes Solutions for N=315 and N=450
[Tabular values are not legible in the source.]

PAGE 80

Table 6
Coefficients of Congruence Between Comparable Factors for the 10-Factor Structures for N=315 and N=450
[Tabular values are not legible in the source.]
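A coefficient of congruence of the kind reported in Table 6 is conventionally computed with Tucker's formula: for two column vectors of loadings, the coefficient is the sum of their products divided by the square root of the product of their sums of squares. The sketch below is only illustrative; the two loading matrices are invented, and their columns are assumed to be already matched factor-for-factor.

# A minimal sketch of Tucker's coefficient of congruence between comparable
# factors from two loading matrices (here, invented stand-ins for the
# N = 315 and N = 450 solutions).
import numpy as np

def congruence(a, b):
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(2)
loadings_315 = rng.normal(scale=0.4, size=(108, 10))
loadings_450 = loadings_315 + rng.normal(scale=0.05, size=(108, 10))   # a slightly perturbed copy

for j in range(loadings_315.shape[1]):
    phi = congruence(loadings_315[:, j], loadings_450[:, j])
    print(f"factor {j + 1}: coefficient of congruence = {phi:.2f}")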

PAGE 81

The program characteristics with loadings of .50 or greater on factor 1 are listed in Table 7. These program characteristics concerned the fiscal, physical, and human resources used in a program, including program costs, support staffing, and equipment and space utilization. Based upon the content of these program characteristics, factor 1 was interpreted as involving the use of resources by a program. Factor 1 was identified as a common dimension underlying the ratings of the 108 program characteristics and was labeled the "Resources Usage" dimension.

Table 7
Program Characteristics With .50 or Greater Loadings on Factor 1
[The characteristic numbers and loadings are not legible in the source.]
Total cost of a program
Total cost of a program per FTE
Total cost of program per unduplicated headcount
Cost of instructional personnel per total program
Cost of instructional personnel per program FTE
Cost of instructional personnel per program unduplicated headcount
Cost of program administration per total program
Cost of program administration per program FTE
Cost of program administration per program unduplicated headcount
Cost of support services per total program
Cost of support services per program FTE
Cost of support services per program unduplicated headcount
Number of support staff per total program
Number of support staff per program FTE
Number of support staff per program unduplicated headcount
Cost of materials per program FTE
Cost of materials per program unduplicated headcount
Equipment utilization per total program
Equipment utilization per program FTE
Equipment utilization per program unduplicated headcount
Cost of equipment maintenance per total program
Cost of equipment maintenance per program FTE
Space utilization per total program
Cost of space utilized per total program

PAGE 82

The program characteristics with loadings of .50 or greater on factor 2 are listed in Table 8. These program characteristics concerned ratings of program support services and student services by students enrolled in a program and students who have completed a program. The ratings of student services included ratings of the usefulness, accessibility, and ease of use of the services. Based upon the content of these program characteristics, factor 2 was interpreted as involving student ratings of support services, including student services. Factor 2 was identified as another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Student Ratings of Support Services" dimension.

The program characteristics with loadings of .50 or greater on factor 3 are listed in Table 9. These program characteristics involved

Table 8
Program Characteristics With .50 or Greater Loadings on Factor 2

Number  Loading  Program Characteristics
61      .56      Ratings of support services by currently enrolled students
60      .58      Ratings of support services by program completers
25      .76      Ratings of usefulness of student services by currently enrolled students
39      .81      Ratings of usefulness of student services by program completers
16      .79      Ratings of accessibility of student services by currently enrolled students
36      .81      Ratings of accessibility of student services by program completers
20      .79      Ratings of ease of use of student services by currently enrolled students
50      .80      Ratings of ease of use of student services by program completers

PAGE 83

Table 9
Program Characteristics With .50 or Greater Loadings on Factor 3

Number  Loading  Program Characteristics
46      .55      Number or percent of full-time faculty/staff by a productivity ratio
71      .52      Number or percent of part-time faculty/staff by a productivity ratio
30      .58      Number or percent of full-time faculty/staff by number of course hours taught per term
87      .56      Number or percent of part-time faculty/staff by number of course hours taught per term
27      .72      Number or percent of full-time faculty/staff by number of student contact hours per term
70      .73      Number or percent of part-time faculty/staff by number of student contact hours per term
22      .76      Number or percent of full-time faculty/staff by number of students per term
56      .75      Number or percent of part-time faculty/staff by number of students per term
26      .74      Number or percent of full-time faculty/staff by average class size
49      .72      Number or percent of part-time faculty/staff by average class size
80      .60      Number or percent of full-time faculty/staff by number of FTE students per term

information about full-time and part-time faculty or staff in a program. The information included the number or percent of full-time and part-time faculty or staff by (1) their rating on some productivity ratio, (2) the number of course hours they taught per term, (3) the number of student contact hours they had per term, (4) the number of students they had per term, and (5) their average class size. Additionally, but for full-time faculty or staff only, the information included the number of FTE students they taught per term. Based upon the content of these program characteristics, factor 3 was interpreted as involving information on the productivity of faculty or staff in a program. Factor 3 was identified as

PAGE 84

another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Faculty/Staff Instructional Productivity" dimension.

The program characteristics with loadings of .50 or greater on factor 4 are listed in Table 10. These program characteristics involved information about students entering a program and students currently enrolled in a program. For both entering and currently enrolled students, the information included the number or percent of students by major area of study, by type of handicap, and by types of developmental or remedial assistance desired. For entering students only, the information included the number or percent of students by level of previous academic

Table 10
Program Characteristics With .50 or Greater Loadings on Factor 4

Number  Loadings  Program Characteristics
102     .56       Number or percent of entering students by level of previous academic achievement
77      .57       Number or percent of entering students by academic skills level as measured by local instruments
78      .50       Number or percent of entering students by major area of study
82      .55       Number or percent of currently enrolled students by major area of study
76      .72       Number or percent of entering students by type of handicap
98      .73       Number or percent of currently enrolled students by type of handicap
74      .64       Number or percent of entering students by types of developmental or remedial assistance desired
86      .61       Number or percent of currently enrolled students by types of developmental or remedial assistance desired
92      .51       Number or percent of currently enrolled students by number of hours with failing grade

PAGE 85

achievement and by academic skills level as measured by local instruments. For currently enrolled students only, the information included the number or percent of students by number of hours with failing grade. Based upon the content of these program characteristics, factor 4 was interpreted as involving the identification of any physical or cognitive needs of students relevant to their performance in their selected programs. Factor 4 was identified as another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Physical and Academic Skills Needs Assessment of Enrolled Students" dimension.

The program characteristics with a loading of .50 or greater on factor 5 are listed in Table 11. These program characteristics involved ratings of various aspects of a program by students who have completed a program or who are currently enrolled in a program. The aspects of a program to be rated by program completers included program staff, program facilities and equipment, program instructional strategies, program administration, and program curriculum. Also included were ratings of a

Table 11
Program Characteristics With .50 or Greater Loadings on Factor 5
[The characteristic numbers and loadings are not legible in the source.]
Ratings of program staff by program completers
Ratings of program facilities/equipment by program completers
Ratings of program instructional strategies by program completers
Ratings of program administrators by program completers
Ratings of program curriculum by program completers
Ratings of program staff by currently enrolled students

PAGE 86

program's staff by currently enrolled students. Based upon the content of these program characteristics, factor 5 was interpreted as involving student ratings, primarily ratings by program completers, of various aspects of a program. Factor 5 was identified as another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Student Ratings of Program" dimension.

The program characteristics with loadings of .50 or greater on factor 6 are listed in Table 12. These program characteristics concerned information on the quantity of students completing a program and the average time taken for completion, the number or percent of those completing a program who take state board or licensure exams, the number passing those exams, and the type of license, certificate, or registration received. Based upon the content of these program characteristics, factor 6 was interpreted as involving measures of the quantitative output of a program and certain student follow-up information. Factor 6 was identified as another common dimension underlying the ratings of the 108

Table 12
Program Characteristics With .50 or Greater Loadings on Factor 6

Number  Loadings  Program Characteristics
4       .59       Number or percent of students completing a program
83      .53       Number or percent of program completers by average time taken for completion of a program
40      .77       Number or percent of program completers taking state board or licensure exams
12      .73       Number or percent of program completers passing state board or licensure exams
35      .73       Number or percent of program completers by type of license, certificate, or registration received

PAGE 87

program characteristics and was labeled the "Program Student Output" dimension.

The program characteristics with loadings of .50 or greater on factor 7 are listed in Table 13. These program characteristics concerned various attributes of both the full-time and part-time faculty or staff in a program. The attributes included degrees held, total years taught or served, years taught or served in a specific program, and type of certification or rank held. Based upon the content of these program characteristics, factor 7 was interpreted as involving indicators of the level of preparedness of faculty or staff serving in a program. Factor 7 was identified as another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Faculty/Staff Preparedness" dimension.

Table 13
Program Characteristics With .50 or Greater Loadings on Factor 7

Number  Loadings  Program Characteristics
9       .61       Number or percent of full-time faculty/staff by degrees held
33      .51       Number or percent of part-time faculty/staff by degrees held
24      .71       Number or percent of full-time faculty/staff by years taught or served
85      .62       Number or percent of part-time faculty/staff by years taught or served
32      .71       Number or percent of full-time faculty/staff by length of service in a program
90      .61       Number or percent of part-time faculty/staff by length of service in a program
58      .66       Number or percent of full-time faculty/staff by certification or rank held
103     .57       Number or percent of part-time faculty/staff by certification or rank held

PAGE 88

The program characteristics with loadings of .50 or greater on factor 8 are listed in Table 14. These program characteristics involved ratings of various aspects of a program by a program's faculty or staff. The aspects of a program to be rated included instructional strategies, facilities and equipment, staff, curriculum, administration, and support services. Based upon the content of these program characteristics, factor 8 was interpreted as involving ratings of a program by a program's faculty or staff. Factor 8 was identified as another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Faculty/Staff Program Ratings" dimension.

The program characteristics with loadings of .50 or greater on factor 9 are listed in Table 15. These program characteristics included the number or types of changes in a program as a result of program evaluations or accreditation studies; ratings of a program by certification boards or accreditation agencies; level of demand for a program in the college's service area, by students, and in the college's state; and clearly stated objectives for a program. Based upon the content of

Table 14
Program Characteristics With .50 or Greater Loadings on Factor 8

Number  Loadings  Program Characteristics
23      .77       Ratings of program instructional strategies by faculty/staff
18      .75       Ratings of program facilities/equipment by faculty/staff
[The numbers and loadings for the remaining characteristics are not legible in the source.]
                  Ratings of program staff by faculty/staff
                  Ratings of a program curriculum by faculty/staff
                  Ratings of program administration by faculty/staff
                  Ratings of support services by faculty/staff

PAGE 89

Table 15
Program Characteristics With .50 or Greater Loadings on Factor 9

Number  Loadings  Program Characteristics
64      .64       Number/types of changes as a result of program evaluations
84      .64       Number/types of changes as a result of accreditation studies
54      .55       Ratings by certification boards
45      .58       Ratings by accreditation agencies
8       .61       Level of demand for program or service in a college's service area
14      .64       Level of demand for program or service by students
62      .50       Level of demand for program or service in college's state
5       .51       Clearly stated program objectives

these program characteristics, factor 9 was interpreted as involving the responsiveness of a program to program evaluations, certification boards, accreditation agencies, the community it serves, the students it serves, and the state it serves. Although program objectives are not themselves an object of a program's responsiveness, they are clearly related to assessing that responsiveness. Factor 9 was identified as another common dimension underlying the ratings of the 108 program characteristics and was labeled the "Program Responsiveness" dimension.

The factor analysis has resulted in the identification of a 10-factor structure with nine interpretable factors that remained relatively stable across several rotations and for the two groups of respondents (N = 315 and N = 450). The identified factor structure has been interpreted as representing the underlying dimensions common to the ratings of the 108 program characteristics. Using the content of the program characteristics that loaded .50 or greater on the factors, each of the

PAGE 90

nine dimensions has been described and labeled. The labels have been created to reflect the content of the program characteristics loading .50 or greater on the factor representing a dimension. The following labels have been created for the nine dimensions:

Resources Usage
Student Ratings of Support Services
Faculty/Staff Instructional Productivity
Physical and Academic Skills Needs Assessment of Enrolled Students
Student Ratings of Program
Program Student Output
Faculty/Staff Preparedness
Faculty/Staff Program Ratings
Program Responsiveness

In accordance with the evaluation theory developed by Stufflebeam et al. (1971), the program characteristics that have been identified as included in these dimensions were delineated in interaction with the administrators making program quality-evaluation decisions in Florida public community/junior colleges. These program characteristics were rated by the administrators as the ones most highly useful in making program quality-evaluation decisions. According to the results of this study, the data represented by these program characteristics are those data that should be collected, organized, and analyzed for the purpose of providing information useful to the administrators in program quality-evaluation decision making in Florida public community/junior colleges. The results of the factor analysis performed in this study have demonstrated that there are nine common dimensions that should be used to organize those data for presentation of information to administrators involved in program quality-evaluation decision making. As developed in the theoretical rationale for this study, based on the theory of evaluation developed by Stufflebeam et al. (1971), the items of information

PAGE 91

identified reflected those aspects of the aggregate value system of these administrators that are relevant to program quality-evaluation decision making and the underlying dimensions of those items reflect the dimensions of the aggregate value system that are relevant to this decision situation. Therefore, the utilization of the nine identified common dimensions to organize the relevant data should result in an information format that these administrators should find most useful, since the format should approximate the dimensions of those aspects of the aggregate value system that are common to these administrators and that are being used in making program quality-evaluation decisions. Any individual administrator should find such a format more or less useful to the degree that the relevant dimensions of his value system are reflected in the aggregate value system represented in the nine dimensions. It should be noted that these nine dimensions are dimensions representing the parameters of the information an administrator is most likely to find useful in making program quality-evaluation decisions. It should be understood that the information these dimensions reflect might be positively or negatively valued by an administrator and in varying degrees in relation to assessing a program. Since quality is a value judgment and not an attribute or characteristic of a program, these nine common dimensions are the dimensions of an aggregate value system used by administrators in making program quality-evaluation decisions. They should not be interpreted as dimensions of quality. The identification of these nine dimensions completed the analysis required for resolving the first aspect of the problem with which this study was concerned: to determine any underlying dimensions of the multiple items of information rated as highly useful in program quality-

PAGE 92

evaluation decision making by administrators involved in such decision making in Florida public community/junior colleges.

In the next section of this chapter, the results are presented of a comparison of the mean factor scores of the administrators classified first by program areas in relation to which they had major administrative responsibilities and then by administrative areas as reflected in their position titles. The following results reflect an attempt to determine whether program areas or administrative areas differed significantly in their emphasis on any of the nine dimensions, in order to refine the description of how the information included in these dimensions might be formatted by program or administrative area.

Factor Score Comparisons

Program Areas

Using the selected factor structure, factor scores were computed for the 450 respondents using the regression method in the SAS factor procedure and the SAS score procedure. Mean factor scores were calculated for the respondents grouped according to program area. Included in this analysis were those respondents whose position title indicated that they had major responsibility in one of the five program areas common to most community colleges in Florida: the Advanced and Professional, Occupational, Developmental, Community Instructional Services, and Student Services program areas. Not all the administrators who participated in this study had major responsibility in a specific program area. The position codes used to classify the administrators included in each program area are listed in Appendix A. Position codes, associated titles, and frequency of the position codes are in Appendix D. The program areas, the number of respondents classified in each program area, and the percentage of all respondents that this represents are given in Table 16.
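The regression method of computing factor scores, mentioned in the preceding paragraph, weights each respondent's standardized ratings by the inverse of the item intercorrelation matrix multiplied by the factor loadings. The study obtained these scores with the SAS factor and score procedures; the code below is only an illustrative sketch, and its ratings and loadings are randomly generated stand-ins.

# A minimal sketch of factor scores by the regression method: the scoring
# weights are R^(-1) * Lambda (inverse correlation matrix times the factor
# loadings), and the scores are standardized ratings times those weights.
import numpy as np

rng = np.random.default_rng(3)
ratings = rng.integers(1, 5, size=(450, 108)).astype(float)    # hypothetical usefulness ratings
loadings = rng.normal(scale=0.4, size=(108, 10))               # hypothetical rotated factor loadings

z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)     # standardize each characteristic
R = np.corrcoef(ratings, rowvar=False)                         # item intercorrelation matrix
weights = np.linalg.solve(R, loadings)                         # regression-method scoring weights
factor_scores = z @ weights                                    # one score per respondent per factor

print(factor_scores.shape)                                     # (450, 10)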

PAGE 93

Table 16
Number of Respondents Per Program Area and Corresponding Percentages of All Respondents (N=450)

Program Areas                        Number of Respondents   Percentage of N
Advanced and Professional                    65                   14.4
Occupational                                 83                   18.4
Developmental                                 5                    1.1
Community Instructional Services             21                    4.7
Student Services                             88                   19.6
TOTAL                                       262                   58.2

Although the number of administrators with primary responsibility in the Developmental Program Area was small (N = 5), they represented five different colleges. According to the list of administrators with responsibility for compensatory/developmental education in the 1981-82 Directory of Florida Community Colleges (Division of Community Colleges, 1981a, p. 71), there were very few position titles reflecting primary responsibility in this program area.

The mean factor scores and standard deviations for the program areas are presented in Table 17. It should be recalled that all of the program characteristics included in this study were rated as highly useful in program quality-evaluation decision making. Therefore, the factor scores indicated the relative emphasis placed upon the program characteristics with relatively greater loadings on a factor by the administrators classified in a program area. Since the rating scale was 1 (program characteristics

PAGE 94

Table 17
Mean Factor Scores and Standard Deviations for the Program Areas
[Tabular values are not legible in the source.]
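A Table 17-style summary, mean factor scores and standard deviations within each program area, can be produced by grouping the respondents' factor scores, as sketched below. The factor scores and the program-area assignments in the sketch are invented; the actual values appeared in Table 17 of the original document.

# A minimal sketch of mean factor scores and standard deviations per program
# area.  The factor scores and the program-area assignments are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
factor_scores = rng.normal(size=(450, 10))          # stand-in for the computed factor scores
areas = ["Advanced and Professional", "Occupational", "Developmental",
         "Community Instructional Services", "Student Services", "No specific program area"]

scores = pd.DataFrame(factor_scores, columns=[f"factor_{j}" for j in range(1, 11)])
scores["program_area"] = rng.choice(areas, size=len(scores))

summary = scores.groupby("program_area").agg(["mean", "std"]).round(3)
print(summary)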

PAGE 95

85 essential to quality-evaluation decision making) to 4 (program characteristics of little or no use in quality-evaluation decision making), a low factor score indicated that administrators included in the program area rated the program characteristics with relatively greater loadings on a factor as relatively more highly useful in program quality-evaluation decision making and a high factor score indicated that they rated them relatively less highly useful. The results of testing for significant differences in mean factor scores between program areas for all factors are presented in Appendix L. As indicated in the description of the methodology for this study, an analysis of variance prior to performing the t tests was inappropriate due to unequal variances among some of the program area classifications. The Bonferroni correction for multiple t tests was applied to the obtained t statistics. For factors 1, 3, and 9, there were no significant differences in mean factor scores between any of the program areas (Appendix L). For factor 1, the Resources Usage dimension, the mean factor scores ranged from -.072 for the Developmental Program Area to .487 for Community Instructional Services (Table 17). For factor 3, the Faculty/Staff Instructional Productivity dimension, the mean factor scores ranged from -.341 for the Developmental Program Area to .231 for Community Instructional Services. For factor 9, the Program Responsiveness dimension, the mean factor scores ranged from -.142 for the Occupational Program Area to .500 for the Developmental Program Area. These results indicated that the administrators classified into the five program areas did not differ significantly in their emphasis on these three dimensions: Resources Usage, Faculty/Staff Instructional Productivity, and Program Responsiveness.

PAGE 96

For factors 2 and 8, there were significant differences in mean factor scores between Student Services and all other program areas except the Developmental Program Area (Appendix L). For factor 2, the Student Ratings of Support Services dimension, the mean factor scores ranged from -.480 for Student Services to .487 for Community Instructional Services (Table 17). For factor 8, the Faculty/Staff Program Ratings dimension, the mean factor scores ranged from .410 for Student Services to -.350 for the Developmental Program Area (Table 17). These results indicated that the administrators classified in Student Services emphasized the Student Ratings of Support Services dimension significantly more than did all other program areas except the Developmental Program Area and emphasized the Faculty/Staff Program Ratings dimension significantly less than did all other program areas except the Developmental Program Area. Also, the results indicated that the other program areas did not differ significantly in their emphasis on these dimensions. It should be recalled that the number of administrators classified in the Developmental Program Area was relatively small (N = 5), which influenced the tests for significant differences in mean factor scores.

For factor 4, there were significant differences in mean factor scores between Community Instructional Services and all other program areas except the Developmental Program Area (Appendix L). For factor 4, the Physical and Academic Skills Needs Assessment of Enrolled Students dimension, the mean factor scores ranged from -.133 for the Advanced and Professional Program Area to .952 for Community Instructional Services (Table 17). These results indicated that the administrators classified in Community Instructional Services emphasized the Physical and Academic Skills Needs Assessment of Enrolled Students dimension significantly

PAGE 97

87 less than did all other program areas except the Developmental Program Area. Also, the results indicated that the other program areas did not differ significantly in their emphasis on this dimension. There were significant differences in mean factor scores between the Occupational Program Area and the Advanced and Professional and the Student Services program areas on factor 5 (Appendix L). For this factor, the Student Ratings of Program dimension, the mean factor scores ranged from -.435 for the Developmental Program Area to .219 for Community Instructional Services (Table 17). The mean factor score for the Occupational Program Area was .143 (Table 17). These results indicated that the administrators classified in the Occupational Program Area emphasized the Student Ratings of Program dimension significantly less than did the Advanced and Professional or the Student Services program areas. Also, the results indicated that the Occupational Program Area did not differ significantly from Community Instructional Services and the Developmental Program Area in emphasis on this dimension and that program areas other than the Occupational Program Area did not differ significantly in their emphasis on this dimension. For factor 6, there were significant differences in mean factor scores between the Developmental Program Area and all other program areas except Community Instructional Services (Appendix L). For this factor, the Program Student Output dimension, the mean factor scores ranged from -.579 for the Occupational Program Area to 1.619 for the Developmental Program Area (Table 17). These results indicated that the administrators classified in the Developmental Program Area emphasized the Program Student Output dimension significantly less than did all other program areas except Community Instructional Services. Also, the results

PAGE 98

indicated that the other program areas did not differ significantly in their emphasis on this dimension.

For the remaining factor, factor 7, there were significant differences in the mean factor scores between the Advanced and Professional Program Area and the Occupational and Community Instructional Services program areas (Appendix L). For this factor, the Faculty/Staff Preparedness dimension, the mean factor scores ranged from -.390 for the Advanced and Professional Program Area to .387 for Community Instructional Services (Table 17). These results indicated that the administrators classified in the Advanced and Professional Program Area emphasized the Faculty/Staff Preparedness dimension significantly more than did the Occupational and the Community Instructional Services program areas. Also, the results indicated that the Advanced and Professional Program Area did not differ significantly from the remaining two program areas (the Developmental and Student Services program areas) in emphasis on this dimension and that the program areas other than the Advanced and Professional Program Area did not differ significantly in their emphasis on this dimension.

As indicated in the preceding section of this chapter, the utilization of the nine identified common dimensions to organize the 108 program characteristics identified as most useful in program quality-evaluation decision making should result in increasing the probability that the format of the presented information will be perceived as credible and useful by the administrator involved in the decision situation. Examination of the differences in mean factor scores for the five program areas was done to determine if there were any statistically significant differences that might be useful in tailoring by program area the format of information presented to administrators in the five program areas for use in program quality-evaluation decision making.

PAGE 99

89 The results presented in this section indicated that the administrators classified into the five program areas did not differ significantly in their emphasis on three dimensions: Resources Usage, Faculty/Staff Instructional Productivity, and Program Responsiveness. For the Student Ratings of Support Services dimension, the results indicated that Student Services emphasized this dimension significantly more than did all other program areas except the Developmental Program Area. Community Instructional Services emphasized the Physical and Academic Skills Needs Assessment of Enrolled Students dimension significantly less than did all other program areas except the Developmental Program Area. The Student Ratings of Program dimension was emphasized significantly less by the Occupational Program Area than by the Advanced and Professional and Student Services program areas. The Developmental Program Area placed significantly less emphasis on the Program Student Output dimension than did all other program areas except Community Instructional Services. For the Faculty/Staff Preparedness dimension, the Advanced and Professional Program Area emphasized this dimension significantly more than did the Occupational and Community Instructional Services program areas. The Faculty/Staff Program Ratings dimension received significantly less emphasis by Student Services than by all other program areas except the Developmental Program Area. These results should be useful in tailoring by program area the organization of information for presentation to administrators involved in quality-evaluation decision making in a specific program area. For example, the results indicated that the information included in the Faculty/Staff Preparedness dimension should be emphasized when presenting information to administrators with major responsibilities in the Advanced

PAGE 100

and Professional Program Area to increase the probability that administrators in that program area will find the information credible and useful in program quality-evaluation decision making. The nature of this emphasis, although not an objective of this study, might include the presentation of more information or more detailed information or some type of weighting of the information related to this dimension. Similarly, these results may be used to tailor the presentation of information for program quality-evaluation decision making to administrators in other specific program areas.

The results presented in this section applied only to significant differences among the mean factor scores for administrators classified within the specified program areas. In the next section of this chapter, the results are presented for comparison of the mean factor scores between administrators classified within six administrative areas as defined in Chapter III.

Administrative Areas

Using the selected factor structure, mean factor scores were calculated for the administrators classified within the six administrative areas defined in Chapter III. The six administrative areas were General Administration, Academic Affairs, Student Affairs, Community Instructional Services, Business Affairs, and Presidents. A description of the administrative areas and the position codes used to classify the administrators included in each administrative area are given in Appendix A. The administrative areas, the number of respondents in each administrative area, and the percentage of all respondents that this represents are given in Table 18.

PAGE 101

Table 18
Number of Respondents Per Administrative Area and Corresponding Percentages of All Respondents
[Tabular values are not legible in the source.]

PAGE 102

The mean factor scores and standard deviations for the administrative areas are presented in Table 19. It should be recalled that a low factor score indicated that administrators included in the administrative area rated the program characteristics with relatively greater loadings on a factor as relatively more highly useful in program quality-evaluation decision making and that a high factor score indicated that they rated them relatively less highly useful. The results of testing for significant differences in mean factor scores between administrative areas for all factors are presented in Appendix M. As indicated in the discussion of the results for the analysis by program areas, an analysis of variance prior to performing the t tests was inappropriate due to unequal variances among some of the administrative area classifications. The Bonferroni correction for multiple t tests was applied to the obtained t statistics.

For factors 3, 5, 6, 7, and 9, there were no significant differences in mean factor scores between any of the administrative areas (Appendix M). For factor 3, the Faculty/Staff Instructional Productivity dimension, the mean factor scores ranged from -.093 for the Presidents to .231 for Community Instructional Services (Table 19). The mean factor scores on factor 5, the Student Ratings of Program dimension, ranged from -.090 for the Presidents to .219 for Community Instructional Services (Table 19). For factor 6, the Program Student Output dimension, the mean factor scores ranged from -.418 for the Presidents to .340 for Community Instructional Services (Table 19). The mean factor scores on factor 7, the Faculty/Staff Preparedness dimension, ranged from -.112 for Student Affairs to .600 for the Presidents (Table 19). For factor 9, the Program Responsiveness dimension, the factor scores ranged from -.170 for General

PAGE 103

Table 19
Mean Factor Scores and Standard Deviations for the Administrative Areas
[Tabular values are not legible in the source.]

PAGE 104

Administration to .435 for Business Affairs (Table 19). These results indicated that the administrators classified into the six administrative areas did not differ significantly in their emphasis on these five dimensions: Faculty/Staff Instructional Productivity, Student Ratings of Program, Program Student Output, Faculty/Staff Preparedness, and Program Responsiveness.

Similar to the results for the program areas, there were significant differences in mean factor scores on factor 2 between Student Affairs and all of the other administrative areas except the Presidents (Appendix M). Also, there were significant differences in mean factor scores on factor 8 between Student Affairs and the administrative areas of Academic Affairs and Community Instructional Services (Appendix M). For factor 2, the Student Ratings of Support Services dimension, the mean factor scores ranged from -.480 for Student Affairs to .435 for Community Instructional Services (Table 19). For the administrative areas of General Administration, Academic Affairs, and Business Affairs, the mean factor scores were -.004, .142, and .236, respectively (Table 19). For factor 8, the Faculty/Staff Program Ratings dimension, the mean factor scores ranged from -.396 for the Presidents to .410 for Student Affairs (Table 19). For the administrative areas of Academic Affairs and Community Instructional Services, the mean factor scores were -.154 and -.301, respectively (Table 19). These results indicated that administrators classified into the administrative area of Student Affairs gave significantly more emphasis to the Student Ratings of Support Services dimension than did all the other administrative areas except the Presidents and significantly less emphasis to the Faculty/Staff Program Ratings dimension than did two of the other administrative areas. Also, the results indicated that Student Affairs

PAGE 105

95 did not differ significantly from the Presidents in emphasis on the Student Ratings of Support Services dimension and did not differ significantly from either the Presidents or General Administration or Business Affairs on the Faculty/Staff Program Ratings dimension. Also, the results indicated that the administrative areas other than Student Affairs did not differ significantly in emphasis on these two dimensions. For factor 1, there was a significant difference in the mean factor score between the administrative areas of Community Instructional Services and Business Affairs (Appendix M). Also, similar to the results for the program areas, there were significant differences in mean factor scores on factor 4 between the administrative area of Community Instructional Services and the administrative areas of General Administration, Academic Affairs, and Student Affairs (Appendix M). For factor 1, the Resources Usage dimension, the mean factor scores ranged from -.356 for the Presidents to .487 for Community Instructional Services (Table 19). The mean factor score for Business Affairs was -.178 (Table 19). The mean factor scores on factor 4, the Physical and Academic Skills Needs Assessment of Enrolled Students dimension, ranged from -.109 for Student Affairs to .952 for Community Instructional Services (Table 19). For the administrative areas of General Administration and Academic Affairs, the mean factor scores were .050 and -.079, respectively (Table 19). These results indicated that administrators classified into the administrative area of Community Instructional Services gave significantly less emphasis to the Resources Usage dimension than did administrators classified into the administrative area of Business Affairs and significantly less emphasis to the Physical and Academic Skills Needs Assessment of Enrolled Students dimension than did administrators classified into the administrative areas

PAGE 106

of General Administration, Academic Affairs, and Student Affairs. The results indicated that administrators within Community Instructional Services did not differ significantly from any of the administrative areas other than Business Affairs in emphasis on the Resources Usage dimension and did not differ significantly from Business Affairs or the Presidents in emphasis on the Physical and Academic Skills Needs Assessment of Enrolled Students dimension. Also, the results indicated that administrators within administrative areas other than Community Instructional Services did not differ significantly in emphasis on these dimensions.

As indicated previously in this chapter, the utilization of the nine identified common dimensions to organize the 108 program characteristics identified as most useful in program quality-evaluation decision making should result in increasing the probability that the format of the presented information will be perceived as credible and useful by the administrator involved in the decision situation. Examination of the differences in mean factor scores for the six administrative areas was done to determine whether there were any statistically significant differences that might be useful in tailoring by administrative area the format of information presented to administrators in the six administrative areas for use in program quality-evaluation decision making.

The results presented in this section indicated that the administrators classified into the six administrative areas did not differ significantly in their emphasis on five dimensions: Faculty/Staff Instructional Productivity, Student Ratings of Program, Program Student Output, Faculty/Staff Preparedness, and Program Responsiveness. For the Resources Usage dimension, the results indicated that Business Affairs emphasized this

PAGE 107

dimension significantly more than did Community Instructional Services. Student Affairs emphasized the Student Ratings of Support Services dimension significantly more than did all other administrative areas except the Presidents. For the Physical and Academic Skills Needs Assessment of Enrolled Students dimension, the results indicated that Community Instructional Services emphasized this dimension significantly less than did all other administrative areas except Business Affairs and the Presidents. The Faculty/Staff Program Ratings dimension received significantly less emphasis by Student Affairs than by the administrative areas of Academic Affairs and Community Instructional Services.

These results should be useful in tailoring by administrative area the organization of information for presentation to administrators involved in quality-evaluation decision making. For example, the results indicated that the information included in the Student Ratings of Support Services dimension should be given more emphasis and the information included in the Faculty/Staff Program Ratings dimension should be given less emphasis when presenting information to administrators in Student Affairs. As indicated previously in this chapter, the nature of this emphasis might include the presentation of more or less information or might involve some type of weighting of the information related to these dimensions. Similarly, these results might be used to tailor the presentation of information for program quality-evaluation decision making to administrators in other specific administrative areas.

Summary

The results presented in this chapter have demonstrated that there were underlying dimensions of the multiple items of information rated as

PAGE 108

highly useful in program quality-evaluation decision making by administrators in Florida public community/junior colleges. Nine such dimensions were identified, discussed, and labeled. Through comparison of the mean factor scores of administrators classified by program area and by administrative area, it was demonstrated that there were statistically significant differences in the degree of emphasis on some of the nine identified dimensions between some of the program area classifications and between some of the administrative area classifications. These differences were identified and discussed. Based upon these results, guidelines are recommended in the next chapter for organizing the identified multiple indicators of quality to increase the probability that administrators will find the presented information credible and useful in making program quality-evaluation decisions in Florida public community/junior colleges.

PAGE 109

CHAPTER V
SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FURTHER STUDY

Summary

The purpose of this study was the identification of any underlying dimensions within the multiple quality indicators rated by administrators in Florida public community/junior colleges as highly useful in making program quality-evaluation decisions. The research questions were: (1) what is the "best" factor structure for the usefulness ratings? and (2) for the identified "best" factor structure, are there significant differences in the mean factor scores between classifications of respondents by program area and between classifications of respondents by administrative area? Additionally, the results of the study were to be used to develop guidelines for organizing the presentation of the information represented in the identified multiple indicators of quality to increase the probability that the organization of the information will be perceived as credible and useful by the administrators involved in making program quality-evaluation decisions in Florida public community/junior colleges.

Of 631 administrators identified to participate in the study, 450 responded by rating 434 items on a survey questionnaire (Appendix C) for their degree of usefulness in program quality-evaluation decision making. Using the mean responses, Pearson product-moment correlation coefficients were calculated for the intercorrelations of the 108 program characteristics

PAGE 110

rated as highly useful. The correlation matrices were factor analyzed and the "best" factor structure determined. Factor scores were calculated for the respondents by use of the factor loadings on the identified factors. Comparisons were made of mean factor scores between respondents classified by program area and between respondents classified by administrative area to determine any significant differences. Summary findings are presented for each research question separately. From the results, conclusions are drawn regarding guidelines for organizing the presentation of the information represented in the program characteristics to increase the probability that the organization of information will be perceived as useful by administrators involved in making program quality-evaluation decisions in Florida public community/junior colleges. Recommendations are made regarding needs for further research.

The analyses related to the first research question resulted in the identification of a factor structure that contained nine interpretable factors representing the underlying dimensions of the ratings of the 108 program characteristics rated as most highly useful in program quality-evaluation decision making. Based upon these nine factors, the following nine dimensions were identified, discussed, and labeled:

Resources Usage
Student Ratings of Support Services
Faculty/Staff Instructional Productivity
Physical and Academic Skills Needs Assessment of Enrolled Students
Student Ratings of Program
Program Student Output
Faculty/Staff Preparedness
Faculty/Staff Program Ratings
Program Responsiveness

The analyses related to the second research question resulted in the determination of no significant differences in the mean factor scores

PAGE 111

between classifications of administrators by program or administrative area on the Faculty/Staff Instructional Productivity and the Program Responsiveness dimensions. Additionally, there were no significant differences in mean factor scores between program areas for the Resources Usage dimension and between administrative areas on the Student Ratings of Program, Program Student Output, or Faculty/Staff Preparedness dimensions. Significant differences in the mean factor scores between some classifications of respondents by program area and between some classifications of respondents by administrative area were found on the following dimensions:

Resources Usage Dimension
Community Instructional Services gave significantly less emphasis to this dimension than did Business Affairs.

Student Ratings of Support Services Dimension
Student Services gave significantly more emphasis to this dimension than did all other program or administrative areas except the Developmental Program Area and the Presidents.

Physical and Academic Skills Needs Assessment of Enrolled Students Dimension
Community Instructional Services gave significantly less emphasis to this dimension than did all other program or administrative areas except the Developmental Program Area and the administrative areas of Business Affairs and the Presidents.

Student Ratings of Program Dimension
The Occupational Program Area gave significantly less emphasis to this dimension than did the Advanced and Professional and Student Services program areas.

PAGE 112

Program Student Output Dimension
The Developmental Program Area gave significantly less emphasis to this dimension than did all other program areas except Community Instructional Services.

Faculty/Staff Preparedness Dimension
The Advanced and Professional Program Area gave significantly more emphasis to this dimension than did the Occupational and Community Instructional Services program areas.

Faculty/Staff Program Ratings Dimension
Student Services gave significantly less emphasis to this dimension than did the administrative areas of Academic Affairs and Community Instructional Services and significantly less emphasis than did all other program areas except the Developmental Program Area.

Conclusions

From the results of this study, the following conclusions were drawn. It was concluded that there were nine common dimensions underlying the ratings of the 108 program characteristics rated as most highly useful in program quality-evaluation decision making. It was concluded that these nine dimensions involved: (1) fiscal, physical, and human resources; (2) student ratings of support services including student services; (3) information on the instructional productivity of faculty; (4) the identification of any physical or cognitive needs of students relevant to their performance in their selected programs; (5) ratings of specified aspects of a program by students; (6) indicators of the quantitative output of a program including certain student follow-up information; (7) specified attributes of both full-time

PAGE 113

and part-time faculty; (8) ratings of specified aspects of a program by faculty; and (9) indicators of the responsiveness of a program to program evaluations, certification boards, accreditation agencies, the community it serves, the students it serves, and the state it serves, including specified program objectives.

It was concluded that these nine dimensions represented the common dimensions of those aspects of an aggregate value system used by the administrators included in this study when making program quality-evaluation decisions. It was concluded that, since quality is a value judgment and not an attribute or characteristic of a program, these nine common dimensions are the dimensions of program quality-evaluation decision making in Florida's public community/junior colleges.

It was concluded that the utilization of these nine common dimensions to organize the information represented by the 108 program characteristics for presentation to administrators involved in program quality-evaluation decision making would increase the probability that the administrators will find the format of the information to be credible and helpful in that decision situation. It was concluded that there were significant differences in the degree of emphasis on specified dimensions between administrators classified within some program areas and between administrators classified within some administrative areas that should be used for tailoring the organization of information for presentation to the administrators in these specified areas for use in program quality-evaluation decision making.

PAGE 114

104 Based upon these conclusions, the following guidelines have been formulated for organizing the information represented in the 108 program characteristics for presentation to administrators involved in program quality-evaluation decision making. The purpose of these guidelines is to assist the person or persons responsible for organizing information for presentation to administrators involved in program quality-evaluation decision making in Florida public community/junior colleges. First, the nine dimensions could be used as guides for selecting the information to include in proposals to administrators for program quality-evaluation information systems. Inclusion of information from each of the nine dimensions should result in increasing the probability of proposal approval. In general, the inclusion of those items with greater loadings on the factor representing a dimension should increase the probability of meeting an administrator's information requirements since those are the items with the higher levels of agreement in ratings by administrators for their usefulness in program quality-evaluation decision making. Of course, since these dimensions reflect an aggregate value system, any individual administrator should value information representing these dimensions for usefulness in program quality-evaluation decision making to the degree that the dimensions of his value system relevant to the decision situation are reflected in the aggregate value system. Second, the nine dimensions could be used as guides for organizing information for presentation to administrators for use in program quality-evaluation decision making. Organizing the information into the nine dimensions should increase the probability that an administrator will find the presented information credible and useful in program quality-

Second, the nine dimensions could be used as guides for organizing information for presentation to administrators for use in program quality-evaluation decision making. Organizing the information into the nine dimensions should increase the probability that an administrator will find the presented information credible and useful in program quality-evaluation decision making. For reports to be used in program quality-evaluation decision making, the information items in the report should be grouped as follows:

1. information relating to the fiscal, physical, and human resources used in a program;

2. information relating to student ratings of support services, including student services, relating to a program;

3. information relating to the instructional productivity of faculty in a program;

4. information relating to any physical or cognitive needs of students relevant to their performance in a program;

5. information relating to ratings of a program by students;

6. information relating to the quantitative student output of a program, including student follow-up information;

7. information relating to the attributes of full-time and part-time faculty in a program;

8. information relating to ratings of a program by faculty; and

9. information relating to the responsiveness of a program to program evaluations, certification boards, accreditation agencies, the community it serves, the students it serves, and the state it serves, including clearly specified program objectives.

An example of a possible report format is presented in a reduced reproduction in Figure 1. The information items included in each section of the two-page report were presented in Tables 7-15 in Chapter IV. The specific headings used under the subdivisions, such as the headings under "Years in Program" or "Total Years Taught" within the Faculty Preparedness dimension, are used for illustration purposes. The actual headings would be determined locally. If a greater reduction of information were desired, the nine dimensions could be used as dimensions in a profile format. The items of information included within each dimension could be used to create indices for the dimensions. Methods of creating such indices are suggested as an objective for further research.

[Figure 1. Example of a possible two-page program quality-evaluation information report format (reduced reproduction); the graphic is not legible in the digitized copy.]
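
One simple way to turn the items grouped under a dimension into the single profile index mentioned above is to standardize each item across programs and average the standardized values. The sketch below does exactly that; the equal weighting, the z-score scaling, and the data layout are assumptions for illustration, since the study leaves the choice of index method to further research.

    import numpy as np

    def dimension_index(item_values, higher_is_better=True):
        """Average of z-scores across the items grouped under one dimension.

        `item_values` is a programs-by-items array of the raw information items
        assigned to the dimension; equal weights and z-score scaling are
        illustrative choices, not the study's prescription.
        """
        values = np.asarray(item_values, dtype=float)
        std = values.std(axis=0, ddof=1)
        z = (values - values.mean(axis=0)) / np.where(std > 0.0, std, 1.0)
        index = z.mean(axis=1)                    # one index value per program
        return index if higher_is_better else -index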

A sample format for such a profile is presented in Figure 2. Greater detail could be incorporated into the profile, as illustrated in Figure 2 for the Student Ratings dimension.

Finally, the results from the analyses related to the second research question could be used to adjust or tailor program quality-evaluation information reports or profiles for the specific program or administrative areas where significant differences in mean factor scores were identified. These results indicated that when preparing program quality-evaluation information reports for Community Instructional Services, the Resources Usage dimension and the Physical and Academic Skills Needs Assessment of Enrolled Students dimension should be deemphasized. When preparing program quality-evaluation reports for Student Services, the Student Ratings of Support Services dimension should be emphasized and the Faculty/Staff Program Ratings dimension should be deemphasized. For quality-evaluation information reports to administrators in the Occupational Program Area, the Student Ratings of Program dimension should be deemphasized. For quality-evaluation information reports to administrators in the Advanced and Professional Program Area, the Faculty/Staff Preparedness dimension should be emphasized. The nature of emphasizing or deemphasizing a dimension could consist of the order of placement of the information in the report or profile, highlighting in some manner the specific dimension to be emphasized, including more or less information related to a specific dimension, or weighting the index of a dimension for use in the profile format. The method of emphasis was not an objective of this study, only the determination that certain dimensions should receive more or less emphasis for specified program or administrative areas.
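
The emphasis adjustments just described amount to area-specific ordering or weighting of the nine dimensions. A minimal sketch of how a report generator might encode them follows; the dictionary simply restates the adjustments above, while the numeric weights and the function interface are hypothetical.

    # Area-specific emphasis adjustments restated from the text above; the
    # numeric weights (1.0 = neutral) are illustrative placeholders only.
    EMPHASIS = {
        "Community Instructional Services": {"Resources Usage": 0.5,
                                             "Needs Assessment of Enrolled Students": 0.5},
        "Student Services": {"Student Ratings of Support Services": 1.5,
                             "Faculty/Staff Program Ratings": 0.5},
        "Occupational": {"Student Ratings of Program": 0.5},
        "Advanced and Professional": {"Faculty/Staff Preparedness": 1.5},
    }

    def order_dimensions(dimensions, area):
        """Sort dimension names so emphasized ones appear first in a report."""
        weights = EMPHASIS.get(area, {})
        return sorted(dimensions, key=lambda name: -weights.get(name, 1.0))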

[Figure 2. Sample format for program quality-evaluation information profile. The profile is headed "QUALITY-EVALUATION INFORMATION PROFILE: PROGRAM A" and plots an index scale for each dimension: Resources Usage; Instructional Productivity; Faculty Preparedness; Faculty Ratings; Enrolled Students Needs Assessment; Student Ratings (with detail lines for Facilities/Equipment, Staff, Curriculum, Administration, and Instructional Strategies); Student Ratings/Support Services; Student Output; and Program Responsiveness. The graphic itself is not legible in the digitized copy.]

Recommendations for Further Study

It is recommended that this study be replicated with the same administrators to determine whether response patterns remain the same or change. If the dimensions identified in this study represent the dimensions of an aggregate value system relevant to the decision situation, then response patterns should remain relatively stable over time, since attitudes tend to remain relatively stable over time.

It is recommended that this study be replicated in other community college systems and in other types of colleges to determine whether the dimensions identified in this study are unique to program quality-evaluation decision making in Florida community colleges or can be generalized to other institutional settings.

It is recommended that the methodology used in this study to identify underlying dimensions of the ratings of information useful in program quality-evaluation decision making be used in other decision situations to identify relevant dimensions for formatting information for presentation to decision makers in those decision situations.

It is recommended that the information identified as included in the nine dimensions in this study be used in a study to determine methodologies for creating indices for the nine dimensions for use in producing program quality-evaluation information profiles.

Finally, it is recommended that additional research in program quality-evaluation decision making involve investigation of the attitudes of the administrators involved, to determine in as direct a manner as possible the dimensions of administrators' attitudes that are relevant to this particular decision situation. The creation of an instrument to assess such attitudes might be useful for quickly determining what types of information an individual administrator or a specific group of administrators value for program quality-evaluation decision making.

APPENDIX A
CLASSIFICATIONS OF RESPONDENTS USED IN DATA ANALYSIS

Program Areas

1. Advanced and Professional Program Area -- commonly referred to as university parallel, the first two years of a baccalaureate program. Included in this group were respondents classified with position codes 14-20. Position titles with associated codes and frequencies are listed in Appendix D.

2. Occupational Program Area -- or vocational-technical education, terminal certificate or degree programs preparing students for employment in a specific trade or field. Included in this group were respondents classified with position codes 21-26.

3. Community Instructional Services Program Area -- programs of short, credit or noncredit classes designed to provide enrichment for students. Included in this area were respondents classified with position code 29.

4. Developmental Program Area -- or compensatory education, designed to assist students in improving deficient basic skills necessary for program work. Included in this area were respondents classified with position code 30.

5. Student Services Program Area -- various auxiliary services provided to students facilitating their progress through one of the academic areas, including such services as counseling, student activities, admissions, financial aid, etc. Included in this group were respondents classified with position codes 32-41.

Administrative Areas

1. General Administration -- respondents with responsibilities of a general nature in the operation of the college's programs or services. Included in this area were respondents classified with position codes 2-12.

2. Academic Affairs -- respondents with responsibilities of administering one or more of the college's academic programs. Included in this area were respondents classified with position codes 13-28 and 30-31.

3. Student Affairs -- respondents with responsibilities of administering one or more of the college's student services programs. Position codes included in this area were identical to those used in the Student Services Program Area.

4. Community Instructional Services -- respondents with responsibilities of administering the college's adult and continuing education or community instructional services programs. The position code included in this area was identical to that used for the Community Instructional Services Program Area.

5. Business Affairs -- respondents associated with the operation of the business offices (budget, accounting, personnel, etc.) of the college. Included in this area were respondents classified with position codes 42-50.

6. President -- the chief executive officer of the college. Included in this area were respondents classified with position code 1.

APPENDIX B
DESCRIPTION OF IRC PROJECT METHODS AND PROCEDURES

The Problem

The problem in this project was to determine the degree of usefulness of various types of information (program characteristics) as perceived by administrators in Florida's public community colleges in making quality-evaluation decisions about programs or services offered by their colleges. An ancillary purpose was to identify similarities or differences in the perceived mean usefulness-ratings of the program characteristics for administrators according to various classifications, including:

1. The program or service area with which the responding administrator was primarily associated.

2. The administrative area within which the responding administrator held his or her position.

3. Personal characteristics of respondents, including degree level, sex, years in present position, years at present college, years in community college education, and years in education other than community college education.

4. General characteristics of the institution within which the administrators were employed, including the market region of the state where the institution was located; whether or not the institution was designated by the state as an area vocational education school; the total size of the institution in terms of FTE served; the percent of total college FTE served in the Advanced and Professional Program Area; the percent of total college FTE served in the Occupational Program Area; the percent of total college FTE served in the Developmental Program Area; and the percent of total college FTE served in the Community Instructional Services Program Area.

5. Opinions of respondents related to amount of time spent in, extent of involvement in, or level of experience in program quality-evaluation decision making, and the opinions of respondents as to their perception of the degree to which their positions are associated with each of the program areas.

A description of these classifications can be found in Appendix A. The following sections describe the design of the project, the development of the project questionnaire, the survey population, collection of the data, and analysis of the data.

Design of the Project

This project was designed to assess the perceptions of community college administrators of the usefulness of various program characteristics for program quality-evaluation decision making. The review of related literature on the decision-making model of educational evaluation indicated that the determination of what type of information is to be used in educational decision making should be the responsibility of the decision maker, not the evaluator (Stufflebeam et al., 1971; Alkin, 1969; Craven, 1975). Therefore, a survey research design was adopted for this project, and a questionnaire was developed to measure administrators' perceptions of the usefulness of various program characteristics for program quality-evaluation decision making for programs or services offered by their colleges. This research design was very similar to the design used by the Educational Testing Service to assess quality in doctoral education programs (Clark et al., 1976).

The questionnaire was organized to collect data in four areas:

1. Demographic data of respondents. These data included the respondent's name, position, college, years in present position, years at present college, years in community college education, years in education other than community college education, age, sex, and highest degree held.

2. The program perspective respondents used in rating the usefulness of the program characteristics. The perspectives were general (no specific program area in mind), advanced and professional, occupational, developmental, community instructional services, student support services, and other.

3. Usefulness ratings of the program characteristics for program quality-evaluation decision making.

4. Opinions of respondents of the amount of time spent in program quality-evaluation activities, the extent of their involvement in program quality-evaluation decision making, their perceived level of experience in program quality-evaluation, and the degree to which their position was associated with each program area.

Development of the Project Questionnaire

In making program quality-evaluation decisions, administrators may desire information related to many aspects or characteristics of a program. The questionnaire designed and used in this project contained a list of 434 program characteristics for respondents to rate for degree of usefulness in making program-quality decisions. The program characteristics rated in this study were identified by:

1. A review of evaluative criteria utilized to rate the quality of programs or institutions in various quality-evaluation studies, including those designed to identify "indicators of quality" (e.g., Banghart et al., 1978; Fotheringham, 1978) for educational programs or institutions.

2. A review of various state and federal government reports identifying different types of information currently being collected and reported. The primary source in this area was the Florida Community College Management Information System Manual (Division of Community Colleges, 1980), which contained copies of many data reporting forms, including the required data with formatting requirements, used for various state and federal reports.

From these sources a list of program characteristics was compiled. This list was submitted for review by a panel of community college management information specialists and institutional researchers consisting of IRC institutional representatives for the year 1980-81. A letter was sent to each representative with a list of the program characteristics requesting that the list be reviewed and characteristics added, deleted, or modified in relation to their potential use in program quality-evaluation decision making. In this review none of the identified characteristics were deleted, six characteristics were added, and the descriptions of various characteristics were modified. The 434 program characteristics included in the project questionnaire resulted from this process.

Using these program characteristics, a questionnaire was developed to collect the required data. The questionnaire was submitted for review to the same panel of IRC representatives utilized for refinement of the program characteristics. The panel evaluated the questionnaire and provided input in the following areas:

1. Refinement of the questionnaire's directions.
2. Refinement of the statements describing the program characteristics.
3. Refinement of the organization of the characteristics.
4. Refinement of the rating scale.
5. Refinement of the questionnaire format.
6. Determination of the time needed for questionnaire completion.

This process resulted in various modifications of the questionnaire, which was sent out again for review by the panel. The final form of the questionnaire resulted from this second review. A copy of the questionnaire can be found in Appendix C.

The questionnaire consisted of five sections. Section one requested respondents to print their name, current position, and name of college. Section two described the purpose of the project, the organization of the questionnaire, and the directions for rating the program characteristics. The program characteristics were organized into four categories concerning information about students, faculty/staff, costs/resources, and general information. Examples describing the rating process were provided at the beginning of each category. Respondents were requested to add any program characteristics which they thought were of use but which were not included in the questionnaire.

Section two also contained a description of the four-point rating scale used to rate the program characteristics for degree of usefulness in program quality-evaluation decision making. The scale was: (1) essential, (2) very useful, (3) some usefulness, and (4) little or no usefulness. Respondents were requested to rate any program characteristics they perceived as not applicable to their respective program or service area with a "4." The rating scale was printed on a loose insert providing respondents a quick reference when completing the questionnaire (Appendix C).

Section three requested that the respondents indicate the program perspective they would use in rating the program characteristics. Six choices of perspectives were listed: general, advanced and professional, occupational, developmental, community instructional services, and student support services. An "other" choice was provided for respondents to specify a perspective different from those listed. Following section three, respondents were requested to proceed in rating the characteristics.

Section four consisted of a series of questions designed to collect demographic data on the respondents. These data included years in present position, years at present college, years in community college education, years in education other than community college education, birthdate, sex, and highest degree held.

The fifth section of the questionnaire requested respondents to indicate their opinion of the degree to which their position was associated with each of the program areas, the amount of time they spent in program quality-evaluation decision making, and their level of experience in program quality-evaluation. Also, respondents were requested to add any comments regarding the design of the project, the questionnaire, or the program quality-evaluation process at their college.

Collection of Data

During the development of the questionnaire, the review panel was asked to approximate the amount of time needed for its completion. The consensus of the review panel was that approximately 45 minutes to one hour was needed. Realizing the difficulty of securing the participation of administrators in a project that required such a substantial investment of their time, procedures for the collection of data were used that would increase the probability of obtaining their participation.

To gain publicity and support for the project, the endorsement of the Council of Presidents was requested and received. Under this endorsement, a letter was sent to each community college president describing the project and requesting that they appoint an individual at their college to serve as a project coordinator. Twenty-four of the 28 public community colleges in Florida chose to participate in the project through their appointment of project coordinators. Project coordinators were sent a letter thanking them for agreeing to serve and describing their role as project coordinator for their college.

The first task of the project coordinator was to identify, by name and position, all administrators at their college who met the criteria for participation in the project. Forms and self-addressed envelopes were included for their convenience in completing this task. When the lists of administrators were received, letters were sent to all describing the project and encouraging their participation. Packets were prepared for each participating administrator which included a cover letter, a one-page synopsis of the project, the questionnaire, and a return label addressed to their institution's project coordinator.

The second task of the project coordinators was to distribute and collect the questionnaires. Each project coordinator was sent a letter describing the distribution and collection process along with the prepared packet for each identified administrator at their college. This letter explained that the packets were to be distributed as soon as possible to the participating administrators. The participating administrators were requested to complete the questionnaires within ten days and return them to their college's project coordinator by affixing the included return label. The project coordinators were requested to allow approximately two weeks from the date of the distribution of the project questionnaires for their return and to forward to the IRC the questionnaires that had been returned by that date. With the return of the completed questionnaires, project coordinators were sent a letter thanking them for their help, requesting the return of any subsequently received questionnaires, and informing them that they were not responsible for conducting follow-up activities.

Follow-up procedures involved two steps. First, a letter was sent to those administrators from whom questionnaires had not been received requesting that they complete the questionnaires at their earliest convenience and return them as soon as possible. If this process was ineffective, a second letter was sent which included a copy of the questionnaire and a request that the administrator complete and return it as soon as possible. Each administrator completing and returning the questionnaire was sent a letter thanking them for their investment of time and effort in the project.

When received, each questionnaire was given a position code based on the reported position and an institutional code based on the reported college. These codes were used to identify the respondents for follow-up and to facilitate classification of the respondents for various analyses. The position codes used for classifying the respondents can be found in Appendix D.

Survey Population

The identification of the decision makers included in the project was the responsibility of the designated project coordinator at each participating college. Project coordinators identified, by name and position, all administrators with some instructional or student personnel services responsibility as identified on the institution's yearly personnel report (SA-1, part 3) as administrative, managerial, or professional (Division of Community Colleges, 1980, p. 10.1).

Analysis of the Data

The data were analyzed with the assistance of the SAS (Statistical Analysis System) computer system for data analysis. The mean, standard deviation, variance, range, and measures of skewness and kurtosis were calculated for each program characteristic for all respondents and for each classification of respondents described in Appendix A. Using the calculated means, the program characteristics were ranked for all respondents and for respondents in each classification. Spearman rank-order correlation coefficients were calculated for the upper quartile of program characteristics ranked by the mean usefulness ratings for all respondents and for respondents in each classification.

For all respondents and for respondents classified into the five program areas, the five administrative areas, and presidents, the program characteristics in the upper quarter of ranked mean usefulness ratings were organized into four categories as they were presented in the project questionnaire (program characteristics relating to students, faculty/staff, costs/resources, and general information). The differences or similarities in the program characteristics and in the ranks of the program characteristics contained in these groupings were discussed. For all respondents and for respondents classified into the five program areas, the program characteristics in the upper quarter of ranked mean usefulness ratings were organized into information profiles using eleven types of information for all program areas except Student Services, which required a twelfth type of information. The areas of similarity or difference in these information profiles for each of the five program areas were discussed.
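
The descriptive statistics and the Spearman rank-order comparison described above are straightforward to reproduce. The sketch below computes per-item means for two respondent groups and correlates their rankings over the upper quartile of items; the array layout and the quartile cutoff are assumptions for illustration, and the study's own analysis was of course run in SAS rather than Python.

    import numpy as np
    from scipy import stats

    def describe_and_compare(ratings_a, ratings_b, top_fraction=0.25):
        """Per-item means for two groups plus a Spearman rank-order correlation
        over the upper quartile of items ranked by overall mean usefulness.

        Lower means indicate greater usefulness on the questionnaire's 1-4 scale.
        """
        a = np.asarray(ratings_a, dtype=float)
        b = np.asarray(ratings_b, dtype=float)
        mean_a, mean_b = a.mean(axis=0), b.mean(axis=0)
        overall = np.concatenate([a, b]).mean(axis=0)
        n_top = int(round(top_fraction * overall.size))
        top_items = np.argsort(overall)[:n_top]        # most useful = lowest mean
        rho, p_value = stats.spearmanr(mean_a[top_items], mean_b[top_items])
        return mean_a, mean_b, rho, p_value

The variance, skewness, and kurtosis reported alongside the means can be obtained in the same way from scipy.stats.describe.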

APPENDIX C
QUESTIONNAIRE

PROGRAM QUALITY INDICATORS PROJECT QUESTIONNAIRE

STEP 1. Print or type: YOUR NAME, YOUR POSITION, NAME OF COLLEGE.

STEP 2. You may use various information (program characteristics) to evaluate the quality of academic or student support services programs. The purpose of this questionnaire is to determine your rating of the USEFULNESS of these characteristics in evaluating program quality. Rating choices are provided on a loose insert for quick reference. The program characteristics are organized into four categories concerning information about: I. Students; II. Faculty/Staff; III. Costs/Resources; IV. General. In each category you are to rate each characteristic for DEGREE OF USEFULNESS in making QUALITY-EVALUATION DECISIONS about programs. Examples are given for each category. Within each category space is provided for you to add characteristics. Rate any added characteristic in the same manner as described for other characteristics in the category. SCAN THE ENTIRE QUESTIONNAIRE BEFORE YOU BEGIN RATING. PLEASE USE A PENCIL FOR YOUR RESPONSES. DIRECT ANY QUESTIONS REGARDING THE QUESTIONNAIRE THROUGH YOUR COLLEGE'S PROJECT COORDINATOR TO THE IRC.

STEP 3. In the list below, check one or more program areas to indicate the perspective you will use in rating the characteristics for degree of usefulness in making quality-evaluation decisions about programs: General (no specific program area in mind); Advanced and Professional; Occupational; Developmental; Community Instructional Services; Student Support Services; Other (please specify).

[Letterhead: Cooperation for Progress Through Research -- Florida Community/Junior College Inter-Institutional Research Council]

I. PROGRAM CHARACTERISTICS RELATING TO STUDENTS

Student classifications: ENTERING = students at the time they begin a program; CURRENTLY ENROLLED = all students currently enrolled in a program, not just those beginning a program; COMPLETERS = students who have received a degree or certificate, finished an orientation program, etc.; LEAVERS = students who have withdrawn or otherwise left a program without officially completing it.

For each characteristic, in each of the four columns of student classifications, write a number selected from the rating choices (loose insert) to indicate your opinion of that characteristic's usefulness in making quality-evaluation decisions about programs.

EXAMPLE (employment status rated 3, 3, 1, 4): from the perspective checked in STEP 3, the "3" in the "entering" column and the "3" in the "currently enrolled" column indicate an opinion that the information on employment status of ENTERING and CURRENTLY ENROLLED students has "some usefulness" in making quality-evaluation decisions about programs. The "1" in the "completers" column and the "4" in the "leavers" column indicate respectively the opinions that information on employment status of COMPLETERS is "essential" and of LEAVERS is of "little or no usefulness" (or "not applicable") in making quality-evaluation decisions about programs.

CHARACTERISTICS (percent or number of students by):
1. sex classification
2. age classification
3. race classification
4. marital status
5. employment status
6. citizenship classification
7. household income classification
8. commuting distance categories
9. educational status of family
10. military service classification
11. language spoken in home
12. parents' occupational categories
13. source of financial support
14. type of handicap
15. career decision status
16. major area of study
17. type of high school diploma
18. degree level sought
19. part-time/full-time classification
20. number of hours enrolled
21. number of hours completed
22. number of hours withdrawn
23. number of hours incomplete
24. number of hours with failing grade
25. number of hours of developmental/remedial work
26. number of hours repeated
27. term GPA categories
28. cumulative GPA categories
29. cumulative GPA categories for program-related course work
30. number of CLEP hours earned
31. amount of time since last formal educational experience
32. level of previous academic achievement
33. number of years of related work
34. level of awareness of college's programs, services, etc.
35. I.Q. categories
36. personality types
37. self-concept categories
38. test anxiety levels
39. level of financial assistance desired
40. types of developmental or remedial assistance desired
41. level of public service involvement (however measured)
42. level of involvement in high school activities
43. academic skills level as measured by local instruments
44. academic skills level as measured by state instruments
45. academic skills level as measured by national instruments
46. scholastic honors, awards, or memberships earned (scholarships, honorary societies, etc.)
47. holding jobs for which trained
48. salary categories
49. rate or number of legal violations
50. time spent in program
51. number of jobs held since leaving program
52. performance on standardized state tests
53. performance on standardized national tests
54. use of various student services
55. need for various student services

PLEASE NOTE: for the following characteristics a single student classification is implied.
56. number of students enrolling in a program
57. percent of total college FTE in program
58. percent of total college unduplicated headcount in program
59. average GPA of students in program
60. average course load for students in program

The following characteristics relate to students who have transferred or are native to four-year colleges or universities. Student classifications: NATIVE STUDENTS; TRANSFERS WITH ASSOCIATE DEGREE; TRANSFERS WITHOUT ASSOCIATE DEGREE. Continue rating each student-related program characteristic for degree of usefulness in making program quality-evaluation decisions from the perspective checked in STEP 3.

CHARACTERISTICS (percent or number of students by):
enrollment in a four-year institution
type of college or university entered
(space was provided for respondents to add other characteristics)

II. PROGRAM CHARACTERISTICS RELATING TO FACULTY/STAFF

For part-time and full-time classifications, rate each faculty/staff-related characteristic for degree of usefulness in making program quality-evaluation decisions from the perspective checked in STEP 3.

EXAMPLE (degrees held rated 2 for part-time, 1 for full-time): from the perspective checked in STEP 3, the "2" in the "part-time" column indicates an opinion that information on degrees held by PART-TIME faculty/staff is "very useful" in making program quality-evaluation decisions. The "1" in the "full-time" column indicates an opinion that information on degrees held by FULL-TIME faculty/staff is "essential" in making program quality-evaluation decisions.

[The individual faculty/staff characteristics listed in this section are not legible in the digitized copy.]

III. PROGRAM CHARACTERISTICS RELATING TO COSTS AND RESOURCES

For each characteristic, in each of the three columns of reporting classifications (per program FTE, per program unduplicated headcount, per total program), write a number selected from the rating choices (loose insert) to indicate your opinion of that characteristic's usefulness in making quality-evaluation decisions about programs from the perspective checked in STEP 3. [The example and the individual costs/resources characteristics are not legible in the digitized copy.]

IV. PROGRAM CHARACTERISTICS RELATING TO GENERAL INFORMATION

For each of the five rating source alternatives (ratings by currently enrolled students, program completers, program leavers, faculty/staff, and community in general), rate each characteristic for degree of usefulness in making program quality-evaluation decisions.

EXAMPLE (ratings of a program curriculum rated 3, 1, 4, 4, 4): from the perspective checked in STEP 3, the "3" in the "currently enrolled students" column indicates an opinion that RATINGS BY currently enrolled students of a program curriculum are of "some usefulness" in making program quality-evaluation decisions. The "1" in the next column indicates an opinion that RATINGS BY program completers of a program curriculum are "essential" in making program quality-evaluation decisions. In the next three columns, the "4" indicates an opinion that RATINGS BY program leavers, faculty/staff, and community (general) of a program curriculum are of "little or no usefulness" (or "not applicable") in making program quality-evaluation decisions.

CHARACTERISTICS (ratings of):
1. a program curriculum
2. program facilities/equipment
3. program instructional strategies
4. program staff
5. program administration

(ratings of, continued):
6. support services
7. usefulness of student services
8. accessibility of student services
9. ease of use of student services

FOR THE FOLLOWING CHARACTERISTICS, SIMPLY RATE EACH CHARACTERISTIC FOR DEGREE OF USEFULNESS IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS FROM THE PERSPECTIVE CHECKED IN STEP 3.
10. employer opinion of program completers
11. employer opinion of program non-completers
12. job satisfaction ratings by completers
13. job satisfaction ratings by non-completers
14. ratings by external consultants
15. ratings by certification boards
16. ratings by accreditation agencies
17. number of alternative educational methods offered
18. program admission requirements
19. clearly stated program objectives
20. number/types of changes as a result of program evaluation
21. number/types of changes as a result of accreditation studies
22. level of demand for program/service by students
23. level of demand for program/service in service area
24. level of demand for program/service in state
25. level of demand for program/service in nation

STEP 4. Please provide the following information: years in present position; years at present college; years in community college education; years in education other than community college education; birthdate (mo./day/yr.); sex (female/male); highest degree held (bachelor, master, specialist, doctorate).

STEP 5. Please indicate your opinions of the following.

OPINION CHOICES:
1 = NONE ("None of my activities or time")
2 = LITTLE ("Less than one-fourth but more than none of my activities or time")
3 = SOME ("One-fourth or more but less than three-fourths of my activities or time")
4 = CONSIDERABLE ("Three-fourths or more but less than all of my activities or time")
5 = ALL (TOTAL) ("100% of my activities or time")

Using one of the OPINION CHOICES listed above, indicate your perception of: the degree to which your POSITION is associated with each program area (Advanced and Professional, Community Instructional Services, Developmental, Occupational, Student Support Services); the amount of TIME you spend in program quality-evaluation activities; and the extent of your INVOLVEMENT in program quality-evaluation decision making at your institution. Please indicate your perception of your LEVEL OF EXPERIENCE in program quality-evaluation decision making by checking one of the following: NONE, LITTLE, SOME, CONSIDERABLE. Please add any comments regarding the program quality-evaluation process at your college or any comments about this questionnaire (attach additional page if required).

STEP 6. Please return this questionnaire to the project coordinator at your college. Thank you for the expenditure of your time and energy on this project.

RATING CHOICES (loose insert)

1 = ESSENTIAL ("I do not see how I could make a judgment about the quality of a program without considering this characteristic")
2 = VERY USEFUL ("I would feel hindered in making a judgment about the quality of a program without considering this characteristic, but I would make a judgment without it")
3 = SOME USEFULNESS ("Although I would like to consider this characteristic in making a judgment about the quality of a program, I would not feel hindered in making a judgment without it")
4 = LITTLE OR NO USEFULNESS ("I probably would not consider this characteristic in arriving at a judgment of the quality of a program")

Please rate any "Not Applicable" judgments with a "4".

APPENDIX D
POSITION CODES USED IN THE CATEGORIZATION OF RESPONDENTS BY ADMINISTRATIVE AREA AND PROGRAM AREA WITH FREQUENCIES

Code  Title  Frequency
01  President  13
02  Executive Vice President  9
03  Provost/Center Director  24
04  Assistant to the President  4
05  Research and Planning  10
06  Development  8
07  Special Services  8
08  EA/EO Coordinator  8
09  Special Projects  7
10  Public Relations  3
11  Management Information Systems  5
12  Other General Administration  3
13  Chief Academic Officer  30
14  Program Director-Communications  12
15  Program Director-Mathematics  13
16  Program Director-Sciences  8
17  Program Director-Humanities  10
18  Program Director-Fine Arts  2
19  Program Director-Social Sciences  10
20  Program Director-Other General Education  10
21  Program Director-Business  20
22  Program Director-Industrial  9
23  Program Director-Allied Health  23
24  Program Director-Law Enforcement  2
25  Program Director-Other Technical Education  14
26  Program Director-Other Occupational Education  15
27  Director CETA/Cooperative Education  1
28  Instructional Resources  23
29  Director-Continuing Education/Community Instructional Services  21
30  Developmental Education  5
31  Other Academic Affairs  3
32  Chief Student Affairs Officer  36
33  Financial Aid  11
34  Counseling  10
35  Admissions  7
36  Veterans' Affairs  2
37  Registrar  11
38  Placement & Follow-Up  2
39  Student Activities  3
40  Athletics  2
41  Other Student Affairs  4
42  Chief Business Affairs Officer  6
43  Budget  3
44  Purchasing  1

[The remainder of the table (position codes 45-50) is not legible in the digitized copy.]

[The appendices between Appendix D and Appendix H (original pages 139-158) contained supporting data tables that are not legible in this digitized copy.]

APPENDIX H
Principal Axes Solution Based Upon N=450 With Final Communality Estimates and Eigenvalues

[The factor matrix itself is not legible in the digitized copy.]

APPENDIX I
Principal Axes Solution Based Upon N=315 With Final Communality Estimates and Eigenvalues

[The factor matrix itself is not legible in the digitized copy.]

[The remaining appendices in this portion of the document (original pages 171-188) contained further tables that are not legible in this digitized copy.]
PAGE 199

189 I O I I O I I IOI I i— I IOI IOI IO m r-~ void coo r-io ocnj 10 £. £i! ;r . J£> °5 . £J i2 IOO IOO IOO IOO IOO IOO IOO IOO oo ^S<->rS<-><-^oooOOOOOOOOOOOOO.— OOOOO.-OO.-roooooooooooooooooo wMXjsoooiwowoo.-srooiooioio.-Mw^^iDUimLnrauiono oooooooi— Oi— >— r-oooi— oo>— >— >— >— i— i— r—i— r-.cx3raotoio^«3rvMMfor^ifl«*aiLOLn OOOOOO^-f^f-OOOOOOOOOOOi-OOOr-r-r-r-r-i— ^-i— rI t-ocooooococNjco^^^^^^r^!^cororoc\jraajajoococo«d-c25:^2^ COCOf^COCOt^r— r-^^^^i— r-n-OOOi— r-c— COCOrO^^^f— r-r-r-i— r,Z ,_ ,_ r,— ,— OOOi— r— i— OOOOOOi— OONNCVIrr— r— OOOOOO I— CVJOi— CvJOf-CMOi— CvJOt— OdO-— C\JO>— CvJOi— CnJOi— CVJOi— CMOi— C\J ^iD^ooowwcMcoraoOMMroi^ininKmranrom^^Lftmroror-r-rmroMSLOinwinuir-rr— ojCMCvj«D^oiocorocoLnir)ir)000 cnjcmcm

PAGE 200

190 I i — 1 I i— I I o I I I O I I i— I I i— I in criOCOcriu3"5t «S-Oi — c\ii — o r— I— r— r— r— I— r— I— r— OOOOOOOr— OOOOOOOr— r— r— C\JOVOlf)CO(£)ODVDLf)lOCO^J ^«*ioooooc\ii — r— OOOOOOWWNri — i— r— r— r— r— i — i — OOOOOOOOO wr-^cot^cocr> OOOOOOr— r— r— CMCMCMr— r— r— r— r— r— OOOOOOOOO Oxji — or^vou^roc^(^cor^oocv)ovjrnro<^t\jcvicoro«^-Lr>Lf)r~«oOf^ ,— r— r— OOOr— r— r— OOOOOOOOONNWCMNNrr— r— r-r-r-C0 0:ivNNNC0Oiaiiriir)ir)Lnir)LOLnLr)^j-«d-«d-^)-^-^i-^-^3-"5j-cocococMCMCM LnLnLn^-^-^j-LnmLf) lovoiocococor— r— r— iovdio

PAGE 201

APPENDIX L

t Statistics for Mean Factor Score Comparisons Between Program Areas
Based on Assumption of Equal Variances

Pairwise t statistics for each factor, comparing mean factor scores among the five program areas: Advanced and Professional (N=65), Occupational (N=83), Student Services (N=88), Developmental (N=5), and Community Instructional Services (N=21). Letter superscripts denote t statistics recomputed for unequal variances; asterisks denote significance at the .10 level.
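The comparisons summarized in Appendix L are ordinary two-sample t statistics computed under the pooled (equal-variance) assumption. The sketch below illustrates that calculation for one pair of groups; the factor scores, means, and group labels are hypothetical stand-ins rather than the study's data, with only the group sizes echoing those listed above.

```python
import numpy as np

def pooled_t(x, y):
    """Two-sample t statistic under the equal-variance (pooled) assumption."""
    n1, n2 = len(x), len(y)
    s1, s2 = np.var(x, ddof=1), np.var(y, ddof=1)           # unbiased sample variances
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)   # pooled variance estimate
    t = (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return t, n1 + n2 - 2                                    # statistic and its degrees of freedom

# Hypothetical factor scores for two program areas (illustration only, not the study's data).
rng = np.random.default_rng(0)
advanced_professional = rng.normal(0.1, 1.0, 65)    # N=65, as in the Advanced and Professional group
occupational = rng.normal(-0.1, 1.0, 83)            # N=83, as in the Occupational group
t, df = pooled_t(advanced_professional, occupational)
print(f"t = {t:.3f} on {df} degrees of freedom")
```

Applying the same calculation to every pair of program areas, factor by factor, yields one t value per cell of the appendix.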

APPENDIX M

t Statistics for Mean Factor Score Comparisons Between Administrative Areas
Based on Assumption of Equal Variances

Pairwise t statistics for each of the nine factors, comparing mean factor scores among the six administrative areas: General Administration (N=89), Academic Affairs (N=210), Student Affairs (N=88), Community Instructional Services (N=21), Business Affairs (N=29), and Presidents (N=13). Letter superscripts denote t statistics recomputed for unequal variances; asterisks denote significance at the .10 level.
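The lettered superscripts indicate comparisons that were recomputed with the unequal-variance form of the t statistic, and the truncated footnote suggests that choice was keyed to an F comparison of the two sample variances; the exact criterion is not legible in the source. The sketch below shows one way such a rule can be implemented. It is a minimal illustration only: the .10 cutoff for the F test, the group labels, and the scores are assumptions, not the study's procedure or data.

```python
import numpy as np
from scipy import stats

def compare_group_means(x, y, f_alpha=0.10):
    """Pooled t when an F test accepts equal variances, Welch t otherwise (assumed decision rule)."""
    s1, s2 = np.var(x, ddof=1), np.var(y, ddof=1)
    if s1 >= s2:
        f_ratio, dfn, dfd = s1 / s2, len(x) - 1, len(y) - 1
    else:
        f_ratio, dfn, dfd = s2 / s1, len(y) - 1, len(x) - 1
    p_f = min(1.0, 2.0 * stats.f.sf(f_ratio, dfn, dfd))      # two-sided p value for the variance ratio
    equal_var = p_f >= f_alpha
    result = stats.ttest_ind(x, y, equal_var=equal_var)      # equal_var=False gives the Welch form
    return result.statistic, result.pvalue, equal_var

# Hypothetical factor scores for two administrative areas (illustration only, not the study's data).
rng = np.random.default_rng(1)
academic_affairs = rng.normal(0.0, 1.0, 210)   # N=210, as in the Academic Affairs group
presidents = rng.normal(0.4, 1.6, 13)          # N=13, as in the Presidents group
t, p, equal_var = compare_group_means(academic_affairs, presidents)
print(f"t = {t:.3f}, p = {p:.3f}, equal variances assumed: {equal_var}")
```

The unequal-variance (Welch) form replaces the pooled variance with the separate group variances in the denominator, which matters most when one group is as small as Developmental (N=5) or Presidents (N=13).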

BIOGRAPHICAL SKETCH

Thomas Albert Steuart was born in Tampa, Florida, in 1938. He lived in Tampa until his graduation from Hillsborough High School in 1956, when he moved to Gainesville, Florida, to attend the University of Florida. He received his B.S. in mathematics from the University of Florida in 1963 and his M.Ed. in educational psychology in 1970. During the interim, he received a B.D. in theology from Duke University. He was married in 1960 and fathered a daughter and a son. He spent two years in the Methodist ministry and in 1972 began teaching psychology at Northern Virginia Community College in Alexandria. He moved to Henderson, Kentucky, and taught at Henderson Community College until 1976, when he assumed a position as a substance abuse counselor at Southwestern Indiana Mental Health Center in Evansville. After three years and a divorce, he returned to the University of Florida to pursue his Ph.D. in higher education administration with a career goal in community college institutional research. While working on his doctorate, he was a graduate research associate with the Florida Community/Junior College Inter-Institutional Research Council. He is presently employed part-time at the College of Nursing at the University of Florida as a biostatistician while seeking a position in a community college.


I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

John M. Nickens
Professor of Educational Administration and Supervision

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

James L. Wattenbarger
Professor of Educational Administration and Supervision

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Robert S. Soar
Professor of Foundations of Education

This dissertation was submitted to the Graduate Faculty of the Department of Educational Administration and Supervision in the College of Education and to the Graduate Council, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

April 1983

Dean for Graduate Studies and Research
