Permanent Link: http://ufdc.ufl.edu/UF00099602/00001
 Material Information
Title: A Multiple-factor analysis to identify underlying dimensions of multiple indicators of quality rated as useful in making program quality-evaluation decisions by administrators in Florida's community colleges
Physical Description: x, 207 leaves : ill. ; 28 cm.
Language: English
Creator: Steuart, Thomas Albert, 1938-
Publication Date: 1983
Copyright Date: 1983
 Subjects
Subject: Community colleges -- Administration -- Florida   ( lcsh )
Educational accountability -- Florida   ( lcsh )
Decision making   ( lcsh )
Educational Administration and Supervision thesis Ph. D
Dissertations, Academic -- Educational Administration and Supervision -- UF
Genre: bibliography   ( marcgt )
non-fiction   ( marcgt )
 Notes
Thesis: Thesis (Ph. D.)--University of Florida, 1983.
Bibliography: Bibliography: leaves 199-206.
General Note: Typescript.
General Note: Vita.
Statement of Responsibility: by Thomas Albert Steuart.
 Record Information
Bibliographic ID: UF00099602
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000372976
oclc - 10026397
notis - ACB2145









A MULTIPLE-FACTOR ANALYSIS
TO IDENTIFY UNDERLYING DIMENSIONS OF MULTIPLE INDICATORS OF QUALITY
RATED AS USEFUL IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS
BY ADMINISTRATORS IN FLORIDA'S COMMUNITY COLLEGES









BY

THOMAS ALBERT STEUART


A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL
FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY









UNIVERSITY OF FLORIDA

1983














ACKNOWLEDGEMENTS


During the past two years, many persons have assisted and encour-

aged me while I have been engaged in the research that has culminated

in this dissertation. Regretfully, only a few can be mentioned here.

I would like to thank Dr. John M. Nickens, my committee chairman, and

the many members of the Florida Community/Junior College Inter-Institu-

tional Research Council, who, through a research assistantship, supplied

most of my financial support. I would like to express my gratitude to

the other members of my committee, Dr. James L. Wattenbarger and Dr.

Robert S. Soar, whose patience with me has been unending. I express a

great debt to C.B. "Bix" Rathburn, III, who, as a fellow research assis-

tant, provided me with constant feedback and invaluable emotional sup-

port. I wish to thank Dr. Wilson Guertin for his consultations regard-

ing the factor analysis procedures used in this study. Teresa Agrillo,

who typed and edited this dissertation, deserves more than I can give

her. Finally, I wish to acknowledge James D. Cook for his continuing

emotional and financial support, without which this dissertation would

never have been completed.














TABLE OF CONTENTS


Page

ACKNOWLEDGEMENTS ................................................... ii

LIST OF TABLES................................................vi

ABSTRACT......................................................... ix

CHAPTER

I INTRODUCTION............................................1

Rationale .............................................3

Theoretical Rationale.............................. 3
Operational Rationale...............................7

The Problem...........................................9

Need for the Study...................................10

Delimitations and Limitations.........................11

Definition of Terms .................................. 12

Organization of the Research Report...................13

II REVIEW OF RELATED LITERATURE.............................14

Educational Evaluation................................14

Toward a Definition of Educational Evaluation.......15
Contemporary Models of Educational Evaluation.......17
Decision-Oriented Model of Educational Evaluation...20

Quality Assessment in Higher Education................23

Graduate Education............................... 25
Undergraduate Education.............................31
Quantifiable Approaches to Quality.................36

Determining Underlying Dimensions: Factor Analysis....40








TABLE OF CONTENTS (continued)

Page

Applicability of Factor Analysis...................40
Definition of Factor Analysis......................43
Steps in Factor Analysis...........................44

III METHODOLOGY..........................................50

Description of Data Used...............................50

Analysis of the Data...................................53

Research Question One..............................53
Research Question Two..............................56

IV RESULTS AND DISCUSSION..................................58

Factor Analysis Results..................................59

Interpretation of the Factors..........................70

Factor Score Comparisons..............................82

Program Areas ................................... 82
Administrative Areas................................90

Summary ............................................... 97

V SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FURTHER
STUDY................................................. 99

Summary............................................... 99

Conclusions............................................102

Recommendations for Further Study......................109

APPENDICES

A CLASSIFICATIONS OF RESPONDENTS USED IN DATA ANALYSIS......111

B DESCRIPTION OF IRC PROJECT METHODS AND PROCEDURES.........113

C PROGRAM QUALITY INDICATORS PROJECT QUESTIONNAIRE.........123

D POSITION CODES USED IN THE CATEGORIZATION OF RESPONDENTS
BY ADMINISTRATIVE AREA AND PROGRAM AREA WITH FREQUENCIES..136

E MEAN RATINGS FOR PROGRAM CHARACTERISTICS FOR N=450 AND
N=315................................................139









TABLE OF CONTENTS (continued)


Page
F CORRELATION COEFFICIENTS FOR INTERCORRELATIONS OF PRO-
GRAM CHARACTERISTICS FOR N=450 ...........................143

G CORRELATION COEFFICIENTS FOR INTERCORRELATIONS OF PRO-
GRAM CHARACTERISTICS FOR N=315 ...........................151

H PRINCIPAL AXES SOLUTION BASED UPON N=450 WITH FINAL
COMMUNALITY ESTIMATES AND EIGENVALUES....................159

I PRINCIPAL AXES SOLUTION BASED UPON N=315 WITH FINAL
COMMUNALITY ESTIMATES AND EIGENVALUES.....................165

J FACTOR STRUCTURES FOR THE THREE ROTATIONS OF THE PRINCI-
PAL AXES BASED UPON N=315................................171

K FACTOR STRUCTURES FOR THE THREE ROTATIONS OF THE PRINCI-
PAL AXES BASED UPON N=450 ................................ 181

L t STATISTICS FOR MEAN FACTOR SCORE COMPARISONS BETWEEN
PROGRAM AREAS BASED ON ASSUMPTION OF EQUAL VARIANCES......191

M t STATISTICS FOR MEAN FACTOR SCORE COMPARISONS BETWEEN
ADMINISTRATIVE AREAS BASED ON ASSUMPTION OF EQUAL
VARIANCES...............................................194

REFERENCES. ................ ........................................ 199

BIOGRAPHICAL SKETCH...............................................207














LIST OF TABLES


Table Page

1 Variance Accounted for by Successive Principal Axes for
N=315.................................................... 60

2 Program Characteristics With Factor Loadings of .50 or
Greater in the Three Rotations of the Principal Axes
Solution Based Upon N=315 ........... ........ .............. 6

3 Variance Accounted for by Successive Principal Axes for
N=450....................................................65

4 Program Characteristics With Factor Loadings of .50 or
Greater in the Three Rotations of the Principal Axes
Solution Based Upon N=450.................................66

5 Intercorrelations of the Factors for the 10-Factor Ro-
tation of the Principal Axes Solutions for N=315 and
N=450 ..................................................... 69

6 Coefficients of Congruence Between the Comparable Fac-
tors for the 10-Factor Structures for N=315 and N=450..............70

7 Program Characteristics With .50 or Greater Loadings on
Factor 1 ............................................. 71

8 Program Characteristics With .50 or Greater Loadings on
Factor 2.................................................. 72

9 Program Characteristics With .50 or Greater Loadings on
Factor 3...................................................73

10 Program Characteristics With .50 or Greater Loadings on
Factor 4...................................................74

11 Program Characteristics With .50 or Greater Loadings on
Factor 5...................................................75

12 Program Characteristics With .50 or Greater Loadings on
Factor 6...................................................76

13 Program Characteristics With .50 or Greater Loadings on
Factor 7...................................................77








LIST OF TABLES (continued)


Table Page

14 Program Characteristics With .50 or Greater Loadings on
Factor 8................................................... 78

15 Program Characteristics With .50 or Greater Loadings on
Factor 9.................................................... 79

16 Number of Respondents Per Program Area and Corresponding
Percentages of All Respondents (N=450).......................83

17 Mean Factor Scores and Standard Deviations for Respondents
Grouped by Program Area.................................... 84

18 Number of Respondents Per Administrative Area and Corre-
sponding Percentages of All Respondents (N=450).............91

19 Mean Factor Scores and Standard Deviations for Respondents
Grouped by Administrative Area..............................93














LIST OF FIGURES


Figure Page

1 Sample Format for Program Quality-Evaluation Information
Report.................................................... .106

2 Sample Format for Program Quality-Evaluation Information
Profile....................................................108


viii











Abstract of Dissertation Presented
to the Graduate Council of the University of Florida
in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy


A MULTIPLE-FACTOR ANALYSIS
TO IDENTIFY UNDERLYING DIMENSIONS OF MULTIPLE INDICATORS OF QUALITY
RATED AS USEFUL IN MAKING PROGRAM QUALITY-EVALUATION DECISIONS
BY ADMINISTRATORS IN FLORIDA'S COMMUNITY COLLEGES

BY


Thomas Albert Steuart


April 1983


Chairman: John M. Nickens
Major Department: Educational Administration
and Supervision

The purpose of this study was the identification of any underlying

dimensions within multiple quality indicators rated by administrators in

Florida public community/junior colleges as highly useful in making pro-

gram quality-evaluation decisions. It was theorized that utilization of

such dimensions to organize and provide information to administrators

should result in a format that they would find most useful since it

should reflect those aspects of their value systems relevant to the de-

fined decision situations.

Of 631 administrators identified to participate in the study, 450 re-

sponded by rating 454 items on a survey questionnaire for degree of use-

fulness in program quality-evaluation decision making. The correlation

matrix of the intercorrelations of the mean responses of the 108 most








highly rated items was factor analyzed using the iterated principal

axes method and an orthogonal rotation to the varimax criterion with an

oblique rotation to determine intercorrelation of factors. This analy-

sis resulted in the identification of a factor structure accounting for

80.5% of the common variance that contained nine interpretable factors.

The nine dimensions involved information relating to: (1) fiscal,

physical, and human resources; (2) student ratings of support services;

(3) instructional productivity of faculty; (4) assessments of any physi-

cal or cognitive needs of students relevant to their performance in their

selected programs; (5) ratings of selected aspects of programs by stu-

dents; (6) indicators of the quantitative output of a program; (7) se-

lected attributes of full-time and part-time faculty; (8) ratings of se-

lected aspects of programs by faculty; and (9) indicators of the respon-

siveness of a program to certification and accreditation agencies, the

local community, the students, and the state.

The recommendation was made that further research in program quality-

evaluation involve more direct investigation of the attitudes of the de-

cision maker involved and the development of instruments that will facil-

itate the identification of attitudinal dimensions relevant to the de-

fined decision situation.
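
For readers who wish to see the shape of such an analysis in concrete terms, the following Python fragment is a minimal sketch, not a reproduction of the study's original computation: it assumes a hypothetical respondent-by-item matrix of usefulness ratings (simulated below) and uses the open-source factor_analyzer package to approximate principal-axis factoring with a varimax rotation, followed by an oblique (promax) rotation whose factor scores are correlated to gauge factor intercorrelations. All names, parameters, and data in the sketch are illustrative assumptions.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated stand-in for the study's data: 450 "respondents" rating 108
# "items"; a low-rank structure plus noise plays the role of the real
# usefulness ratings, which are not reproduced here.
rng = np.random.default_rng(0)
latent = rng.normal(size=(450, 10))
pattern = rng.normal(size=(10, 108))
ratings = pd.DataFrame(latent @ pattern + rng.normal(scale=2.0, size=(450, 108)),
                       columns=[f"item_{i + 1}" for i in range(108)])

# Principal-axis factoring with an orthogonal (varimax) rotation.
fa = FactorAnalyzer(n_factors=10, rotation="varimax", method="principal")
fa.fit(ratings)
loadings = pd.DataFrame(fa.loadings_, index=ratings.columns)
salient = loadings[(loadings.abs() >= 0.50).any(axis=1)]   # the .50 cutoff used in the study
_, _, cumulative = fa.get_factor_variance()
print("Cumulative proportion of variance explained:", cumulative[-1])

# An oblique (promax) rotation of the same data; correlating its factor
# scores gives a rough view of how strongly the factors intercorrelate.
fa_oblique = FactorAnalyzer(n_factors=10, rotation="promax", method="principal")
fa_oblique.fit(ratings)
factor_corr = np.corrcoef(fa_oblique.transform(ratings), rowvar=False)
print(np.round(factor_corr, 2))

Note that the original analysis operated on the correlation matrix of mean item responses; the sketch above operates on a raw respondent-by-item matrix instead, which the package correlates internally.
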















CHAPTER I
INTRODUCTION


During the 1970s when public confidence in higher education waned

and financial resources became less abundant, there was an emphasis on

accountability. This resulted in a rapid increase in evaluation activi-

ties related to higher education. A major focus of these activities was

the maintenance or improvement of the quality of programs offered by

higher education institutions within the context of a broadening of stu-

dent access in a time of fiscal constraint (Craven, 1980, p. vii).

The conditions of fiscal austerity and the demands for accountability

within the context of broadening student access to higher education have

continued into the 1980s (Craven, 1980). There has been an increasing

concern for maintaining or improving the quality of programs offered by

higher education institutions. The concern is shared by persons within

higher education institutions, state level coordinating or governing

boards, other state executives, and state legislators (Bowen, 1974;

Craven, 1980; "Legislators stress quality improvement," 1980). As Finn

(1980) correctly perceived, quality has emerged as the premier concern

in higher education for the 1980s.

Although it is the premier concern, quality in higher education is a

concept that can mean all things to all people (King, 1981). If used

too loosely with little or no definition, the concept provides little

guidance. If defined too strictly, the concept is of limited use for a

diverse system of higher education (Finn, 1980, p. 2).









Traditionally, the quality of a program or institution in higher ed-

ucation has been determined by subjective evaluations of experts. One

criticism of this approach has been that 20 to 30 higher education in-

stitutions have been identified consistently as institutions of quality,

with all other institutions of higher education virtually ignored

(Lawrence & Green, 1980, p. 1). Another criticism has been that the

bases of these evaluations have been related to the missions and goals

of the institutions identified, and that institutions with other missions and

goals, such as community colleges, have been excluded (Bowen, 1974;

Fotheringham, 1978). Usually researchers in higher education have tried

to avoid constitutively defining quality, but have operationally defined

it through their choices of research designs and evaluative criteria

(Astin & Henson, 1977; Blackburn & Lingenfelter, 1973; Cartter, 1966;

Krause, 1970).

However quality is defined, the determination of educational quality

involves decision making by program administrators, which requires the

use of some information about the program being evaluated. This is con-

sonant with the theory of evaluation developed by Stufflebeam, Foley,

Gephart, Guba, Hammond, Merriman, and Provus (1971). They defined eval-

uation as "the process of delineating, obtaining, and providing useful

information for judging decision alternatives" (p. 4). Thus, the making

of quality evaluations about educational programs may be described as a

process of delineating what information about programs is useful to ad-

ministrators making quality-evaluation decisions, obtaining that infor-

mation, and providing it in a format useful to those administrators.

This definition of program quality evaluation formed the basis of the

rationale for this study.









Rationale

Theoretical Rationale

Delineation, the first operational step in program quality-evaluation

decision making, involves "the identification of the most useful infor-

mation" (Stufflebeam et al., 1971, p. 41). Although Stufflebeam et al.

did not specify a methodology for accomplishing this step, they did spec-

ify that it could be accomplished successfully "by the evaluator only in

interaction with his client [the decision maker]" (p. 41). The second

step, obtaining, was described as "the more technical aspect of evalua-

tion" (p. 42) and consists of "collecting, organizing, and analyzing

[the data delineated as most useful]" (p. 42). The providing phase of

evaluation involves reporting the delineated and obtained data to the

decision maker "in ways that he finds credible and helpful" (p. 17).

According to Stufflebeam et al. (1971), although there existed much

knowledge and many methodologies for collecting data, "the interface role

of delineating information needs with the decision makers and the simi-

lar interface role of providing information to audiences are not so well

developed in theory or practice" (pp. 139-140). Furthermore, they stated

that "a most glaring and conspicuous omission in this [their] book is the

failure to provide operational guidance for the evaluator as he plays

this interface role [of providing information]" (p. 336). It was the

theory and methodology of the providing phase of evaluation as defined

by Stufflebeam et al. (1971) with which this study was concerned.

Craven (1980) indicated that evaluation processes for the 1980s must

be capable of "providing the desired information in an appropriate for-

mat" (p. 111). How might an evaluator determine an appropriate format

for providing the desired information to decision makers when multiple









items of information have been identified as highly useful in a particu-

lar decision situation? A theoretical basis for resolving this problem

was suggested, but not developed, by Stufflebeam et al. (1971) in their

discussion of the relationship between the items of information identi-

fied as most useful in a defined decision situation and the values of

the decision maker in interaction with whom the items have been deline-

ated. They stated that it is the value system of the decision maker,

especially those aspects of his value system related to a particular de-

cision situation, that determines whether an item of information is rele-

vant to that decision situation (pp. 108-109). The items of information

or variables identified as most useful in a defined decision situation

are not themselves the criteria used to assess the decision situation,

but they are the variables to which the decision maker applies his cri-

teria. On the one hand, the criteria are statements of the means of

measuring the variables and, on the other hand, they are "yardsticks for

values" (p. 109). Values were defined as "predefined states of certain

variables" (p. 108). Presumably, when translated into a means of assess-

ment, "predefined states" equal "criteria" and "certain variables" equal

the information identified as most useful in the defined decision situ-

ation in interaction with the relevant decision maker.

For the purpose of this study, the important point was that the

items of information (variables) identified as most useful in interaction

with the relevant decision maker reflect those aspects of his value sys-

tem that are related to the defined decision situation. If this is true,

as theorized by Stufflebeam et al. (1971), it forms a basis for an

approach that an evaluator may use in determining how to provide multi-

ple items of information in a format that a decision maker should find

"credible and helpful" (p. 17).









The problem is similar to that encountered by psychologists in

attempting to describe human personality (Cattell, 1950). With hundreds

of terms defining traits by which persons could be described, there was

a search for "dimensions of personality" (p. 26) that would facilitate

the description of personality (pp. 26-27). Cattell theorized that the

multiple descriptors of personality, which he labeled "surface traits"

(pp. 21-22), could be accounted for by considerably fewer dimensions,

which he labeled "source traits" (p. 27). Additionally, he theorized

that the source traits were "the real structural influences underlying

personality" (p. 27).

Similarly, it was theorized in this study that for a set of multiple

items of information identified in the delineation phase of an evaluation

process, based on the theory of evaluation developed by Stufflebeam et

al. (1971), there are considerably fewer underlying dimensions that may

be identified and used in developing guidelines for providing information

in a format that decision makers should find useful in a defined decision

situation. If it is true that the items of information identified in the

delineation phase reflect those aspects of a decision maker's value sys-

tem relevant to a defined decision situation, then the underlying dimen-

sions of those items should reflect the dimensions of a decision maker's

value system relevant to that decision situation. If the latter is true,

then utilizing those underlying dimensions to organize those items should

result in providing information in a format that a decision maker should

find credible and helpful, since that format should approximate closely

the dimensions of those aspects of his value system being used in the

decision-making process in the defined decision situation.









This theory may be extended to a decision situation where multiple

decision makers are involved. The identified items of information in

such a decision situation would reflect a hypothetical value system of

"aggregate values" (Stufflebeam et al., 1971, p. 113) of the relevant

decision makers. In such a decision situation, the underlying dimensions

of the identified items of information should reflect the dimensions of

the hypothetical aggregate value system. They should reflect the dimen-

sions of the relevant aspects of an individual decision maker's value

system only to the degree that these dimensions are reflected in the

aggregate value system. Therefore, it may be expected that utilizing

those underlying dimensions to organize the identified items should re-

sult in providing information to the decision makers in a format more or

less credible and helpful to an individual decision maker to the degree

that relevant dimensions of his value system are reflected in the aggre-

gate value system.

Based upon this theory, an appropriate methodology for determining

the underlying dimensions of a set of multiple items of information iden-

tified as most useful in a defined decision situation would be the same

as that used by Cattell (1950) for identifying the underlying dimensions

of personality: the multi-variate technique of factor analysis. For a

set of variables that individuals can rate or in some manner assess, the

technique of multi-factor analysis can be used to determine the dimen-

sions of any underlying pattern of the ratings or other measurements of

that set of variables. For example, multiple items of information iden-

tified as useful in a defined decision situation may be rated by the

relevant decision makers for varying degrees of usefulness. Subsequently,

these ratings can be factor analyzed to identify underlying dimensions of









the degree of usefulness of the items. The results of such an analysis

should provide the evaluator with some guidelines for organizing the

items to increase the probability that the decision makers will find the

format of the provided information credible and helpful, i.e., useful in

the decision-making process in the defined decision situation. This ex-

tension of the theory of evaluation proposed by Stufflebeam et al. (1971)

and the suggested methodology should supply evaluators the needed guid-

ance in their role of providing information in a format useful to deci-

sion makers in a defined decision situation.
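
As a purely numerical illustration of this argument (not drawn from the study's data), the short Python sketch below generates ratings of 30 hypothetical items from only three underlying dimensions and then examines the eigenvalues of the item correlation matrix; nearly all of the shared variation is concentrated in the first three axes, which is the pattern factor analysis is designed to detect. The dimension counts, item counts, and data are assumptions made for this example only.

import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items, n_dims = 500, 30, 3

# Three hypothetical value dimensions generate the ratings of 30 items.
dimensions = rng.normal(size=(n_respondents, n_dims))
weights = rng.normal(size=(n_dims, n_items))      # how each item reflects each dimension
noise = rng.normal(scale=0.8, size=(n_respondents, n_items))
ratings = dimensions @ weights + noise            # simulated usefulness ratings

corr = np.corrcoef(ratings, rowvar=False)         # 30 x 30 item correlation matrix
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigenvalues[:5], 2))               # a few large eigenvalues, then a sharp drop
print(round(eigenvalues[:3].sum() / eigenvalues.sum(), 2))  # share captured by three dimensions
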

Operational Rationale

This study involved the application of this theory and methodology

to an appropriate set of items of information identified as useful in a

defined decision situation in order to identify the underlying dimensions

of those items and to utilize the identified dimensions to develop guide-

lines for organizing these items into a format that should be useful to

the relevant decision makers in the defined decision situation.

Since the quality of programs has been cited as the premier concern

in higher education for the 1980s, the decision situation selected for

this study was the making of quality-evaluation decisions about programs

in Florida's public community/junior colleges. In Florida, Governor

Graham's program for education contained a commitment to assure the cit-

izens of Florida the opportunity to obtain a quality education at every

level of public education including higher education. This commitment

was reflected in a resolution adopted by the Florida State Board of Edu-

cation in January, 1981, that included the following statement:

On a statewide average, educational achievement in the state of
Florida will equal that of the upper quartile of states within
five years, as indicated by commonly accepted criteria of attain-
ment. (State Board of Education, 1981)








The Division of Community Colleges in Florida is under a mandate from the

State Department of Education to identify "certain indicators of quality

which can be used system-wide to give evidence of quality improvement"

(Division of Community Colleges, 1982, p. 1).

The members of the Florida Community/Junior College Inter-Institu-

tional Research Council (IRC), a research consortium of Florida public

community/junior colleges, conducted a project that addressed the problem

of identifying indicators of quality useful in program quality-evaluation

decision-making in Florida public community/junior colleges (Florida Com-

munity/Junior College Inter-Institutional Research Council, 1981). This

project was based upon the theory of evaluation developed by Stufflebeam

et al. (1971). In interaction with the relevant administrators, the

project identified more than 100 indicators of quality as highly useful

in making program quality-evaluation decisions. The indicators of qual-

ity identified were representative of many of those identified in other

studies. A large number of administrators (450 respondents) were in-

volved in this project, representing almost all of the public community/

junior colleges in Florida. Although multiple indicators of quality were

identified as highly useful, there was no attempt in this project to iden-

tify any underlying dimensions of these multiple indicators to utilize in

developing guidelines for providing the desired information to the rele-

vant administrators in a useful format.

All of the aspects of the IRC project described previously supported

the use of the data from that project to test the theory that for a set

of multiple items of information identified in the delineation phase of

an evaluation process, there are considerably fewer underlying dimensions

that may be identified and used in developing guidelines for providing








information in a format that decision makers should find useful in a de-

fined decision situation. Also, because that project found considerable

variability in the information rated as highly useful by respondents

classified in various program and administrative areas, there was an op-

portunity to investigate whether there were any significant differences

between these classifications within any identified underlying dimension

of the multiple indicators of quality.
The Problem

Based on the theory of evaluation developed by Stufflebeam et al.

(1971) and extended in this study, it was expected that multiple items

of information identified by the relevant decision makers as useful in a

defined decision situation would contain underlying dimensions that could

be identified through the use of the technique of factor analysis.

Specifically, this study proposed:

1. To determine any underlying dimensions of the multiple items of

information rated as highly useful in program quality-evaluation

decision making by administrators involved in such decision mak-

ing in Florida public community/junior colleges.

2. To determine if there were any significant differences in the

degree of emphasis within any identified underlying dimension

between the administrators classified within the Advanced and

Professional, Occupational, Developmental, Community Instruc-

tional Services, and Student Services program areas.

3. To determine if there were any significant differences in the

degree of emphasis within any identified underlying dimension

between the administrators classified within the administrative

areas of General Administration, Academic Affairs, Business









Affairs, Student Affairs, Community Instructional Services, and

Presidents.

4. From the results of these analyses, to develop guidelines for or-

ganizing the identified multiple indicators of quality into a

format that should be useful to the administrators involved in

making program quality-evaluation decisions in Florida public

community/junior colleges.
Need for the Study

There was a need to develop further that aspect of the theory of eval-

uation proposed by Stufflebeam et al. (1971) that related to an evalua-

tor's role of providing information (pp. 139-140, 336). In relation to

the developed theory, there was a need "to provide operational guidance

for the evaluator" (p. 336) in the role of providing information in a

format that a decision maker should find "credible and helpful" (p. 17).

Craven (1980) stated that to address effectively the higher education

issues of the 1980s, there was a need for evaluation processes to provide

the desired information in an appropriate format (p. 111). Since only

one study relating to quality evaluation in higher education was found

that used the technique of factor analysis to determine underlying dimen-

sions (Astin & Solmon, 1981), there appeared to be a need for studies to

demonstrate the methodology for determining guidelines for organizing the

considerable amount of information desired by administrators for evaluat-

ing program quality into formats useful in the decision-making process.

Also, due to the large amount of information identified as useful in pro-

gram quality-evaluation decision making by administrators in Florida pub-

lic community/junior colleges, there was a need to determine guidelines

for organizing that specific information into a format that should be









useful to the administrators involved in the quality-evaluation decision

process in Florida public community/junior colleges (Steuart & Rathburn,

1982, p. 185).

Delimitations and Limitations

This study was confined to administrators in Florida public commun-

ity/junior colleges who were classified by their institutions as execu-

tive, administrative, or managerial personnel under part three of the

"Personnel and Salary Report (SA-1)" as defined in the Community College

Management Information Systems Procedures Manual of the State of Florida

(Division of Community Colleges, 1980, pp. 10.1-10.2). Of the 631 admin-

istrators identified and surveyed, 450 responded for a response rate of

71.3% (Steuart & Rathburn, 1982, p. 45). Although a response rate of

this magnitude is generally considered acceptable, the respondents may

still have differed from the nonrespondents in ways that affected their

responses. Thus, the responses might

not be representative of the identified population. Since the study was

confined to administrators in community colleges, the results are gener-

alizable to administrators in other types of colleges only to the extent

that they share attitudes toward program quality evaluation similar to

the respondents in this study. The results are not generalizable to ad-

ministrators in community college systems in other states except to the

degree that they share attitudes toward program quality evaluation simi-

lar to respondents in this study.

The data used in this study were collected by means of a survey ques-

tionnaire. Although face validity was established for the questionnaire

through the use of a review panel, reliability of the questionnaire was

not established. Therefore, it is not known if similar results would be









obtained from the same respondents if they were surveyed again. The re-

sults can be taken only as descriptive of the opinions of the administra-

tors at the time the questionnaire was administered. Also, although the

questionnaire was designed to be comprehensive in relation to the descrip-

tive information it contained about programs offered by the community col-

leges, some information that might be related to quality-evaluation deci-

sion making might have been excluded.

The analytic technique of factor analysis used in this study has sev-

eral limitations associated with it. There are no hard and fast guide-

lines for determining the number of factors to rotate in attempting to

achieve a simple factor pattern. Another researcher might choose differ-

ent criteria and rotate a different number of factors and would, there-

fore, obtain different results. Also, factor analysis assumes a linear

relationship between the variables involved in the analysis. Any other

relationship would be inaccurately represented by a factor-analytic pat-

tern.
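
To illustrate the kind of judgment involved, the brief Python sketch below (using simulated data and assumed thresholds, not the study's) applies two common factor-retention rules of thumb, Kaiser's eigenvalue-greater-than-one criterion and a cumulative-variance threshold, to the same correlation matrix; the two rules need not agree, which is precisely why another researcher could reasonably rotate a different number of factors.

import numpy as np

rng = np.random.default_rng(2)
# Simulated stand-in data: 450 "respondents" rating 40 correlated "items".
data = rng.normal(size=(450, 8)) @ rng.normal(size=(8, 40)) \
       + rng.normal(scale=1.0, size=(450, 40))
corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

kaiser_k = int((eigenvalues > 1.0).sum())                # eigenvalue-greater-than-one rule
cumulative = np.cumsum(eigenvalues) / eigenvalues.sum()
variance_k = int(np.searchsorted(cumulative, 0.80) + 1)  # smallest k reaching 80% of variance

print(f"Kaiser criterion retains {kaiser_k} factors; "
      f"an 80%-of-variance criterion retains {variance_k}.")
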

Definition of Terms

Administrative Areas. The basic divisions of responsibility for ad-

ministrators in a comprehensive community college in Florida including

General Administration, Academic Affairs, Business Affairs, Student

Affairs, Community Instructional Services, and Presidents. Each of

these areas is operationally defined in Appendix A.

Dimension. A cluster of program characteristics the ratings of which

by the respondents tend to vary in similar ways. Each factor identified

from the factor analysis in this study represents a dimension of the un-

derlying interrelationships of the ratings of the program characteristics.









Evaluation. The process of delineating, obtaining, and providing

useful information for decision making in a defined decision situation.

Program Areas. The five basic operational areas of a comprehensive

community college in Florida including the Advanced and Professional,

Occupational, Developmental, Community Instructional Services, and Stu-

dent Services areas (Division of Community Colleges, 1981, p. 6). Each

of these areas is operationally defined in Appendix A.

Program Characteristics. Any information relating to or describing

a program offered by a college.

Program Quality-Evaluation Decision Making. The evaluation process,

involving the use of relevant information, leading to a judgment by the

responsible administrators of the quality of a program.

Underlying Pattern. The interrelationships of the correlations of

the ratings by respondents among the program characteristics identified

as highly useful in quality-evaluation decision making.

Usefulness. The determination of the serviceability or utility of

a program characteristic in making judgments about the quality of a pro-

gram.

Organization of the Research Report

The chapters in the remainder of this report are organized as follows.

Chapter II presents a review of selected literature relevant to this

study. Chapter III describes the methodology used in this study. Chap-

ter IV presents the results of this study. Chapter V summarizes and dis-

cusses the results with conclusions and recommendations drawn from the

results.















CHAPTER II
REVIEW OF RELATED LITERATURE


Since the evaluation of the quality of programs or services offered

by higher education institutions occurs within the general framework of

educational evaluation, the first section of this chapter is a discussion

of concepts of educational evaluation. The decision-oriented approach to

educational evaluation is emphasized because it was the theoretical basis

of this study. The second section of this chapter reviews selected

attempts in higher education to address the issue of quality. The third

section of this chapter is a discussion of factor analysis related to

discovering underlying dimensions in multi-variate assessments.

Educational Evaluation

During the past decade, evaluation in education has become a topic

wide in scope. Many educators have failed to recognize

that evaluation is a complex process requiring a broad perspective (Alkin,

1969). Pyatte (1970) emphasized the importance of evaluators in educa-

tion looking beyond the immediate problems and contemplating the intri-

cate meanings and legitimate functions embodied in evaluation theory.

The dynamics of evaluation compel attention from many perspectives.

This section of the literature review is presented in three parts. The

initial part introduces the concept of educational evaluation through a

discussion of various definitions of educational evaluation. The second

part provides a brief review of educational evaluation with emphasis on

contemporary models of educational evaluation. The final part discusses









the decision-oriented model of educational evaluation--the basis for

this study's approach to the quality issue in higher education.

Toward a Definition of Educational Evaluation

Many definitions of educational evaluation have been proposed stem-

ming from the fact that three different schools of thought regarding ed-

ucational evaluation have coexisted for more than 30 years (Worthen &

Sanders, 1973). Stufflebeam et al. (1971) provided an excellent review

of the three basic approaches to educational evaluation from which most

of the definitions have developed. The first approach was an early one

equating evaluation with measurement (p. 10). The second approach in-

volved the determination of the congruence between performance and objec-

tives, especially behavioral objectives (p. 11). The third approach was

the process commonly referred to as professional judgment (p. 13).

From these basic approaches, various definitions of educational eval-

uation have emerged. These definitions differ in level of abstraction

and often reflect the specific concerns of the persons who formulated

them. At a basic level, evaluation has been defined as "an assessment

of worth" (Popham, 1975, p. 8). Wolf (1979) found this definition need-

ing clarification regarding the meaning of the terms "assessment" and

"worth."

A more descriptive definition was offered by Cronbach (1963), who de-

fined evaluation as "the collection and use of information to make deci-

sions about an educational program" (p. 675). This definition was pro-

posed initially during the curriculum development era of the late fifties.

Cronbach's studies suggested various kinds of information that could be

examined within the evaluation framework and later analyzed and used in

decision making designed for course improvement (Wolf, 1979).









Doll (1970) defined educational evaluation as "a broad and continuous

effort to inquire into the effects of utilizing educational content and

process according to clearly defined goals" (p. 379). In terms of this

definition, educational evaluation must transcend the levels of simple

measurement techniques or the primary application of the evaluator's

values and beliefs. If evaluation is to be a vast and continuous effort,

it must depend on "a variety of instruments which are used according to

carefully ascribed purposes" (Doll, 1970, p. 380).

Beeby proposed an extended definition of evaluation as "the system-

atic collection and interpretation of evidence, leading, as a part of the

process, to a judgment of value with a view to action" (in Wolf, 1979, p.

117). Wolf (1979) developed the important elements of the definition.

First, the term systematic implied that the information needed would be

defined with precision and obtained in an organized fashion. The second

element, the interpretation of evidence, emphasized the role of critical

judgment in the evaluation process. Wolf stated that this element was

often neglected in evaluation activities. The third element of Beeby's

definition involved the judgment of value. This required the evaluator

to be responsible for making judgments from his evaluative work about the

worth of an educational endeavor. The last element, with a view to ac-

tion, introduced the notion that an evaluative undertaking should be de-

signed for the sake of future action (pp. 117-124).

Pyatte (1970) emphasized the importance of a rational plan element in

the definition of educational evaluation. He stated that "evaluation is

the deliberate act of gathering and processing information according to

some rational plan the purpose of which is to render, at some point in

time, a judgment about the worth of that on which the information is









gathered" (p. 306). According to Pyatte, six elements are included:

the agent, the object, the inputs, the plan, the time, and the product.

Bloom, Hastings, and Madaus (1971) defined educational evaluation as:

1. A method of acquiring and processing the evidence needed
to improve the student's learning and the teaching;
2. Including a great variety of evidence beyond the usual
final paper and pencil examination;
3. An aid in clarifying the significant goals and objectives
of education and as a process for determining the extent
to which students are developing in these desired ways;
4. A system of quality control in which it may be determined
at each step in the teaching-learning process whether the
process is effective or not, and if not, what changes must
be made to ensure its effectiveness before it is too late;
5. A tool in educational practice for ascertaining whether
alternative procedures are equally effective or not in
achieving a set of educational ends. (p. 8)

In recent years, the most popular definitions have viewed evaluation

as "a process of identifying and collecting information to assist deci-

sion makers in choosing among available decision alternatives" (Worthen

& Sanders, 1973, p. 20). Since this perspective of evaluation was the

one used in this study, an expanded discussion of it is presented in the

final part of this section of the literature review.

Contemporary Models of Educational Evaluation

With the increased call for accountability in educational institu-

tions, the body of literature on educational evaluation has expanded rap-

idly in recent years. Many models of educational evaluation have emerged.

There have been numerous attempts to categorize the array of models, the

most comprehensive of which were done by Anderson, Ball, and Murphy

(1975), Gardner (1977), Stufflebeam et al. (1971), and Worthen and

Sanders (1973). The more prominent educational evaluation models in-

cluded the measurement model, the congruence model, the professional

judgment model, the goal-free model, and the decision-oriented model

(Gardner, 1977).









The measurement model of evaluation, as described by Gardner (1977),

equated evaluation with measurement (p. 575). In this model, evaluation

is viewed as the science of instrument development and interpretation

(p. 576). The use of measurement instruments results in scores or other

indices which are mathematically and statistically manipulated so masses

of data can be handled and an individual's or a group's score can be com-

pared with established norms (Stufflebeam et al., 1971, pp. 10-11). The model

has been widely used and is illustrated by the use of SAT and GRE scores.

Gardner (1977) stated that the model was based on the assumptions that

the phenomena to be evaluated have significant measurable attributes and

that instruments can be designed which are capable of measuring these

attributes.

Perhaps no other model has received more attention in recent evalua-

tion literature, especially in its application to the classroom, than the

congruence model. The origin of this model is most closely associated

with the work of Tyler (1950). Tyler stated that educational objectives

were essentially defined in terms of expected changes in human behavior.

It followed that evaluation is the process for determining the degree to

which changes in behavior actually take place. Gardner (1977) described

this model as

the process of specifying or identifying goals, objectives or
standards of performance; identifying or developing tools to
measure performance; and comparing the measurement data col-
lected with the previously identified objectives or standards
to determine the degree of discrepancy or congruence which
exists. (p. 577)

Probably the most widely used but least discussed model of evaluation

is the professional judgment model (Stufflebeam et al., 1971, p. 3). In

this model, evaluation is professional judgment. Values or criteria that









form the basis of the judgment may or may not be explicitly stated.

Often a commonly shared value system is assumed (Gardner, 1977, p. 574).

Examples of the uses of this model include the judgments of visiting

teams of professionals in the accreditation process, the use of peer re-

view panels for evaluating various programs, and faculty committees pass-

ing judgments on promotion or tenure (Worthen & Sanders, 1973, pp. 126-

127).

A recent addition to the models of educational evaluation is the goal-

free model. Originally proposed by Scriven (1972, 1973), this model is

based on the argument that if the main objective of evaluation is to

assess the worth of outcomes, then no distinction should be made between

intended versus unintended outcomes and that an evaluation should be con-

ducted without reference to a program's goals or objectives (Gardner,

1977, p. 583). In this model, evaluation is not totally goal free, but

standards for comparison can be chosen from a wider range of possibili-

ties than those that might be prescribed by a program's objectives (p.

584). The final outcome of the evaluation "should be accurate, descrip-

tive, and interpretative information relative to the most important as-

pects of the actual performance, effects, and attainments of the program

being evaluated" (p. 585).

All of the previously discussed models are similar in that they in-

clude reference to the use of some information in making some judgment.

The models vary in the degree to which the role of information or the role

of judgment is emphasized. In the next model to be discussed, where eval-

uation is defined as "the process of delineating, obtaining, and provid-

ing useful information for judging decision alternatives" (Stufflebeam et

al., 1971, p. 4), the emphasis is on the role of information.









Decision-Oriented Model of Educational Evaluation

Stufflebeam and the Phi Delta Kappa National Study Committee have

been credited with the refinement of what Gardner (1977) referred to as

the decision-oriented model of educational evaluation. According to

this model, "evaluators collect information and communicate this infor-

mation to someone else" (Alkin & Fitz-Gibbon, 1975, p. 1). The process

by which this information is collected is systematic and deliberate, an

attempt to obtain an unbiased assessment upon which to base an evaluation

(Alkin & Fitz-Gibbon, 1975; Guba, 1975; Stufflebeam, 1969).

In this model, the results of evaluation are directed toward those

individuals who are "intimately connected with the program being evalu-

ated" (Alkin & Fitz-Gibbon, 1975, p. 1) or the administrative decision

makers (Gardner, 1977; Guba, 1975; Stufflebeam, 1969; Stufflebeam et al.,

1971). The model was designed to benefit decision makers. In this con-

text, the role of the evaluator is to collect and present summary infor-

mation to decision makers (Alkin & Fitz-Gibbon, 1975, p. 5). The evalu-

ators collect and present the information needed by someone else who de-

termines its worth. "Decision-facilitation evaluators view the final de-

termination of merit as the decision maker's province, not theirs"

(Popham, 1975, p. 25). In contrast, Alkin and Fitz-Gibbon (1975) sug-

gested that in a well-designed evaluation the information itself, rather

than a person, would pass judgment (p. 5).

Stufflebeam (1969) viewed evaluation as the science of providing in-

formation for decision making. The assumption was made that the ultimate

goal of the decision-making process was educational improvement. Educa-

tional improvement implied changes resulting from choices selected by de-

cision makers from various alternatives. The process of decision making









or choosing among options is firmly rooted in the decision maker's and

the organization's value systems. In this framework, valid and reliable

information is necessary to facilitate the decision maker's judgment of

the degree to which various options measure up against a personal or or-

ganizational value system (Stufflebeam et al., 1971, p. 38).

Stufflebeam (1968) summarized the rationale for the model in the fol-

lowing statements:

1. The quality of programs depends upon the quality of de-
cisions in and about the program.
2. The quality of decisions depends upon the decision mak-
er's abilities to identify the alternatives which com-
prise decision situations and to make sound judgments
about these alternatives.
3. Making sound judgments requires timely access to valid
and reliable information pertaining to the alternatives.
4. The availability of such information requires system-
atic means to provide it.
5. The processes necessary for providing this information
for decision making collectively comprise the concept
of evaluation. (p. 6)

Alkin (1969) expressed a similar view of evaluation. He stated that

the steps in the process of evaluation included determining the areas of

concern for possible decisions, determining the appropriate data, col-

lecting and analyzing the data, and reporting the summary information in

a form useful for the decision makers. These steps were condensed and

described by Stufflebeam et al. (1971) in their definition of educational

evaluation as "the (process) of (delineating), (obtaining), and (provid-

ing) (useful) (information) for (judging) (decision alternatives)" (p. 40).

Each of the eight elements, set off by parentheses in the definition, has

significant implications for the process and techniques of evaluation.

These elements of evaluation were defined as follows:

1. Process. A particular and continuing activity sub-
suming many methods and involving a number of steps
and operations.









2. Decision alternatives. Two or more different actions that
might be taken in response to some situation requiring
altered action.
3. Information. Descriptive or interpretive data about enti-
ties (tangible or intangible) and their relationships, in
terms of some purpose.
4. Delineating. Identifying evaluative information required
through an inventory of the decision alternatives to be
weighed and the criteria to be applied in weighing them.
5. Obtaining. Making information available through such pro-
cesses as collecting, organizing, and analyzing and through
such formal means as measurement, data processing, and
statistical analysis.
6. Providing. Fitting information together into systems or
subsystems that best serve the purposes of the evaluation,
and reporting the information to the decision maker.
7. Useful. Satisfying the scientific, practical, and pruden-
tial criteria of Chapter I [internal validity, external
validity, reliability, objectivity, relevance, importance,
scope, credibility, timeliness, pervasiveness, and effi-
ciency] and pertaining to the judgmental criteria to be
employed in choosing among the decision alternatives.
8. Judging. The act of choosing among the several decision
alternatives; the act of decision making. (Stufflebeam
et al., 1971, pp. 40-43)

Stufflebeam et al. (1971) contended that evaluation is an extension

of the decision-making process. In this process, the evaluator assists

the decision maker by helping to delineate, in interaction with the de-

cision maker, the information which is needed; by providing that informa-

tion in a useful format to the decision maker; and by assisting the deci-

sion maker in the interpretation of the information. This conceptualiza-

tion of evaluation was used in this study where the making of quality

evaluations about educational programs was defined as the process of

identifying what information about programs is useful to administrators

in making that type of evaluation decision and providing that information

to administrators in a format that facilitates the interpretation of the

information by administrators making such decisions.

While identifying what information is useful for making quality-eval-

uations may be difficult, the presentation of the identified information









in a useful format is equally difficult when multiple items of informa-

tion are involved. This task requires the aggregation of the identified

information into profiles or indices or similar formats useful to admin-

istrators involved in quality-evaluation decision making. Stufflebeam

et al. (1971) pointed out that their theory offered little guidance for

the evaluator in deciding how to provide information in a useful format

(p. 336). Craven (1975) emphasized the information-providing role of an

evaluator in his description of information systems as "any method that

provides the right decision maker with the right information in the right

form at the right time so as to facilitate the decision-making process"

(p. 127). Craven (1975) summarized the importance of an evaluator's in-

formation-providing role with the following statement:

Information that responds to those decision-making needs in a
valid, reliable, and timely manner will assist higher educa-
tional institutions during this period in making decisions that
will maintain and strengthen the quality of its programs and
faculty and will enable them to meet the future educational
needs of students, society, and scholarship. (p. 138)

Selected studies illustrative of these major approaches to evaluation,

including decision-oriented approaches, that have been used in the assess-

ment of quality in higher education are reviewed in the next section of

this chapter.

Quality Assessment in Higher Education

An appropriate summary of a basic problem in assessing quality in

higher education or any other field is provided by the following state-

ment from Pirsig (1974):

Quality . . . you know what it is, yet you don't know what it
is. But that's self-contradictory. But some things are bet-
ter than others, that is, they have more quality. But when
you try to say what the quality is, apart from the things that
have it, it all goes poof! There's nothing to talk about.
But if you can't say what Quality is, how do you know what it









is, or how do you know that it even exists? If no one knows
what it is, then for all practical purposes it doesn't exist
at all. But for all practical purposes it really does exist.
What else are the grades based on? Why else would people
pay fortunes for some things and throw others in the trash
pile? Obviously some things are better than others . . .
but what's the "betterness"? So round and round you go,
spinning mental wheels and nowhere finding anyplace to get
traction. What the hell is Quality? What is it? (p. 184)

During a recent Southern Regional Educational Board Symposium, SREB

President Godwin addressed the problem of defining quality as follows:

Part of our problem in higher education is that too often we
have confused quality with prestige. We need to increase the
understanding that quality education is not a monopoly of a
few dozen major universities in the nation, but is attainable
by all types of higher education institutions. ("Legislators
stress quality improvements," 1980, p. 3)

The president of Brevard Community College in Florida, in a recent mes-

sage to his faculty, had the following comments on educational quality:

Quality in education is not an absolute. It can only be
evaluated in terms of arbitrarily determined standards,
and these in turn depend partly on subjectively formulated
aims and partly on objective statistical procedures. . .
Education is quality education to the extent that it meets
the needs of the people being served. (King, 1981, p. 1)

These two quotes are representative of the general view of quality

in higher education. That view is vague, subjective, and broad. On one

hand, such a view has limited use in that it provides little guidance for

educational improvement. On the other hand, it is a workable approach to

the quality issue, maintaining maximum flexibility to serve the diversity

found in higher education. If by no other means, educators intuitively

recognize a substantial variance in program and institutional quality

among the diverse institutions that comprise the American system of

higher education. Various studies conducted by different researchers for

different reasons in different settings using different methodologies have

resulted in a variety of quality attributes that provide little assistance

in defining quality (Lawrence & Green, 1980).









Selected studies illustrative of the major approaches to quality

assessment in higher education are reviewed in this section of the liter-

ature review. This section is presented in three parts. First, the

major reputational assessments of graduate programs are reviewed. These

studies have formed the basis of attempts to investigate the quality

issue in higher education. Second, an overview is presented of quality

assessment at the undergraduate and two-year college level. Third, se-

lected studies designed to identify quantifiable indicators of quality

are reviewed.

Graduate Education

Beginning with Hughes (1925) and continuing through the prestigious

American Council on Education (ACE) sponsored studies (Cartter, 1966;

Roose & Andersen, 1970), reputational ratings of graduate programs have

constituted the basis of attempts to address the issue of quality in

higher education. The methodology incorporated in a majority of these

studies involved a peer review, in which programs were rated by eminent

faculty in the same discipline. Their ratings reflected the quality of

graduate education and research in the system. These studies attempted

to identify the outstanding research and teaching institutions by program

and they have consistently identified 20 or 30 institutions, virtually

ignoring the balance of the system (Lawrence & Green, 1980, p. 2).

Using a panel of distinguished scholars from each field, Hughes (1925)

conducted the first comprehensive reputational study of graduate programs

in American higher education. At the time of his study, only 65 Ameri-

can universities awarded the doctoral degree. Hughes ranked 38 of these

universities in 20 disciplines according to the number of outstanding

scholars each employed. During the next decade, the number of American









universities awarding the doctoral degree nearly doubled. This prompted

a second study by Hughes (1934) in which 59 universities were ranked in

35 disciplines according to the quality of facilities and staff for the

preparation of doctoral candidates. The stated purpose of both of

Hughes' studies was to educate undergraduate students about various grad-

uate programs. These studies went well beyond this purpose in establish-

ing procedures for quality ratings of the nation's leading institutions

through numerical ranks based upon the informal opinions of academicians.

For the next 20 years, the Hughes studies were regarded as authori-

tative. It was not until Keniston's (1959) work that an attempt was made

to update the Hughes studies. Using department chairmen selected from

the institutional members of the American Association of Universities as

raters, Keniston ranked 24 graduate programs based upon a combined meas-

ure of doctoral program quality and faculty quality. These rankings were

used to produce a rank-ordered list of the top 20 institutions which were

compared with Hughes' results.

The major weakness of the Hughes and Keniston studies, according to

Cartter (1966), was the uncontrolled geographical and rater biases.

Other flaws in these studies noted by Cartter included the failure to

distinguish measures of faculty quality from measures of educational

quality, the failure to account for the biases of raters toward their

alma maters, and the choice of department chairmen as raters. It was

Cartter's opinion that the department chairmen were not necessarily the

most distinguished scholars nor typical of their peers in age, speciali-

zation, or rank. They tended to be more conservative and thus to favor

the traditional institutions.









Cartter's design of the ACE studies accounted for these criticisms.

He took great care to assure the representation of various institutions

and raters from all geographic areas. Cartter surveyed 106 institutions

representing more than 1,000 graduate programs in 29 disciplines. The

more than 4,000 survey respondents included senior and junior scholars as

well as department chairmen. From a list of the institutions in alpha-

betical order, the respondents were requested to rate each doctoral pro-

gram in their area of study on two components: quality of graduate fac-

ulty and effectiveness of the doctoral program. To support the represen-

tativeness of the raters, the respondents were requested to supply basic

biographical information. The leading departments were ranked separately

on the basis of the raters' responses on each of the components. In most

disciplines, the rankings by each component were very similar. Where the

discipline areas overlapped, Cartter compared his rankings with those of

Hughes (1925) and Keniston (1959). Cartter found a high correlation be-

tween his rankings and objective institutional measures such as faculty

salaries, library resources, and publication indices. His rankings cor-

related highly with Bowker's (1964), who used enrollment of graduate

award recipients in institutional programs as a criterion. Cartter used

these relationships as a primary point in his support of peer ratings for

quality assessment.

The 1970 ACE-sponsored Roose-Andersen study essentially replicated

Cartter's study. The Roose-Andersen study included 130 institutions

across 29 disciplines. The ratings were based upon the same two compon-

ents Cartter used in 1966: quality of graduate faculty and effectiveness

of the doctoral program. The Roose-Andersen report presented ranges of

raters' scores rather than absolute raw departmental ratings and ranges









of quality instead of specific institutional rankings. Even with these

changes, the results of the Roose-Andersen study were very similar to

those of the Cartter study (1966). Using the reputational rating pro-

cedures refined by the ACE studies, other researchers produced similar

program or institutional rankings based on the two ACE criteria or simi-

lar criteria (Carpenter & Carpenter, 1970; Cartter & Solmon, 1977; Cole

& Lipton, 1977; Cox & Catt, 1977; Gregg & Sims, 1972; Margulies & Blau,

1973; Munson & Nelson, 1977).

Lawrence and Green (1980) discussed the weaknesses in reputational

ratings, the most apparent being their lack of agreement on the meaning

of quality. The definition of quality varied according to disciplines,

program areas, and individual raters. The lack of agreement on a defini-

tion of quality made program or institutional comparisons nonsensical.

Lawrence and Green expressed the opinion that higher education was far

too complex to rate on the basis of one or two dimensions. They stated

that

the ratings represent the subjective judgments of faculty and
that they probably reflect prestige rather than quality . . .
and high prestige is translated to mean educational excellence.
As a result, research and scholarly productivity are emphasized
to the exclusion of teaching effectiveness, community service,
and other possible functions; undergraduate education is deni-
grated; and the vast number of institutions lower down in the
pyramid are treated as mediocrities, whatever their actual
strengths and weaknesses. (pp. 15-16)

Dolan (1976) criticized the reputational approach because it tended

to maintain the status quo. Dolan expressed the opinion that subjective

ratings of program quality reflected elitist and traditionalist views of

higher education that stifled or restricted change and innovation. Dolan

believed that increasing consumer awareness in higher education demanded

student involvement in any attempt to rate graduate programs.









Blackburn and Lingenfelter (1973) defended the ACE reputational rat-

ings on the following grounds:

(1) Panel bias has been largely eliminated by the careful se-
lection procedures of the ACE studies; (2) subjectivity cannot
be escaped in evaluation no matter what technique is used; (3)
professional peers are competent to evaluate scholarly work,
the central criterion in reputational studies; and (4) although
not a sufficient condition of general excellence, scholarly
ability is necessary for a good doctoral program. (p. 25)

Webster (1981) pointed out that the process usually produced results with

face validity in that those programs or institutions considered to be of

high quality by the educated general public were often rated highly.

Regardless of the criticisms or defenses of the reputational rating

approach, none of the studies that have been cited have investigated spe-

cifically what information was useful for assessing the quality of gradu-

ate programs. Only one study of graduate education quality was found

that investigated this topic. The Council of Graduate Schools (CGS) and

the Educational Testing Service (ETS) sponsored a study that involved 73

departments divided among three fields--psychology, chemistry, and his-

tory--that were surveyed with the purpose of determining what information

to use to assess quality (Clark, Hartnett, & Baird, 1976). Four major

conclusions resulted from this study. First, it was determined that

timely, relevant, and useful information (program characteristics) re-

lated to educational quality could be reasonably obtained. Second,

approximately 30 program characteristics were identified as especially

useful. Third, these program characteristics appeared to be applicable

across diverse program areas. Fourth, two clusters of program character-

istics were identified: research-oriented indicators and educational-

experience indicators. The research-oriented indicators included depart-

ment size, reputation, physical and financial resources, student ability,









and faculty publications. The educational-experience indicators were

concerned with the educational process and academic climate, faculty in-

terpersonal relations, and alumni ratings of dissertation experiences.

The CGS-ETS study used faculty, students, and alumni input in a sep-

arate peer-rating component of the study similar in approach to the ACE

studies. One finding of this component of the study was that reputa-

tional ratings of graduate programs had little relationship to teaching

and educational effectiveness as measured by the input of students and

alumni. Clark et al. (1976) concluded that the peer ratings were based

primarily on scholarly publications with little or no emphasis on the

quality of instruction.

The CGS-ETS study demonstrated that information useful for determin-

ing educational quality could be identified. Furthermore, that study

demonstrated that the information identified as useful consisted of mul-

tiple indicators of quality that appeared to be applicable across program

areas. This is supportive of the view taken in this study that the mul-

tiple indicators of quality identified in the IRC project (Steuart &

Rathburn, 1982) were representative of some underlying structure of the

multiple indicators of quality, the dimensions of which should remain in-

variant across program areas. The Clark et al. (1976) study and the IRC

study (Steuart & Rathburn, 1982) defined several dimensions of quality

based upon the program characteristics identified in the respective stud-

ies as useful in assessing program quality. However, the dimensions were

defined in both studies on the basis of the perceived similarity of the

content of clusters of program characteristics and were not defined by the

utilization of the technique of factor analysis as was done in this study.









Undergraduate Education

Although considerably fewer studies have been conducted to assess

quality at the undergraduate level than at the graduate level, the stud-

ies rating undergraduate education have demonstrated that colleges differ

substantially in traditional measures of quality. Jordan (1963), in a

study involving undergraduate programs, found that those institutions

that spent more on salaries for library staff and had higher numbers of

library volumes per student tended to score higher on a quality index

based upon multiple weighted factors. Brown's (1967) study of undergrad-

uate education ranked colleges on the basis of eight criteria including

total current income per student, proportion of students entering gradu-

ate school, proportion of graduate students, number of library volumes

per student, total number of full-time faculty, faculty-student ratio,

proportion of faculty with doctorate, and average faculty compensation.

These two studies represented approaches to undergraduate quality assess-

ment similar to those utilized for graduate programs. Lawrence and Green

(1980) expressed the opinion that these and similar studies (Dube, 1974;

Krause & Krause, 1970; Tidball & Kistiakowski, 1976) that used quality

measures more typically associated with graduate quality assessment (e.g.,

publication record of students, percent of students who finish profes-

sional schools or terminal graduate degrees, etc.) failed in their pur-

pose because they did not take into account the "special nature of the

undergraduate experience" (p. 33).

Astin, through a series of studies (1965, 1971; Astin & Henson, 1977)

approached one specific aspect of undergraduate quality that he termed

the selectivity index. Astin (1971) defined the selectivity index as a

relative measure of the academic ability of a college's entering freshmen.









In another study involving the selectivity index, Astin and Henson (1977)

used ACT and SAT scores to approximate the selectivity of all accredited

two- and four-year institutions. Astin and Henson defended their approach

on the basis of its acceptance by the mainstream of faculty and administra-

tors in higher education (p. 2). The validity of the approach was sup-

ported by its positive correlations with selected institutional character-

istics such as student-faculty ratios (Astin & Solmon, 1979).

In a related study, Astin developed further the selectivity index by

examining the preferences of academically talented students for various

institutions (Astin & Solmon, 1979). Although they realized that this

measure was confounded by a number of variables such as institutional pop-

ularity and regionalism, Astin and Solmon maintained that a measure of an

institution's drawing power for highly able students was a valid quality

measure (p. 47).

In a later study of undergraduate education quality, Astin and Solmon

(Astin & Solmon, 1981; Solmon & Astin, 1981) expanded their view of qual-

ity. They utilized faculty members representing seven disciplines from

institutions in four states (California, Illinois, New York, and North

Carolina) to rate institutions from a national list and a state list.

The state list included those institutions in a rater's state that

awarded a minimum of five undergraduate degrees in a rater's field during

1977. The national list was composed of 100 of the "most visible insti-

tutions in the rater's field" (Astin & Solmon, p. 14). Each rater was

asked to evaluate each institution from both lists according to six qual-

ity criteria including overall quality of undergraduate education, prep-

aration of students for graduate and professional school, preparation of

students for employment after college, faculty commitment to undergraduate









teaching, scholarly or professional accomplishments of faculty, and inno-

vativeness of curriculum and pedagogy (p. 24).

Utilizing a factor analysis of the mean ratings on each of the qual-

ity criteria for each of the undergraduate disciplines, Astin and Solmon

(1981) concluded that

these ratings showed that the seven fields form a single "overall
quality" dimension. In practical terms, this means that quality
differences among fields at a given institution tend to be mini-
mal, and that ratings of one department may suffice as an estimate
of the quality in the other departments at the institution. (pp.
14-15)

Considering that only six quality criteria were used in the study, the

conclusion appeared warranted.

Probably the best known studies of undergraduate quality, the Gourman

studies (1967, 1977), provided little explanation of the procedures used

to arrive at the reported ratings. Scores on two sets of variables--

strength of an institution's academic departments and quality of nonde-

partmental areas--were used to produce an average academic department

rating, an average nondepartmental rating, and an overall "Gourman rating"

for each institution.

Although the Gourman ratings were accepted as a viable measure of un-

dergraduate quality, several of the assumptions used in the ratings were

questionable. Gourman assumed that, at minimum, 10 years were required

following graduation to produce an excellent classroom teacher and thus

rated older faculty higher. Gourman gave equal weight to faculty effec-

tiveness, public relations, library, a college's alumni association, and

the athletic-academic balance as measures of institutional quality.

Gourman held a bias toward larger institutions, consistently rating them

higher than smaller liberal arts colleges (Lawrence & Green, 1980). In










1977, Gourman changed the format of his ratings, making it similar to

that of the 1970 Roose-Andersen study. Gourman rated 68 undergraduate

programs in 1977, again providing no information on the procedures used

in developing the ratings.

Utilizing approaches such as those discussed, other researchers have

addressed the issue of quality in undergraduate education (Johnson, 1973;

Nichols, 1966; Solmon, 1975). Other, possibly less academic, attempts to

evaluate undergraduate quality included the popular college guides (e.g.,

Hawes Comprehensive Guide to Colleges, 1978). Webster (1981) criticized

many of these attempts on the basis of their limited view of the under-

graduate experience. Central to his criticism was the lack of emphasis

on undergraduate teaching in preparation for the job market and the over-

riding view of undergraduate programs serving primarily as preparatory

periods for graduate study.

Very little research has been conducted in the community/junior col-

lege setting in relation to the quality issue. In general, many of the

premises underlying traditional views of quality in higher education run

in opposition to the basic principles of the community college philosophy.

An example of this is the discrepancy between the selectivity index (Astin

& Solmon, 1979) and the open door admission policy of most community col-

leges.

One of the more quoted studies of educational quality in the community

college setting involved the identification of quality indicators from

peer opinions expressed in evaluations of selected junior colleges during

accreditation team visits (Walters, 1970). Walters identified 58 specific

indicators from a list of more than 500 recommendations made in accredita-

tion team reports on 126 public junior colleges from 1960 to 1968. Most









of the indicators related to college procedures, the efficiency of oper-

ations, staffing levels, and organizational structure. Walters postu-

lated that the 58 indicators, taken collectively, described a quality

public junior college. Only two of the indicators were based on any

specific quantitative measures. Another study of educational quality

in the two-year college, the Pike study (1963), involved an analysis of

the relationship of current expenditures, enrollment, and expenditures

per student to certain variables associated with educational quality in

junior colleges in Texas.

The IRC project (Steuart & Rathburn, 1982), which generated the data

used in the present study, surveyed 631 administrators representing 24

of Florida's public community colleges to determine what information

was perceived as useful in making decisions about the quality of pro-

grams or services offered by their colleges. In that project, the ad-

ministrators rated 434 program characteristics for degree of usefulness

in quality-evaluation decision making. More than 100 program character-

istics were identified as highly useful. The program characteristics

identified as highly useful were organized on the basis of perceived

similarity of content into 12 types of information including the need

for and structure of a program, program size, program costs, program

utilization rates, support services related to a program, information

on students entering a program, information on students currently en-

rolled in a program, information on faculty or staff associated with a

program, information from external or internal evaluations of a program,

quantitative outputs of a program, ratings of a program by various types

of raters, and information on students transferring from a program to

upper division (pp. 68, 145-146).









Similar to most of the studies of quality in graduate education,

none of the studies of quality in undergraduate education except the

IRC project (Steuart & Rathburn, 1982) investigated the usefulness in

quality-evaluation decision making of the various quality indicators

used in the studies. Also, although multiple program characteristics

have been used as indicators of quality, no study has attempted to iden-

tify any underlying dimensions for the multiple indicators except Astin

and Solmon (1981). Although the indicators of quality in the Astin and

Solmon study were so broad and so few that the dimensions identified are

probably spurious, they did demonstrate the use of the factor-analytic

technique in identifying underlying dimensions of indicators of quality.

Quantifiable Approaches to Quality

In recent years, higher education researchers have explored numerous

ways of providing objective measures of educational quality. Many of

these attempts have involved correlating various quantifiable measures

with established rankings of institutional quality. These measures in-

cluded, among others, institutional size (Elton & Rose, 1972; Hagstrom,

1971), research productivity (Drew, 1975; Wispe, 1969), publication pro-

ductivity (Lewis, 1968), amount of money spent (Ousiew & Castetter,

1960), and number of library volumes (Lazarsfeld & Thielens, 1958).

Many of these "correlates of prestige" (Lawrence & Green, 1980, p. 23)

used the popular ACE ratings as their basis for comparison. Cartter

(1966), anticipating the identification of quantifiable quality indica-

tors in his ratings, stated that such indicators "are for the most part

'subjective' measures once removed" (p. 4).

The list of factors that positively correlated with reputational quality

ratings was lengthy. Blackburn and Lingenfelter (1973) listed the









following items as being positively correlated with the 1966 ACE rat-

ings:

1. Magnitude of the doctoral program.
2. Amount of federal funding for academic research and de-
velopment.
3. Non-federal current fund income for educational and gen-
eral purposes.
4. Baccalaureate origins of graduate fellowship recipients.
5. Baccalaureate origins of doctorates.
6. Freshman admissions selectivity.
7. Selection of institutions by recipients of graduate
fellowships.
8. Postdoctoral students in science and engineering.
9. Doctoral awards per faculty member.
10. Doctoral awards per graduate student.
11. Ratio of doctorate to baccalaureate degrees.
12. Compensation of full professors.
13. The proportion of full professors on a faculty.
14. Higher graduate student/faculty ratios.
15. Departmental size of seven faculty members or more.
(p. 11)

Fotheringham (1978) described traditional quality indicators as in-

cluding context, faculty input, faculty-student interaction, and student

input. Fotheringham defined context as "the setting for the educational

process" (p. 17). The context variables included number of library vol-

umes, administrative policies, physical facilities, and similar varia-

bles. Pike (1963), in his study of the relationship between 72 varia-

bles associated with educational quality and enrollment, current

expenditures, and expenditure per student, found expenditures to be the

most important measure of context. Banghart, Kraprayoon, and Clewell

(1978) identified other context variables including curriculum, admini-

strative practices, and amount of external funding.

Meder (1955) defined faculty input as including an instructor's

training, skill, ability, and morale. Blackburn and Lingenfelter (1973)

included degrees, awards, faculty compensation, and post-doctoral stud-

ies as indicators of faculty input. Other faculty input indicators









included research productivity (Hagstrom, 1971), publication productiv-

ity (Somit & Tanenhaus, 1964) and faculty size (Balderston, 1970). The

faculty input indicators identified as most difficult to measure in-

cluded faculty morale, vigor, cohesion, and progressiveness that

Balderston (1974) suggested could only be measured subjectively.

Faculty-student interaction has been traditionally defined as the

faculty-student ratio (Meder, 1955). That definition has been expanded

to include the accessibility of the faculty (Roose & Andersen, 1970) as

well as the extent and nature of the faculty contact with students

(Fotheringham, 1978).

Student input indicators of quality have often been held as the most

valuable type of indicator. Fotheringham (1978) defined student input

as the characteristics of the student at the time of admission.

Blackburn and Lingenfelter (1973) proposed a more comprehensive defini-

tion simply as the students' quality. Many researchers concluded that

not enough has been done to control for variations in student input in-

dicators when measuring various outcome indicators of quality (Richards,

Holland, & Lutz, 1966; Rock, Centra, & Linn, 1969).

Fotheringham (1978) cited three more categories of quality indica-

tors that he labeled output, student change, and intellectual climate.

Output was described as including both faculty output (publications and

other productivity measures) and student output (accomplishments of stu-

dents following graduation). Variability in the specific measures used

to assess output indicators was reflected in the work of Keller (1969)

and Lawrence, Weathersby, and Patterson (1970).

The student change indicators related to the extent of learning that

took place during the students' enrollment (Turnball, 1971). Ostar









(1973) described this as the value-added concept. It was his opinion

that in the assessment of the development of students, specific atten-

tion should be given to their initial abilities and their goals. Meas-

ures of student change included post-graduate employment, personal

achievements, motivation, and achievements in graduate school according

to Fotheringham (1978).

Fotheringham (1978) defined intellectual climate as "an attitude

toward learning and scholarship shared by students, faculty, and admin-

istration" (p. 26). Several researchers have expressed the opinion that

campus climate is of primary importance in assessing institutional qual-

ity (Astin, 1963; Boyer, 1964; Bowen, 1963). Indicators in this cate-

gory included both academic attributes, such as faculty concern for

scholarship, and non-academic attributes such as students' residential

experience, democratic participation of the students in campus affairs,

and counseling or other supplementary services.

Although multiple quantifiable indicators of quality have been iden-

tified in these studies, none of the studies investigated the possibil-

ity of identifying underlying dimensions of the multiple indicators to

facilitate providing information to decision makers in a format useful

in quality-evaluation decision making. The IRC study (Steuart &

Rathburn, 1982) included some program characteristics representative of

many of these quantifiable indicators of quality, which is another rea-

son the data from that study provided an excellent opportunity for iden-

tifying underlying dimensions for information useful in program quality-

evaluation decision making. A discussion of the utility of the tech-

nique of factor analysis for identifying any underlying dimensions of a

multi-variate data set is presented in the next section of this chapter.









Determining Underlying Dimensions: Factor Analysis

In the decision-oriented model of evaluation as described by

Stufflebeam et al. (1971), once the information useful for making an

evaluation has been determined in interaction with the decision maker,

that information should be provided to the decision maker in a format

useful to the decision maker. If relatively few items of information

are involved, then the means of providing the information in a useful

format would appear relatively straightforward. However, from the re-

view of selected studies on quality evaluation in higher education, mul-

tiple indicators of quality have been identified. In the IRC study

(Steuart & Rathburn, 1982), more than 100 program characteristics were

identified as highly useful in making quality-evaluation decisions.

Providing such a wide array of information in a format useful to a de-

cision maker is a problem. Craven (1980) indicated that "providing the

desired information in an appropriate format" (p. 111) is a major con-

cern if evaluation processes are to effectively address the higher edu-

cation issues of the 1980s.

Applicability of Factor Analysis

The situation of administrators in higher education attempting to

use multiple indicators of quality when making quality judgments about

programs or services is similar to the situation psychologists faced

when evaluating human personality: interpreting multiple measures to

describe or evaluate a person (Harman, 1976, p. 4). This was the con-

text for the origin of factor analysis in psychology. It was developed

as a technique to determine dimensions of personality that would facili-

tate the evaluation of personality (Cattell, 1950, pp. 26-27). Although

it was developed within the field of psychology, the mathematical









techniques involved are not limited to psychological applications

(Harman, 1976, p. 4). Cattell (1966) stated that the use of factor

analysis was particularly advantageous where "the number of variables

to be watched over and thought about is bewilderingly large . . . [and

where] there has been little success after several years in reaching

agreement on the major concepts [in the area of inquiry]" (p. 175).

Both of these criteria appear to apply to the field of quality evalua-

tion in higher education. Burt (in Cattell, 1966) has stated that the

primary aim of factor analysis is "to discover principles of classifica-

tion [of individuals or variables]" (p. 268).

Because the technique of factor analysis originated in the field of

psychology, applications of factor analysis were primarily in that field

until increasing access to computers facilitated wider use of the

technique (Harman, 1976, p. 7). Harman (1976) has col-

lected more than 200 studies using factor analysis in fields other than

psychology including such diverse fields as economics, medicine, the

physical sciences, political science, sociology, and regional science

(p. 7). Also, he cited a number of taxonomic applications in fields
other than psychology (pp. 7-8). Harman stated that

Unlike the field of psychology, in which theory has been pri-
mary and the factor-analytic model has been used to test and
modify such theory, the application of factor analysis in the
areas noted has been exploratory, almost exclusively, in the
hope of bringing order out of the relationships among the many
variables that could now be investigated with the aid of the
computer. (p. 8)

Guertin and Bailey (1970) suggested numerous applications for fac-

tor analysis in the field of educational psychology (Chapter 14). The

pervasiveness of its use in research in higher education is indicated

by the numerous entries under the subject heading "factor analysis" in









each issue of Resources in Higher Education published by the Educational

Resources Information Center (ERIC). The following recent studies in

higher education are cited because, as in this study, factor analysis

was used for discovering dimensions or categories among a set of varia-

bles.

Smart (1975) used the technique in a survey of students, faculty,

and administrators to determine salient dimensions of 47 institutional

goals rated by respondents for degree of importance to a college. In a

survey of a stratified random sample of 722 Minnesota citizens, Biggs,

Brown, and Kingston (1977) used factor analysis to determine "categories

of educational values" (p. 157) from respondents' ratings of the impor-

tance of various university goals and activities, the importance of var-

ious academic fields, and the importance of various reasons for students

attending the University of Minnesota. During the development of a

model for evaluating educational innovations, Bess and Hayes (1970)

used factor analysis as a means "of assembling meaningful clusters of

student characteristics into subcultures" (p. 44) from students' re-

sponses to a questionnaire that was devised to measure a combination of

student personality characteristics, value orientations, attitudes,

goals, perceptions, and behaviors. In a study to investigate the pos-

sibility of clustering academic departments on dimensions that could

provide an equitable basis for departmental funding, Dressel and Simon

(1976) used factor analysis on 35 descriptive variables representing

various characteristics of the instructional load and output of academic

departments to determine the dimensions for grouping the departments.

At the University of Toledo, a study was done with an objective

very similar to the objective in this study (Perry & Lind, 1976). In









the Perry and Lind study, factor analysis was used on the ratings by

140 department chairpersons and 272 program graduates of the importance

of 33 criteria in evaluating academic programs to determine "what latent

factors or dimensions were involved in the data" (p. 20). In their most

recent reputational study of undergraduate educational quality, Solmon

and Astin (1981) used factor analysis to determine patterns among the

ratings of seven discipline areas in selected American undergraduate in-

stitutions by faculty representing undergraduate institutions in four

selected states.

Each of these studies is illustrative of the use of factor analysis

for discovering categories or dimensions of an underlying pattern within

a set of variables. It appeared appropriate in this study to use fac-

tor analysis to determine the underlying dimensions of the multiple in-

dicators of quality identified in the IRC project (Steuart & Rathburn,

1982) to use in developing guidelines for organizing the identified in-

formation into a format useful to administrators in making quality-eval-

uation decisions about programs.

Definition of Factor Analysis

Spearman is generally credited with the origin of factor analysis

in his development of a psychological theory involving the specification

of a general factor and a number of specific factors related to describ-

ing general intelligence: the two-factor theory (Harman, 1976, p. 3).

Finding Spearman's theory insufficient to describe a battery of psycho-

logical tests, other psychologists explored the possibility of extract-

ing several general or common factors from a matrix of correlations

among tests. These explorations led to the development of multiple-

factor analysis (Harman, p. 4).









The principal concern of factor analysis is the resolution of a set

of variables into a smaller number of categories or "factors." The

resolution is accomplished by analysis of the correlations among the

variables within the set. A satisfactory resolution produces a set of

factors (or categories or dimensions or variables) smaller than the

original set of variables that conveys the essential information of the

original set of variables. Thus, "the chief aim [of factor analysis]

is to attain scientific parsimony or economy of description" (Harman, p.

4). Economy of description is precisely the goal in providing to deci-

sion makers in a useful format the information represented by multiple

indicators of quality. As Fox (1969) stated, factor analysis is a pro-

cedure for "identifying the underlying structure of the interrelation-

ships expressed in the correlational matrix [of a set of variables]"

(p. 216). The procedure estimates the minimum number of separate vari-

ables or dimensions, called factors, necessary to provide the informa-

tion contained in the correlation matrix (Fox, p. 216).

Steps in Factor Analysis

Fox (1969) described the procedure of factor analysis as typically

involving a five-step process (pp. 216-218). The first step is to iden-

tify the variables to be studied. The second step is to create a matrix

of correlations expressing the correlation between each pair of variables

in the set of variables being studied. The third step is "to put this

matrix through the first computational process of factor analysis that

produces what is called an unrotated matrix of principal components,

from which the minimum number of separate factors required to account

for the data can be identified" (p. 217). A full description of the

calculation procedures is presented in Harman (1976).









Harman (1976) described two basic approaches to the calculations in-

volved (pp. 14-15). Within the framework of the linear mathematical

model used in factor analysis, the calculations can either extract the

maximum variance or best reproduce the observed correlations (p. 14).

The method for the reduction of a large body of data so that the maxi-

mum variance is extracted was first proposed by Pearson and later de-

veloped as the method of principal components or component analysis

(p. 14). In contrast to the maximum variance approach is the classical

factor-analysis model developed to maximally reproduce the correlations.

It is generally called common-factor analysis because each of the ob-

served variables involved in the analysis is defined linearly in terms

of a number of common factors and a unique factor (p. 15). "The common

factors account for the correlations among the variables, while each

unique factor accounts for the remaining variance (including error) of

that variable" (p. 15). The common-factor analysis approach was used

in this study because the intent was to determine as clearly as possible

the dimensions (interrelationships) among the variables involved and

not to determine the amount of variance attributable to a variable or a

group of variables (See Guertin & Bailey, 1970, pp. 82-83).

The method of calculation generally used for common-factor analysis

was described by Thurstone and has been labeled the "principal axes so-

lution" (in Guertin & Bailey, 1970, p. 61). The essential difference

between the methods is whether in the mathematical computations unities

are inserted in the diagonal of the correlational matrix (component

analysis) or whether "communalities" are inserted (common-factor analy-

sis) (Harman, 1976, p. 70). According to Guertin and Bailey (1970),

the use of unities in the diagonal of the correlation matrix causes the









intercorrelation matrix to take on a higher rank than it would with val-

ues less than unity in the diagonal (p. 33). Since the object in fac-

tor analysis is to find the minimum number of factors or dimensions or

variables necessary for economy of description of the total set of var-

iables, values less than one are desired in the diagonal (Guertin &

Bailey, p. 33). The values less than one in the diagonal are called

"communalities." The communalities express the amount of the common-

factor variance (the variance shared with all the other variables in

the analysis) (Guertin & Bailey, p. 33). The correlation matrix with

communalities rather than unities in the diagonal is called the reduced

intercorrelation matrix (Guertin & Bailey, p. 33).

One of the problems encountered in common-factor analysis is that

the appropriate communalities are not easily computed with precision

and various methods of estimating them have been developed. The best

estimate of the communalities appears to be the squared multiple corre-

lations of each variable with the remaining variables (Guertin, 1977, p.

21). On the other hand, Harman (1976) stated that "it matters little

what values are placed in the principal diagonal of the correlation ma-

trix when the number of variables is large (say, n > 20)" (p. 86), be-

cause the number of values in the diagonal is relatively small compared

to the many values off the diagonal so the factorial results are little

affected (p. 86). However, the use of communalities in the diagonal

prior to factor extraction makes possible the obtaining of the maximum

amount of common-factor variance, a chief emphasis of common-factor

analysis (Guertin, 1977, p. 22).
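
As an illustration only (not part of the original analysis, which used the
BMDP and SAS packages), the following minimal Python sketch computes the
squared multiple correlations from a correlation matrix R through the
standard identity SMC_i = 1 - 1/(R^-1)_ii and inserts them in the diagonal
to form the reduced intercorrelation matrix. The function names are
illustrative, not taken from any package used in the study.

    import numpy as np

    def smc_communalities(R):
        # Squared multiple correlation of each variable with all the
        # other variables: SMC_i = 1 - 1 / (R^-1)_ii.
        return 1.0 - 1.0 / np.diag(np.linalg.inv(R))

    def reduced_intercorrelation_matrix(R):
        # Replace the unities in the diagonal with the communality
        # estimates, yielding the reduced intercorrelation matrix.
        R_reduced = np.array(R, dtype=float, copy=True)
        np.fill_diagonal(R_reduced, smc_communalities(R))
        return R_reduced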

Once the principal axes factors have been extracted from the reduced

intercorrelation matrix through the processes involved in step three,










they can then be rotated to gain the clearest view of the common-factor

space or configuration. This is step four of the factor-analysis pro-

cess described by Fox (1969, p. 217). Rotation is performed mathema-

tically, but the concept of rotation is based upon geometry. A clear

description of the relationship may be found in Guertin and Bailey

(1970, pp. 26-34 and Chapter 6). The reason for rotation is that al-

though the initial factors may be mathematically satisfactory solutions,

the factors themselves may have little meaning relative to determining

constructs or principles of concern to the investigator (Guertin &

Bailey, 1970, pp. 87-88).

Since the principal axes method extracts the maximum possible common

variance, the primary decision in rotation becomes that of determining

the number of principal axes to carry into rotation to gain the clearest

picture of the common factors (Guertin, 1977, p. 22). At this point in

the factor-analysis process another major problem is encountered: what

criterion or criteria to use in deciding how many factors to carry into

rotation (Guertin & Bailey, 1970, Chapter 7). Guertin (1977) stated that

the universally accepted criterion is Thurstone's principle of simple

structure, which yields factors that are relatively invariant across

studies (p. 22). Guertin and Bailey (1970)

asserted that the simple structure criteria not only provide a unique

solution but at the same time assure meaningful factors (p. 42). In

simplest terms, the concept of simple structure dictates that both var-

iables and factors be described by a minimum number of sizable loadings

(Guertin, 1977, p. 22). In reference to the matrix representation of

factors (columns) and variables (rows), the concept of simple structure

specifies that the columns (factors) should have the largest possible









number of zero or negligible loadings (values), the rows (variables)

should have the largest possible number of zero or negligible loadings

(values), and every pair of columns (factors) should have the largest

possible number of values approaching zero in one column (factor)

(Guertin & Bailey, 1970, p. 99). The ideal situation would be to have

each variable have a high loading on only one of the factors and for

each factor to have only a few variables with high loadings with all the

other variables having loadings approaching zero on that factor (Guertin

& Bailey, 1970, p. 98).

To approximate the ideal of simple structure for a given factor ma-

trix, the factors may be rotated in either an oblique or an orthogonal

fashion (Guertin, 1977, p. 22). As with the term rotation, these terms

reference a geometric perspective. Conceiving of the factors as dimen-

sions (vectors), an orthogonal rotation assumes that the factors are un-

related and places the factors (vectors) in relation to each other at

90-degree angles. An oblique rotation is not held to that criterion. Accord-

ing to Guertin and Bailey (1970), with the use of real data, true simple

structure must provide for correlated factors so an orthogonal represen-

tation of factor space is unsatisfactory (p. 100). They recommend the

use of the oblique rotation procedures and if that results in factors

that are only slightly correlated, then an orthogonal rotation may be

performed (p. 101). It is their opinion that it is necessary to use

oblique rotation procedures to properly represent underlying dimensions

or factors of a set of variables (p. 89).

The utilization of rotation to identify simple structure completes

step four of the factor-analysis process as outlined by Fox (1969, p.

217). The resulting matrix is the factor pattern and the values forming









this matrix are called the factor loadings (Fox, 1969, p. 217; Harman,

1976, p. 15). The loadings have the same characteristics as correla-

tion coefficients in that they are two-digit decimal numbers in the

range of +1.00 to -1.00 through a midpoint of zero. A variable can have

a positive or negative loading on a factor and the sign indicates

whether the factor operates to raise or lower the value of that particu-

lar variable (Fox, 1969, pp. 217-218). The magnitude of the loading in-

dicates the importance of the factor on each variable (p. 218).

The fifth and final step in the factor-analysis process as outlined

by Fox (1969) is for the researcher to label the factors (p. 218). Gen-

erally, this involves determining the variables that have relatively

high loadings on a factor and then abstracting a term or concept that

reflects the content of these variables (p. 218; see also Guertin &

Bailey, 1970, p. 87).

This description of factor analysis has presented only the salient

features of the process related to this study. A thorough discussion

of factor analysis may be found in Harman (1976). For the less mathe-

matically inclined person, Guertin and Bailey (1970) present an excel-

lent description of factor analysis.















CHAPTER III
METHODOLOGY


The Problem

The problem in this study was the identification of any underlying

dimensions within the multiple quality indicators rated by administra-

tors in Florida public community/junior colleges as highly useful in

making program quality-evaluation decisions. The research questions

were: (1) What is the "best" factor structure for the usefulness rat-

ings? (2) For the identified "best" factor structure, are there signif-

icant differences in the mean factor scores between classifications of

respondents by program area and between classifications of respondents

by administrative area?

Description of Data Used

The data used in this study were generated in the IRC project

(Steuart & Rathburn, 1982). A full description of the methodology used

in that project is in Appendix B.

The survey population consisted of all administrators in Florida

public community/junior colleges who were classified by their institu-

tions as executive, administrative, or managerial personnel under part

three of the "Personnel and Salary Report (SA-1)" as defined in the

Community College Management Information System Procedures Manual of the

State of Florida (Division of Community Colleges, 1980, pp. 10.1-10.2).

There were 631 administrators identified and 450 respondents represent-

ing 24 of Florida's 28 public community/junior colleges for a response

rate of 71.3% (Steuart & Rathburn, 1982, p. 49).









The responding administrators rated 434 program characteristics,

contained in a survey questionnaire (Appendix C), for degree of useful-

ness in program quality-evaluation decision making. The rating scale

was



1 = ESSENTIAL ("I do not see how I could make a judgment about
the quality of a program without considering this charac-
teristic.")

2 = VERY USEFUL ("I would feel hindered in making a judgment
about the quality of a program without considering this
characteristic, but I would make a judgment without it.")

3 = SOME USEFULNESS ("Although I would like to consider this
characteristic in making a judgment about the quality of
a program, I would not feel hindered in making a judgment
without it.")

4 = LITTLE OR NO USEFULNESS ("I probably would not consider
this characteristic in arriving at a judgment of the
quality of a program.") (Steuart & Rathburn, p. 144)

Also, any program characteristics that were considered "not applicable"

by the raters were rated with a "4" (Steuart & Rathburn, p. 144).

Each respondent was assigned a "position code" (p. 46) based upon a

self-reported position title on each questionnaire. The position codes,

a description of the position titles associated with each code, and fre-

quencies of respondents for each code are reported in Appendix D.

The program areas and administrative areas used to classify the re-

sponding administrators were defined as follows:

Program Areas
Advanced and Professional Program Area--commonly referred to as
university parallel, the first two years of a baccalaureate pro-
gram.
Occupational Program Area--or vocational-technical education,
terminal certificate or degree programs preparing students for
employment in a specific trade or field.
Community Instructional Services Program Area--programs of
short, credit or noncredit classes designed to provide enrich-
ment for students.









Developmental Program Area--or compensatory education, designed
to assist students in improving deficient basic skills necessary
for program-required work.
Student Services Program Area--various auxiliary services pro-
vided to students facilitating their progress through one of
the program areas including such services as counseling, student
activities, admissions, financial aid, etc.
Administrative Areas
General Administration--respondents with responsibilities of a
general nature in the operation of the college's programs or
services.
Academic Affairs--respondents with responsibilities of adminis-
tering one or more of the college's academic programs.
Student Affairs--respondents with responsibilities of adminis-
tering one or more of the college's student services programs.
Community Instructional Services--respondents with responsibil-
ities of administering the college's adult and continuing edu-
cation or community instructional services programs.
Business Affairs--respondents associated with the operation of
the business offices (budget, accounting, personnel, etc.) of
the college.
President--the chief executive officer of the college. (Steuart
& Rathburn, 1982, Appendix A)

Based upon their position titles, only respondents who were perceived as

having major responsibility in one of the five program areas were in-

cluded in the analysis by program areas. For example, presidents, vice

presidents, research and planning directors, and other administrators

with responsibilities across program areas were not included in the

analysis by program areas. All respondents were included in the anal-

ysis by administrative areas. Operational definitions for these classi-

fications are given in Appendix A.

Mean ratings were calculated for each program characteristic in the

questionnaire. Using these means, ranks were calculated for the program

characteristics based upon the responses of all respondents (N = 450).

When the ranks for two or more program characteristics were tied, the

tied values received the mean of the ranks that would have been assigned

had the ranks not tied (Steuart & Rathburn, 1982, p. 54).
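
As a small illustration of this tied-rank rule (hypothetical values, not
the study's data), the average method in scipy assigns tied mean ratings
the mean of the ranks they would otherwise have occupied:

    import numpy as np
    from scipy.stats import rankdata

    # Hypothetical mean usefulness ratings for five program
    # characteristics; lower means indicate greater rated usefulness.
    mean_ratings = np.array([1.38, 1.52, 1.52, 1.75, 2.05])

    # Tied values receive the mean of the ranks that would have been
    # assigned had the ranks not tied.
    ranks = rankdata(mean_ratings, method="average")
    # ranks -> [1.0, 2.5, 2.5, 4.0, 5.0]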









Only those program characteristics that were in the top quarter of

the ranked mean ratings were discussed in the presentation of results

for all respondents (N = 450) in the IRC report (Steuart & Rathburn,

1982, p. 54). All 108 program characteristics in the top quarter had a

mean rating on the "essential" side of the rating scale (p. 62). The

mean ratings of these 108 program characteristics ranged from 1.38 to

2.05 (p. 54). The analyses in this study included only these 108 pro-

gram characteristics. The means for these program characteristics are

reported in Appendix E.

Analysis of the Data

Research Question One

To discover the best factor structures for the usefulness ratings,

two sets of data were used for analysis. An analysis was performed

based upon those respondents who rated all 108 program characteristics

(i.e., respondents with missing data were excluded). There were 315

such cases. The same analysis was performed using the ratings of all

450 respondents by changing any missing rating for an item to the mean

rating of the respondents who rated that item. The use of all 450 respond-

ents was desirable so that all respondents could be included in the com-

parisons of factor scores between program areas and between administra-

tive areas (research question two). The following procedures for obtain-

ing the best factor structure were performed on each of these sets of

data and the results compared through use of the coefficient of congru-

ence for matching factors, inspection of the difference in the root-

mean-square values, and the criteria for simple structure (Guertin &

Bailey, 1970, p. 99; Harman, 1976, pp. 343-344).









The first step in the analysis was the production of the correlation

matrices representing the correlations between the ratings of all possi-

ble pairs of the 108 program characteristics. These correlation ma-

trices constituted the basis for what has been defined as an R analysis

(Cattell, 1950, p. 28). An R analysis consists of looking at the inter-

relationships of variables (program characteristics) rather than cases

(respondents)(Cattell, 1950, pp. 30-31). The correlation coefficients

represented the degree of similarity in the ratings by the respondents

of any pair of program characteristics.
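
The sketch below (Python with pandas; the variable names are hypothetical,
and it only approximates the computations performed by the statistical
packages used in the study) shows how the N = 450 correlation matrix could
be formed: missing ratings are set to the mean rating of the respondents
who rated that item, and correlations are then taken among the 108 items
(columns) rather than among the respondents (rows), consistent with an R
analysis.

    import pandas as pd

    def item_correlation_matrix(ratings: pd.DataFrame) -> pd.DataFrame:
        # ratings: one row per respondent, one column per program
        # characteristic (values 1-4); NaN marks a missing rating.
        # Replace each missing rating with the mean rating of the
        # respondents who rated that item.
        filled = ratings.fillna(ratings.mean())
        # R analysis: Pearson correlations among the items (columns).
        return filled.corr()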

The correlation matrices were factor analyzed using the principal

axes method with iterations. It has been described as the most widely

used technique in determining the initial principal axes (Guertin &

Bailey, 1970, p. 62; Harman, 1976, p. 133). Following Guertin and

Bailey's (1970, p. 101) suggestion, the principal axes matrices were

submitted initially to an oblique rotation to determine whether the fac-

tors were essentially uncorrelated. The direct oblimin rotation proce-

dure (Jennrich & Sampson, 1966) was used with gamma equal to zero. Pro-

gram P4M was used in the BMDP Biomedical Computer Programs P-Series 1979

(Dixon & Brown, 1979). The squared multiple correlations were used as

the initial estimates of the communalities (Guertin & Bailey, 1970, pp.

147, 163). The number of factors to carry into successive rotations was

determined by inspecting the results for decrements in the latent roots,

the cumulative percentages of common variance for which successive fac-

tors accounted, and the criteria for simple structure (Guertin & Bailey,

1970, pp. 115-120). Since the factors proved to be essentially uncorre-

lated, the principal axes matrices were then submitted to an orthogonal

rotation. The varimax method for orthogonal rotations was used since









there appeared to be general agreement that this method was preferred

with regard to giving the closest approximation to simple structure

(Guertin & Bailey, 1970, pp. 98-99; Harman, 1976, Chapter 14). Again

the number of factors to carry into successive rotations was determined

by inspecting the results for decrements in the latent roots (eigenval-

ues), the size of the latent roots, the cumulative percentages of common

variance for which successive factors accounted, and the criteria for

simple structure (Guertin & Bailey, 1970, pp. 115-120). The factor pro-

cedure in the SAS computerized package was used for the orthogonal rota-

tions (SAS Institute, Inc., 1979, pp. 203-210).
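The computations themselves were carried out with BMDP P4M and the SAS factor procedure. Purely as a schematic stand-in for those routines, the sketch below outlines iterated principal-axis factoring, with squared multiple correlations as the starting communality estimates, followed by a varimax rotation. The function names, iteration limits, and convergence tolerance are choices made for the sketch, not features of the original programs.

    import numpy as np

    def principal_axis(R, n_factors, n_iter=50):
        """Iterated principal-axis factoring of a correlation matrix R."""
        # Initial communality estimates: squared multiple correlations.
        h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
        for _ in range(n_iter):
            R_reduced = R.copy()
            np.fill_diagonal(R_reduced, h2)              # communalities on the diagonal
            roots, vectors = np.linalg.eigh(R_reduced)
            top = np.argsort(roots)[::-1][:n_factors]    # largest latent roots first
            loadings = vectors[:, top] * np.sqrt(np.clip(roots[top], 0.0, None))
            h2 = (loadings ** 2).sum(axis=1)             # re-estimated communalities
        return loadings

    def varimax(loadings, max_iter=100, tol=1e-6):
        """Orthogonal varimax rotation of an unrotated loading matrix."""
        p, k = loadings.shape
        rotation = np.eye(k)
        criterion = 0.0
        for _ in range(max_iter):
            rotated = loadings @ rotation
            u, s, vt = np.linalg.svd(
                loadings.T @ (rotated ** 3
                              - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
            rotation = u @ vt
            previous, criterion = criterion, s.sum()
            if previous != 0.0 and criterion < previous * (1.0 + tol):
                break
        return loadings @ rotation

    # For example: rotated_loadings = varimax(principal_axis(R, n_factors=10))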

The resulting factor solutions from both sets of respondents (N =

315 and N = 450) were compared for congruence using the coefficient of

congruence (Harman, 1976, pp. 343-346). If the coefficient of congru-

ence between any pair of factors was .90 or greater, the factors were

considered congruent (Mulaik, 1972, p. 355). Since the factor structures

were congruent, the factor structure based upon the set of 450 respond-

ents (with missing values set equal to the mean value for that variable)

was selected as the best representation of the underlying dimensions of

the 108 indicators of quality.
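For a pair of matched factors, the coefficient of congruence is the sum of cross-products of the two columns of loadings divided by the geometric mean of their sums of squared loadings. A minimal sketch follows, with hypothetical loading matrices whose columns are assumed to be already matched factor for factor.

    import numpy as np

    def congruence(a, b):
        """Coefficient of congruence between two columns of factor loadings."""
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    # loadings_315 and loadings_450: hypothetical (108, 10) rotated loading
    # matrices from the two analyses, columns matched factor for factor.
    # Factors are treated as congruent when the coefficient is .90 or greater:
    # congruent = [congruence(loadings_315[:, j], loadings_450[:, j]) >= .90
    #              for j in range(10)]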

The loadings of the variables on each factor in this factor struc-

ture were inspected. Any variable having a loading of .50 or greater

was considered in determining the meaning of a factor (Guertin & Bailey,

1970, pp. 78, 81). Based upon the nature of the program characteristics

with a .50 or greater loading, each factor was described. With the de-

scription of the factor structure, the methodology involved in the first

research question was completed.
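A brief sketch of this screening rule is given below, assuming a rotated loading matrix and a parallel list of program-characteristic labels; both names are hypothetical and are introduced only for illustration.

    import numpy as np

    def salient_characteristics(loadings, labels, cutoff=0.50):
        """Group, by factor, the characteristics loading .50 or greater."""
        return {j + 1: [labels[i] for i in np.where(loadings[:, j] >= cutoff)[0]]
                for j in range(loadings.shape[1])}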









Research Question Two

For the second research question, the best factor structure was used,

as determined by the methodology for the first research question, to cal-

culate factor scores for the respondents. The regression method was used

for the factor score computations (SAS Institute, Inc., 1979, p. 204).

The score procedure in the SAS computerized package was used (SAS Insti-

tute, Inc., 1979, pp. 371-372). Mean factor scores were determined for

the respondents classified by the described program and administrative

areas.
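For an orthogonal solution, the regression method amounts to weighting the standardized ratings by R-inverse times the rotated loading matrix. A minimal sketch, with all array names assumed for illustration rather than taken from the SAS procedures actually used:

    import numpy as np

    def regression_factor_scores(ratings, R, loadings):
        """Regression-method factor scores for an orthogonal factor solution."""
        z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0, ddof=1)
        weights = np.linalg.solve(R, loadings)   # B = R^{-1} * Lambda
        return z @ weights                       # one row of scores per respondent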

The differences in mean factor scores between the program areas and

between the administrative areas were tested for significance using the

t statistic at the .10 level of significance. Since the variances of the

factor scores for some of the program areas and some of the administra-

tive areas were significantly unequal, as tested by use of the F statis-

tic at the .05 level of significance, it was inappropriate to perform

an analysis of variance prior to testing for significant differences be-

tween mean factor scores. Also, since the likelihood of a Type I error

increases as the number of contrasts tested increases, the Bonferroni

correction for the t statistic was used (Myers, 1979, pp. 298-300).

Essentially, this correction results in rejection of the null hypothesis

(i.e., there is no significant difference in the means) when the obtained

t exceeds the value of t in the standard t table at a level of signifi-

cance equal to the selected level of significance for the comparisons

(.10 in this study) divided by the number of comparisons. Since there

were 10 comparisons between the program areas and 15 comparisons between

the administrative areas, the obtained t for these comparisons had to

exceed the value in the t table at .01 (.10 divided by 10) and .007 (.10








divided by 15) levels of significance, respectively, for rejection of

the null hypothesis. Where the variances were significantly different

for the factor scores being compared, the t value was calculated on the

assumption of unequal variances (SAS Institute, Inc., 1979, p. 425).

The t-test procedure in the SAS computerized package was used (SAS In-

stitute, Inc., 1979, pp. 425-426).
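The sketch below illustrates these comparisons with Welch's unequal-variance t test and a Bonferroni-adjusted alpha. It works from p values rather than tabled critical t values, which is an equivalent way of applying the same correction; the dictionary of factor scores by group is a hypothetical stand-in for the program-area or administrative-area classifications.

    from itertools import combinations
    from scipy import stats

    def pairwise_welch(scores_by_group, alpha=0.10):
        """Pairwise Welch t tests with a Bonferroni-adjusted significance level."""
        pairs = list(combinations(scores_by_group, 2))
        adjusted_alpha = alpha / len(pairs)      # e.g., .10 / 10 = .01 for 5 groups
        results = {}
        for a, b in pairs:
            t, p = stats.ttest_ind(scores_by_group[a], scores_by_group[b],
                                   equal_var=False)     # unequal variances assumed
            results[(a, b)] = (t, p, p < adjusted_alpha)
        return results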

Using the results of these analyses, guidelines were formulated for

organizing the multiple indicators of quality into a format useful to

administrators in Florida public community/junior colleges in making

quality-evaluation decisions about programs offered by their colleges.















CHAPTER IV
RESULTS AND DISCUSSION


There were 450 administrators representing 24 of Florida's 28 public

community colleges who rated 434 program characteristics, contained in

a survey questionnaire (Appendix C), for degree of usefulness in program

quality-evaluation decision making. The rating scale ranged from 1

(essential) to 4 (little or no usefulness). Only the 108 program char-

acteristics in the top quarter of ranked mean ratings were included in

the factor analysis. Based upon all 450 respondents, the mean ratings

for each of these 108 program characteristics are presented in Appendix

E. All of the 108 program characteristics in the top quarter of ranked

mean ratings had a mean rating on the "essential" side of the rating

scale. The mean ratings of the 108 program characteristics ranged from

1.38 to 2.05. The mean ratings of each of the 108 program characteris-

tics, based upon the 315 respondents who rated all of them, are reported

in Appendix E. These mean ratings ranged from 1.36 to 2.13.

The Pearson product-moment correlation coefficients for the inter-

correlations of the 108 program characteristics, based upon the ratings

by all respondents (N = 450) with missing values for any program charac-

teristic set equal to the mean rating for that program characteristic,

are presented in Appendix F. The Pearson product-moment correlation co-

efficients for the intercorrelations of the 108 program characteristics,

based upon the ratings by respondents with no missing responses (N = 315),









are presented in Appendix G. These two sets of correlation coefficients

were used in the factor analysis.
Factor Analysis Results

The iterated principal axes factor-analytic method as applied to

both sets of correlation coefficients resulted in a solution with 21

principal axes. The principal axes solution based upon N = 450 is pre-

sented in Appendix H with the final communality estimates and eigenval-

ues. The principal axes solution based upon N = 315 is presented in

Appendix I with the final communality estimates and eigenvalues.

For the principal axes solution based upon N = 315, the latent roots

(eigenvalues), differences in the latent roots, cumulative variance for

which successive axes accounted, and the percentage of common variance

for which successive axes accounted are presented in Table 1. These

were the values that were examined to determine the number of factors to

carry into the initial rotations. In factor analyses, the latent roots

generally fall off rapidly at first because systematic common variance

is being extracted. The roots start decreasing almost linearly as

mostly error variance is being extracted. It is generally accepted that

one criterion for the cutoff point for the number of factors to rotate

comes just before this linear descent (Guertin & Bailey, 1970, p. 117).

Although the differences in the latent roots decreased greatly after fac-

tor 5, they did not become linear until after factor 10 (Table 1). Us-

ing the differences in the latent roots, the rotation of 10 factors was

indicated. The rotation of 10 factors accounted for 80.7% of the common

variance compared to 64.5% accounted for by the rotation of five factors.

Following the suggestion of Guertin and Bailey (1970, p. 117), one more

and one less than the indicated number of factors were rotated with the










Table 1

Variance Accounted for by Successive
Principal Axes for N=315


Principal                                Cumulative     Percentage of
  Axes     Eigenvalues   Differences      Variance     Common Variance

    1        26.616          --            26.616           36.3
    2         7.688        18.928          34.304           46.8
    3         5.985         1.703          40.289           54.9
    4         3.902         2.083          44.191           60.2
    5         3.112          .790          47.303           64.5
    6         2.995          .117          50.298           68.5
    7         2.642          .353          52.940           72.1
    8         2.332          .310          55.272           75.3
    9         2.113          .219          57.385           78.2
   10         1.863          .250          59.248           80.7
   11         1.822          .041          61.070           83.2
   12         1.704          .118          62.774           85.5
   13         1.553          .151          64.327           87.7
   14         1.469          .084          65.796           89.7
   15         1.277          .192          67.073           91.4
   16         1.208          .069          68.281           93.1
   17         1.155          .053          69.436           94.6
   18         1.082          .073          70.518           96.1
   19         1.016          .066          71.534           97.5
   20          .949          .067          72.483           98.8
   21          .894          .055          73.377          100.0


results compared according to the criteria for simple structure. Accord-

ing to Guertin and Bailey (1970), "the factors are best located when the

produced structure is as simple as possible" (p. 98). The three general

criteria for simple structure are: (1) the factors should have the

largest possible number of loadings approaching zero; (2) the variables

should have the largest possible number of loadings on the factors

approaching zero; and (3) every pair of factors should have the largest

possible number of loadings approaching zero on one factor but not the

other (Guertin & Bailey, 1970, p. 99). For all program characteristics

that had a factor loading of .50 or greater, the loadings that resulted

from rotating 9, 10, and 11 factors are presented in Table 2. The com-

plete factor structures for the three rotations are presented in Appendix J.
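As an arithmetic illustration of this criterion, the decrements and cumulative percentages in Table 1 can be recomputed directly from the latent roots. The values below are those reported for N = 315, and the snippet is illustrative only.

    import numpy as np

    # Latent roots of the 21 principal axes for N = 315 (Table 1).
    roots = np.array([26.616, 7.688, 5.985, 3.902, 3.112, 2.995, 2.642, 2.332,
                      2.113, 1.863, 1.822, 1.704, 1.553, 1.469, 1.277, 1.208,
                      1.155, 1.082, 1.016, 0.949, 0.894])
    decrements = -np.diff(roots)                     # differences between successive roots
    pct_common = 100 * roots.cumsum() / roots.sum()  # cumulative % of common variance
    print(decrements.round(3))        # drops sharply at first; levels off after root 10
    print(np.round(pct_common[9], 1)) # 80.7: ten factors account for 80.7% of common variance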










Table 2

Program Characteristics With Factor Loadings
of .50 or Greater in the Three Rotations of
the Principal Axes Solution Based Upon N=315
Rotations
Factors Characteristics 9 10 11
7 .52 .55 .54
15 .56 .58 .88
41 .66 .67 .66
89 .65 .66 .65
69 .74 .75 .74
63 .73 .74 .74
101 .63 .63 .63
95 .62 .62 .63
17 .56 .58 .57
31 .60 .61 .61
48 .69 .69 .68
1 75 .79 .78 .78
79 .76 .76 .76
96 .71 .70 .70
99 .67 .67 .67
1 .55 .57 .56
6 .54 .55 .56
13 .65 .65 .65
44 .70 .69 .69
72 .74 .73 .73
29 .76 .75 .76
28 .74 .73 .73
73 .65 .64 .64
51 .70 .67 .68
37 .59 .58 .59
------------------------------------------------------------
25 .77 .77 .76
16 .81 .80 .80
2 20 .82 .81 .81
60 .60 .61 .61
39 .81 .81 .81
36 .84 .84 .84
50 .84 .84 .84
------------------------------------------------------------
87 .59 .58 .59
70 .72 .73 .73
56 .75 .77 .77
49 .73 .75 .75
3 46 .54 .56 .57
30 .60 .63 .63
80 .61 .65 .64
27 .70 .72 .72
22 .75 .78 .77
26 .73 .76 .75
------------------------------------------------------------








Table 2 (continued)

Rotations
Factors Characteristics 9 10 11

76 .71 .71 .72
78 .54 .55 .52
102 .54 .53 .54
74 .70 .70 .70
4 77 .57 .57 .57
98 .70 .70 .71
82 .58 .60 .57
92 .51 .52 .51
86 .66 .66 .66

59 .38a .52 .51
81 .43a .57 .54
66 .45a .59 .60
5 2 .48a .51 .52
19 .59 .60 .62
42 .60 .61 .61
34 .61 .62 .66
100 .57 .57 .60
88     (loadings not legible in source)
------------------------------------------------------------
4 .57 .57 .56
83 .82 .51 .50
6 40 .76 .76 .77
12 .72 .73 .75
35 .72 .73 .75
11 .50 .51 .49a
------------------------------------------------------------
90 .64 .68 .66
103 .60 .58 .60
7 33 .58 .55 .57
24 .72 .74 .74
32 .72 .73 .72
58 .64 .61 .63
9 .62 .56 .59
------------------------------------------------------------
18 .74 .73 .73
23 .72 .72 .74
8 65 .68 .68 .69
38 .69 .69 .69
53 .62 .62 .62
105 .49a .49a .50
------------------------------------------------------------
45 .51 .60 .59
5 .46a .45a .46a
9 64 .62 .62 .63
84 .61 .66 .66
14 .65 .59 .59
8 .59 .51 .51
62 .50 .45a .45a
------------------------------------------------------------









Table 2 (continued)

Rotations
Factors Characteristics 9 10 11

10 None
------------------------------------------------------------
11 91 .62
104 .68
aThe factor loading is included for comparison with the factor struc-
ture based upon N=450.


The most evident feature of the data represented in Table 2 was that,

regardless of the rotation examined, there was a relatively stable nine-

factor structure. For factors 1, 2, 3, 4, and 7, loadings of the varia-

bles on the factors were very similar within the three rotations. For

factor 5, the rotations of 10 or 11 factors produced a more clearly de-

fined structure. Although not evident in Table 2, from a comparison of

factors 5 and 9 for the three rotations in Appendix J, the 10-factor ro-

tation most closely approximated the criteria for simple structure be-

tween factors 5 and 9. Also, for factor 6 the 10-factor rotation pro-

duced the clearest structure. For factor 8, from Table 2 the rotation

of 11 factors was indicated as producing the clearest structure, but

from comparison of the loadings of other variables on factors 2, 5, and

8 for the three rotations presented in Appendix J, the 10-factor rota-

tion most closely approximated the criteria for simple structure. Also,

for factor 9, from a comparison of the loadings of other variables on

factors 5, 6, and 9 in the three rotations in Appendix J, the 10-factor

rotation most closely approximated the criteria for simple structure.

No variables had loadings of .50 or greater on factor 10 for either the

10- or 11-factor rotations. Three variables had a .50 or greater load-

ing on factor 11 for the 11-factor rotation.









For the three rotations, the rotation of 10 factors produced the

clearest common-factor structure. The rotation of 11 factors resulted

in the same nine interpretable factors as the 10-factor rotation but

with a slightly less clear structure. A trial rotation of 12 factors

confirmed this analysis. The 12-factor rotation resulted in the fis-

sion of factors 1 and 5 into more specific factors. Therefore, the 10-

factor rotation of the principal axes solution for N = 315 was chosen

as the rotation most closely approximating the criteria for simple

structure and producing the clearest picture of the common-factor struc-

ture for the ratings of the 108 program characteristics.

For the principal axes solution based upon N = 450, the latent roots,

differences in the latent roots, cumulative variance for which success-

ive axes accounted, and the percentage of common variance for which suc-

cessive axes accounted are presented in Table 3. Using the differences

in the latent roots, the rotation of 10, 11, and 12 factors was indi-

cated. For all the program characteristics that had factor loadings of

.50 or greater, the loadings that resulted from the three rotations are

presented in Table 4. The complete factor structures for the three ro-

tations are presented in Appendix K.

As in Table 2, the most evident feature of the data presented in

Table 4 was that, regardless of the rotation examined, there was a rela-

tively stable nine-factor structure. For factors 1, 2, 3, 7, 8, and 9,

the variables with loadings of .50 or greater on the factors were the

same for the three rotations. The variables loading .50 or greater on

factors 4 and 5 were the same for the three rotations with the exception

of one variable that loaded slightly less than .50 (.49) on factor 4 in

the 11-factor rotation and one variable that loaded less than .50 (.41)













Table 3

Variance Accounted for by Successive
Principal Axes for N=450


Principal                                Cumulative      Percent of
  Axes     Eigenvalues   Differences      Variance     Common Variance

    1        26.356          --            26.356           36.7
    2         7.731        18.625          34.087           47.5
    3         5.522         2.209          39.609           54.4
    4         3.477         2.045          43.086           60.0
    5         3.038          .439          46.124           64.2
    6         2.946          .092          49.070           68.3
    7         2.561          .385          51.631           71.9
    8         2.303          .258          53.934           75.1
    9         2.246          .057          56.180           78.2
   10         1.922          .324          58.102           80.9
   11         1.718          .204          59.820           83.3
   12         1.684          .034          61.504           85.6
   13         1.514          .170          63.018           87.7
   14         1.396          .118          64.414           89.7
   15         1.314          .082          65.728           91.5
   16         1.123          .191          66.851           93.1
   17         1.101          .022          67.952           94.6
   18         1.050          .051          69.002           96.1
   19          .990          .060          69.992           97.5
   20          .920          .070          70.912           98.7
   21          .907          .013          71.819          100.0


on factor 5 in the 12-factor rotation. The variables with loadings on

factor 6 were the same with the exception of one variable that loaded .50

in the 11-factor rotation but less than .50 in the other rotations. For

factor 10, the 10-factor rotation produced no loadings of .50 or greater

on factor 10. The 11-factor rotation had two variables loading above .50

on factor 10. These two variables had higher loadings in the 12-factor

rotation. Also, one additional variable had a loading on factor 10 of

.50 or greater in the 12-factor rotation. No variable had loadings of

.50 or greater on factor 11 in the 11-factor and 12-factor rotations.

Two variables had loadings of at least .50 on factor 12 in the 12-factor

rotation.










Table 4

Program Characteristics With Factor Loadings of .50 or Greater
in the Three Rotations of the Principal Axes Solution Based Upon N=450

Rotations
Factors Characteristics 10 11 12

7 .51 .52 .53
15 .56 .57 .58
41 .65 .65 .65
89 .64 .64 .64
69 .73 .73 .73
63 .74 .75 .75
101 .61 .59 .58
95 .59 .58 .58
17 .58 .58 .59
31 .63 .64 .65
48 .71 .70 .70
1 75 .80 .79 .79
79 .77 .77 .78
96 .69 .67 .66
99 .64 .63 .63
1 .58 .59 .59
6 .58 .60 .61
13 .69 .70 .69
44 .71 .71 .70
72 .74 .74 .73
29 .77 .77 .77
28 .75 .76 .76
73 .64 .64 .63
51 .66 .65 .63
37 .56 .56 .55
------------------------------------------------------------
25 .76 .77 .79
16 .79 .80 .82
20 .79 .80 .82
2 60 .58 .58 .56
39 .81 .80 .78
36 .81 .80 .78
50 .80 .80 .78
------------------------------------------------------------
87 .56 .55 .56
70 .73 .73 .73
56 .75 .76 .76
49 .72 .72 .73
3 46 .55 .57 .56
30 .58 .59 .66
80 .60 .61 .63
27 .72 .73 .74
22 .76 .78 .79
26 .74 .75 .76
------------------------------------------------------------









Table 4 (continued)

Rotations
Factors Characteristics 10 11 12

76 .72 .72 .71
78 .50 .49a .51
102 .56 .56 .55
74 .64 .65 .64
77 .57 .60 .61
98 .73 .73 .73
82 .55 .54 .57
92 .51 .53 .54
86 .61 .62 .62
----------------------------------------------
59 .37a .37a .26a
81 .41a .42a .32a
66 .50 .50 .41a
2 .59 .59 .61
19 .68 .68 .69
42 .66 .66 .66
34 .71 .71 .72
100 .65 .64 .65
88 .49a .50 .47a
4 .59 .60 .54
83 .53 .55 .51
6 40 .77 .78 .82
12 .73 .74 .80
35 .73 .74 .79
11 .48a .49 .46
85 .62 .60 .65
90 .61 .60 .64
103 .56 .61 .59
33 .53 .55 .53
24 .71 .70 .72
32 .71 .70 .72
58 .63 .66 .63
9 .61 .64 .59
52 .72 .72 .72
18 .75 .75 .75
23 .77 .77 .77
8 65 .73 .73 .73
38 .72 .72 .72
53 .62 .61 .61
105 .47a .47a .47a
54 .55 .57 .60
45 .58 .60 .64
5 .51 .52 .51
64 .64 .64 .62
84 .64 .64 .66
14 .64 .62 .63
8 .61 .60 .60
62 .50 .50 .50
---------------------------------------------------









Table 4 (continued)

Rotations
Factors Characteristics 10 11 12

10 .28a .46a .59a
10 91 .36a .60 .70
104 .41a .67 .79
------------------------------------------------------------
94 .51
12 54 .53

aThe factor loading is included for comparison with the factor struc-
ture based upon N=315.
bThis factor most closely corresponds with factor 11 in the 11-factor
rotation based upon N=315.
cThis factor most closely corresponds with factor 10 in the 10-factor
rotation based upon N=315.


Although not entirely evident from the data presented in Table 4, it

was evident from the entire factor structure presented in Appendix K

that the rotation of 10 factors produced a factor structure more closely

approximating the criteria for simple structure. The 11-factor rotation

resulted in the beginning of fission for factor 1 (factor 11). The 12-

factor rotation clarified the structure for the specific factor (factor

11), continued the fission of factor 1, and resulted in the fission of

factor 5, producing another specific factor (factor 12). Therefore, for

the analysis based upon N= 450, the factor structure that resulted from

the rotation of 10 factors was selected as the best representation of

the common-factor structure for the ratings of the 108 program character-

istics based upon N = 450.

To determine the intercorrelation of the factors for both factor

structures, the two principal axes solutions (N = 315 and N = 450) were

submitted to an oblique rotation using the direct oblimin rotation pro-

cedure as described by Jennrich and Sampson (1966). The correlation









coefficients for the intercorrelation of the factors for both N = 315

and N = 450 are presented in Table 5. Since the factors were essen-

tially uncorrelated, with no correlation coefficient exceeding .42,

the orthogonal rotation was accepted as producing the best solution for

the common-factor structure.

The next task was to determine whether the 10-factor structure from

the analysis based upon N = 315 was congruent with the 10-factor struc-

ture from the analysis based upon N = 450. The coefficients of congru-

ence between comparable factors in the two factor structures are pre-

sented in Table 6. Since all the coefficients were at least .90, the


Table 5

Intercorrelations of the Factors for the 10-Factor Rotation
of the Principal Axes Solutions for N=315 and N=450


Factors
1 2 3 4 5 6 7 8 9 10

1 1.00
2 .37 1.00
3 .17 .14 1.00
4 .28 .41 .06 1.00
N=315 5 .18 .24 .13 .24 1.00
6 .16 .33 .19 .24 .24 1.00
7 .20 .25 .30 .17 .15 .12 1.00
8 .23 .25 .18 .21 .23 .25 .21 1.00
9 .05 .18 .22 .11 .15 .13 .27 .23 1.00
10 .05 .08 .08 -.02 .04 .02 .03 .07 .01 1.00

1 1.00
2 .19 1.00
3 .16 .28 1.00
4 .18 .21 .18 1.00
5 .13 .27 .22 .18 1.00
N=450 6 .32 .14 .09 .27 .14 1.00
7 .22 .30 .23 .28 .29 .23 1.00
8 .38 .23 .13 .28 .24 .42 .29 1.00
9 .15 .07 .15 .22 .14 .24 .22 .28 1.00
10 .03 .04 .05 .03 -.02 -.01 .01 .02 .05 1.00









Table 6

Coefficients of Congruence Between Comparable
Factors for the 10-Factor Structures for N=315 and N=450


Factors 1 2 3 4 5 6 7 8 9 10

Coefficients .98 .97 .97 .96 .94 .96 .96 .95 .95 .93


factor structures were considered congruent. Therefore, the factor

structure based upon N = 450 was taken as the factor structure best rep-

resenting the common factor space for the ratings of the 108 program

characteristics.

Interpretation of the Factors

In the selected rotated 10-factor structure, there were nine inter-

pretable factors. The 10-factor structure represented the common-factor

space of the ratings of the 108 program characteristics. The nine inter-

pretable factors represented nine common dimensions underlying these rat-

ings. Each of these common dimensions is defined in the following discus-

sion.

The program characteristics with loadings of .50 or greater on fac-

tor 1 are listed in Table 7. These program characteristics concerned

total cost of a program, costs of various aspects of a program, usage of

equipment and space, and number of support staff. In addition to total

cost, other costs included cost of instructional personnel, program ad-

ministration, support services, materials, equipment maintenance, and

space utilized. For the majority of these program characteristics, the

administrators indicated that the information was desired per total pro-

gram, per number of program full-time equivalent (FTE) students, and per

program unduplicated headcount of students.









Table 7

Program Characteristics With .50 or
Greater Loadings on Factor 1


Number Loading Program Characteristics

1 .58 Total cost of a program
7 .51 Total cost of a program per FTE
17 .58 Total cost of program per unduplicated headcount
6 .69 Cost of instructional personnel per total program
15 .56 Cost of instructional personnel per program FTE
31 .63 Cost of instructional personnel per program undupli-
cated headcount
28 .75 Cost of program administration per total program
63 .74 Cost of program administration per program FTE
79 .77 Cost of program administration per program undupli-
cated headcount
29 .77 Cost of support services per total program
69 .73 Cost of support services per program FTE
75 .80 Cost of support services per program unduplicated
headcount
37 .56 Number of support staff per total program
95 .59 Number of support staff per program FTE
99 .64 Number of support staff per program unduplicated head-
count
41 .65 Cost of materials per program FTE
48 .71 Cost of materials per program unduplicated headcount
51 .66 Equipment utilization per total program
101 .61 Equipment utilization per program FTE
96 .69 Equipment utilization per program unduplicated head-
count
44 .71 Cost of equipment maintenance per total program
89 .64 Cost of equipment maintenance per program FTE
73 .64 Space utilization per total program
72 .74 Cost of space utilized per total program



Based upon the content of these program characteristics, factor 1

was interpreted as involving resources used in a program. Fiscal, physi-

cal (equipment and space), and human resources (support staff) were in-

cluded. Factor 1 was identified as one common dimension underlying the

ratings of the 108 program characteristics and was labeled the "Resources

Usage" dimension.









The program characteristics with loadings of .50 or greater on fac-

tor 2 are listed in Table 8. These program characteristics concerned

ratings of program support services and student services by students en-

rolled in a program and students who have completed a program. The rat-

ings of student services included ratings of the usefulness, accessibil-

ity, and ease of use of the services. Based upon the content of these

program characteristics, factor 2 was interpreted as involving student

ratings of support services, including student services. Factor 2 was

identified as another common dimension underlying the ratings of the 108

program characteristics and was labeled the "Student Ratings of Support

Services" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 3 are listed in Table 9. These program characteristics involved

Table 8

Program Characteristics With .50 or
Greater Loadings on Factor 2


Number Loading Program Characteristics

61 .56 Ratings of support services by currently enrolled
students
60 .58 Ratings of support services by program completers
25 .76 Ratings of usefulness of student services by cur-
rently enrolled students
39 .81 Ratings of usefulness of student services by program
completers
16 .79 Ratings of accessibility of student services by cur-
rently enrolled students
36 .81 Ratings of accessibility of student services by pro-
gram completers
20 .79 Ratings of ease of use of student services by cur-
rently enrolled students
50 .80 Ratings of ease of use of student services by program
completers









Table 9

Program Characteristics With .50 or
Greater Loadings on Factor 3


Number Loading Program Characteristics

46 .55 Number or percent of full-time faculty/staff by a
productivity ratio
71 .52 Number or percent of part-time faculty/staff by a
productivity ratio
30 .58 Number or percent of full-time faculty/staff by num-
ber of course hours taught per term
87 .56 Number or percent of part-time faculty/staff by num-
ber of course hours taught per term
27 .72 Number or percent of full-time faculty/staff by num-
ber of student contact hours per term
70 .73 Number or percent of part-time faculty/staff by num-
ber of student contact hours per term
22 .76 Number or percent of full-time faculty/staff by num-
ber of students per term
56 .75 Number or percent of part-time faculty/staff by num-
ber of students per term
26 .74 Number or percent of full-time faculty/staff by
average class size
49 .72 Number or percent of part-time faculty/staff by
average class size
80 .60 Number or percent of full-time faculty/staff by num-
ber of FTE students per term



information about full-time and part-time faculty or staff in a program.

The information included the number or percent of full-time and part-time

faculty or staff by (1) their rating on some productivity ratio, (2) the

number of course hours they taught per term, (3) the number of student

contact hours they had per term, (4) the number of students they had per

term, and (5) their average class size. Additionally, but for full-time

faculty or staff only, the information included the number of FTE students

they taught per term. Based upon the content of these program character-

istics, factor 3 was interpreted as involving information on the produc-

tivity of faculty or staff in a program. Factor 3 was identified as









another common dimension underlying the ratings of the 108 program char-

acteristics and was labeled the "Faculty/Staff Instructional Productiv-

ity" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 4 are listed in Table 10. These program characteristics involved

information about students entering a program and students currently en-

rolled in a program. For both entering and currently enrolled students,

the information included the number or percent of students by major area

of study, by type of handicap, and by types of developmental or remedial

assistance desired. For entering students only, the information included

the number or percent of students by level of previous academic

Table 10

Program Characteristics With .50 or
Greater Loadings on Factor 4


Number Loadings Program Characteristics

102 .56 Number or percent of entering students by level of
previous academic achievement
77 .57 Number or percent of entering students by academic
skills level as measured by local instruments
78 .50 Number or percent of entering students by major area
of study
82 .55 Number or percent of currently enrolled students by
major area of study
76 .72 Number or percent of entering students by type of
handicap
98 .73 Number or percent of currently enrolled students by
type of handicap
74 .64 Number or percent of entering students by types of
developmental or remedial assistance desired
86 .61 Number or percent of currently enrolled students by
types of developmental or remedial assistance
desired
92 .51 Number or percent of currently enrolled students by
number of hours with failing grade









achievement and by academic skills level as measured by local instru-

ments. For currently enrolled students only, the information included

the number or percent of students by number of hours with failing grade.

Based upon the content of these program characteristics, factor 4 was

interpreted as involving the identification of any physical or cognitive

needs of students relevant to their performance in their selected pro-

grams. Factor 4 was identified as another common dimension underlying

the ratings of the 108 program characteristics and was labeled the "Phys-

ical and Academic Skills Needs Assessment of Enrolled Students" dimension.

The program characteristics with a loading of .50 or greater on fac-

tor 5 are listed in Table 11. These program characteristics involved

ratings of various aspects of a program by students who have completed

a program or who are currently enrolled in a program. The aspects of a

program to be rated by program completers included program staff, pro-

gram facilities and equipment, program instructional strategies, program

administration, and program curriculum. Also included were ratings of a


Table 11

Program Characteristics With .50 or
Greater Loadings on Factor 5


Number Loadings Program Characteristics

34 .71 Ratings of program staff by program completers
19 .68 Ratings of program facilities/equipment by program
completers
42 .66 Ratings of program instructional strategies by pro-
gram completers
100 .65 Ratings of program administrators by program com-
pleters
2 .59 Ratings of program curriculum by program completers
66 .50 Ratings of program staff by currently enrolled stu-
dents









program's staff by currently enrolled students. Based upon the content

of these program characteristics, factor 5 was interpreted as involving

student ratings, primarily ratings by program completers, of various as-

pects of a program. Factor 5 was identified as another common dimension

underlying the ratings of the 108 program characteristics and was la-

beled the "Student Ratings of Program" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 6 are listed in Table 12. These program characteristics concerned

information on the quantity of students completing a program and the

average time taken for completion, the number or percent of those com-

pleting a program who take state board or licensure exams, the number

passing those exams, and the type of license, certificate, or registra-

tion received. Based upon the content of these program characteristics,

factor 6 was interpreted as involving measures of the quantitative output

of a program and certain student follow-up information. Factor 6 was

identified as another common dimension underlying the ratings of the 108

Table 12

Program Characteristics With .50 or
Greater Loadings on Factor 6


Number Loadings Program Characteristics

4 .59 Number or percent of students completing a program
83 .53 Number or percent of program completers by average
time taken for completion of a program
40 .77 Number or percent of program completers taking state
board or licensure exams
12 .73 Number or percent of program completers passing state
board or licensure exams
35 .73 Number or percent of program completers by type of
license, certificate, or registration received









program characteristics and was labeled the "Program Student Output"

dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 7 are listed in Table 13. These program characteristics concerned

various attributes of both the full-time and part-time faculty or staff

in a program. The attributes included degrees held, total years taught

or served, years taught or served in a specific program, and type of

certification or rank held. Based upon the content of these program

characteristics, factor 7 was interpreted as involving indicators of the

level of preparedness of faculty or staff serving in a program. Factor

7 was identified as another common dimension underlying the ratings of

the 108 program characteristics and was labeled the "Faculty/Staff Pre-

paredness" dimension.


Table 13

Program Characteristics With .50 or
Greater Loadings on Factor 7


Number Loadings Program Characteristics

9 .61 Number or percent of full-time faculty/staff by de-
grees held
33 .51 Number or percent of part-time faculty/staff by de-
grees held
24 .71 Number or percent of full-time faculty/staff by years
taught or served
85 .62 Number or percent of part-time faculty/staff by years
taught or served
32 .71 Number or percent of full-time faculty/staff by
length of service in a program
90 .61 Number or percent of part-time faculty/staff by
length of service in a program
58 .66 Number or percent of full-time faculty/staff by cer-
tification or rank held
103 .57 Number or percent of part-time faculty/staff by cer-
tification or rank held









The program characteristics with loadings of .50 or greater on fac-

tor 8 are listed in Table 14. These program characteristics involved

ratings of various aspects of a program by a program's faculty or staff.

The aspects of a program to be rated included instructional strategies,

facilities and equipment, staff, curriculum, administration, and support

services. Based upon the content of these program characteristics, fac-

tor 8 was interpreted as involving ratings of a program by a program's

faculty or staff. Factor 8 was identified as another common dimension

underlying the ratings of the 108 program characteristics and was la-

beled the "Faculty/Staff Program Ratings" dimension.

The program characteristics with loadings of .50 or greater on fac-

tor 9 are listed in Table 15. These program characteristics included

the number or types of changes in a program as a result of program eval-

uations or accreditation studies; ratings of a program by certification

boards or accreditation agencies; level of demand for a program in the

college's service area, by students, and in the college's state; and

clearly stated objectives for a program. Based upon the content of

Table 14

Program Characteristics With .50 or
Greater Loadings on Factor 8


Number Loadings Program Characteristics

23 .77 Ratings of program instructional strategies by fac-
ulty/staff
18 .75 Ratings of program facilities/equipment by faculty/
staff
65 .73 Ratings of program staff by faculty/staff
52 .72 Ratings of a program curriculum by faculty/staff
38 .72 Ratings of program administration by faculty/staff
53 .62 Ratings of support services by faculty/staff









Table 15

Program Characteristics With .50 or
Greater Loadings on Factor 9


Number Loadings Program Characteristics

64 .64 Number/types of changes as a result of program evalu-
ations
84 .64 Number/types of changes as a result of accreditation
studies
54 .55 Ratings by certification boards
45 .58 Ratings by accreditation agencies
8 .61 Level of demand for program or service in a college's
service area
14 .64 Level of demand for program or service by students
62 .50 Level of demand for program or service in college's
state
5 .51 Clearly stated program objectives



these program characteristics, factor 9 was interpreted as involving the

responsiveness of a program to program evaluations, certification boards,

accreditation agencies, the community it serves, the students it serves,

and the state it serves. Although not an object of a program's respon-

siveness, program objectives are clearly related to assessing that respon-

siveness. Factor 9 was identified as another common dimension underly-

ing the ratings of the 108 program characteristics and was labeled the

"Program Responsiveness" dimension.

The factor analysis has resulted in the identification of a 10-fac-

tor structure with nine interpretable factors that remained relatively

stable across several rotations and for the two groups of respondents

(N = 315 and N = 450). The identified factor structure has been inter-

preted as representing the underlying dimensions common to the ratings

of the 108 program characteristics. Using the content of the program

characteristics that loaded .50 or greater on the factors, each of the









nine dimensions has been described and labeled. The labels have been

created to reflect the content of the program characteristics loading

.50 or greater on the factor representing a dimension. The following

labels have been created for the nine dimensions:

Resources Usage
Student Ratings of Support Services
Faculty/Staff Instructional Productivity
Physical and Academic Skills Needs Assessment
of Enrolled Students
Student Ratings of Program
Program Student Output
Faculty/Staff Preparedness
Faculty/Staff Program Ratings
Program Responsiveness.

In accordance with the evaluation theory developed by Stufflebeam et

al. (1971), the program characteristics that have been identified as in-

cluded in these dimensions were delineated in interaction with the ad-

ministrators making program quality-evaluation decisions in Florida pub-

lic community/junior colleges. These program characteristics were rated

by the administrators as the ones most highly useful in making program

quality-evaluation decisions. According to the results of this study,

the data represented by these program characteristics are those data

that should be collected, organized, and analyzed for the purpose of pro-

viding information useful to the administrators in program quality-eval-

uation decision making in Florida public community/junior colleges.

The results of the factor analysis performed in this study have dem-

onstrated that there are nine common dimensions that should be used to

organize those data for presentation of information to administrators in-

volved in program quality-evaluation decision making. As developed in

the theoretical rationale for this study, based on the theory of evalua-

tion developed by Stufflebeam et al. (1971), the items of information









identified reflected those aspects of the aggregate value system of these

administrators that are relevant to program quality-evaluation decision

making and the underlying dimensions of those items reflect the dimen-

sions of the aggregate value system that are relevant to this decision

situation. Therefore, the utilization of the nine identified common di-

mensions to organize the relevant data should result in an information

format that these administrators should find most useful, since the for-

mat should approximate the dimensions of those aspects of the aggregate

value system that are common to these administrators and that are being

used in making program quality-evaluation decisions. Any individual ad-

ministrator should find such a format more or less useful to the degree

that the relevant dimensions of his value system are reflected in the

aggregate value system represented in the nine dimensions.

It should be noted that these nine dimensions are dimensions repre-

senting the parameters of the information an administrator is most

likely to find useful in making program quality-evaluation decisions. It

should be understood that the information these dimensions reflect might

be positively or negatively valued by an administrator and in varying de-

grees in relation to assessing a program. Since quality is a value judg-

ment and not an attribute or characteristic of a program, these nine com-

mon dimensions are the dimensions of an aggregate value system used by ad-

ministrators in making program quality-evaluation decisions. They should

not be interpreted as dimensions of quality.

The identification of these nine dimensions completed the analysis

required for resolving the first aspect of the problem with which this

study was concerned: to determine any underlying dimensions of the mul-

tiple items of information rated as highly useful in program quality-









evaluation decision making by administrators involved in such decision

making in Florida public community/junior colleges. In the next section

of this chapter, the results are presented of a comparison of the mean

factor scores of the administrators classified first by program areas in

relation to which they had major administrative responsibilities and

then by administrative areas as reflected in their position titles. The

following results reflect an attempt to determine any significant differ-

ences between program areas or between administrative areas in emphasis

on any of the nine dimensions in order to refine the description for for-

matting by program or administrative area the information included in

these dimensions.

Factor Score Comparisons

Program Areas

Using the selected factor structure, factor scores were computed for

the 450 respondents using the regression method in the SAS factor proce-

dure and the SAS score procedure. Mean factor scores were calculated

for the respondents grouped according to program area. Included in this

analysis were those respondents whose position title indicated that they

had major responsibility in one of the five program areas common to most

community colleges in Florida: the Advanced and Professional, Occupa-

tional, Developmental, Community Instructional Services, and Student Ser-

vices program areas. Not all the administrators who participated in this

study had major responsibility in a specific program area. The position

codes used to classify the administrators included in each program area

are listed in Appendix A. Position codes, associated titles, and fre-

quency of the position codes are in Appendix D. The program areas, the

number of respondents classified in each program area, and the percentage

of all respondents that this represents are given in Table 16.









Table 16

Number of Respondents Per Program Area and
Corresponding Percentages of All Respondents (N=450)


Program Areas                      Number of Respondents    Percentage of N

Advanced and Professional                   65                   14.4

Occupational                                83                   18.4

Developmental                                5                    1.1

Community Instructional Services            21                    4.7

Student Services                            88                   19.6

TOTAL                                      262                   58.2



Although the number of administrators with primary responsibility in

the Developmental Program Area was small (N = 5), they represented five

different colleges. According to the list of administrators with respon-

sibility for compensatory/developmental education in the 1981-82 Direct-

ory of Florida Community Colleges (Division of Community Colleges, 1981a,

p. 71), there were very few position titles reflecting primary responsi-

bility in this program area.

The mean factor scores and standard deviations for the program areas

are presented in Table 17. It should be recalled that all of the program

characteristics included in this study were rated as highly useful in pro-

gram quality-evaluation decision making. Therefore, the factor scores in-

dicated the relative emphasis placed upon the program characteristics with

relatively greater loadings on a factor by the administrators classified

in a program area. Since the rating scale was 1 (program characteristics









Table 17

Mean Factor Scores and Standard Deviations for the Program Areas

[Table values are not legible in the source.]









essential to quality-evaluation decision making) to 4 (program charac-

teristics of little or no use in quality-evaluation decision making), a

low factor score indicated that administrators included in the program

area rated the program characteristics with relatively greater loadings

on a factor as relatively more highly useful in program quality-evalua-

tion decision making and a high factor score indicated that they rated

them relatively less highly useful.

The results of testing for significant differences in mean factor

scores between program areas for all factors are presented in Appendix

L. As indicated in the description of the methodology for this study,

an analysis of variance prior to performing the t tests was inappropri-

ate due to unequal variances among some of the program area classifica-

tions. The Bonferroni correction for multiple t tests was applied to

the obtained t statistics.

For factors 1, 3, and 9, there were no significant differences in

mean factor scores between any of the program areas (Appendix L). For

factor 1, the Resources Usage dimension, the mean factor scores ranged

from -.072 for the Developmental Program Area to .487 for Community In-

structional Services (Table 17). For factor 3, the Faculty/Staff In-

structional Productivity dimension, the mean factor scores ranged from

-.341 for the Developmental Program Area to .231 for Community Instruc-

tional Services. For factor 9, the Program Responsiveness dimension,

the mean factor scores ranged from -.142 for the Occupational Program

Area to .500 for the Developmental Program Area. These results indi-

cated that the administrators classified into the five program areas did

not differ significantly in their emphasis on these three dimensions:

Resources Usage, Faculty/Staff Instructional Productivity, and Program

Responsiveness.









For factors 2 and 8, there were significant differences in mean fac-

tor scores between Student Services and all other program areas except

the Developmental Program Area (Appendix L). For factor 2, the Student

Ratings of Support Services dimension, the mean factor scores ranged

from -.480 for Student Services to .487 for Community Instructional Ser-

vices (Table 17). For factor 8, the Faculty/Staff Program Ratings di-

mension, the mean factor scores ranged from .410 for Student Services to

-.350 for the Developmental Program Area (Table 17). These results in-

dicated that the administrators classified in Student Services empha-

sized the Student Ratings of Support Services dimension significantly

more than did all other program areas except the Developmental Program

Area and emphasized the Faculty/Staff Program Ratings dimension signifi-

cantly less than did all other program areas except the Developmental

Program Area. Also, the results indicated that the other program areas

did not differ significantly in their emphasis on these dimensions. It

should be recalled that the number of administrators classified in the

Developmental Program Area was relatively small (N = 5) which influenced

the tests for significant differences in mean factor scores.

For factor 4, there were significant differences in mean factor

scores between Community Instructional Services and all other program

areas except the Developmental Program Area (Appendix L). For factor 4,

the Physical and Academic Skills Needs Assessment of Enrolled Students

dimension, the mean factor scores ranged from -.133 for the Advanced and

Professional Program Area to .952 for Community Instructional Services

(Table 17). These results indicated that the administrators classified

in Community Instructional Services emphasized the Physical and Academic

Skills Needs Assessment of Enrolled Students dimension significantly









less than did all other program areas except the Developmental Program

Area. Also, the results indicated that the other program areas did not

differ significantly in their emphasis on this dimension.

There were significant differences in mean factor scores between the

Occupational Program Area and the Advanced and Professional and the Stu-

dent Services program areas on factor 5 (Appendix L). For this factor,

the Student Ratings of Program dimension, the mean factor scores ranged

from -.435 for the Developmental Program Area to .219 for Community In-

structional Services (Table 17). The mean factor score for the Occupa-

tional Program Area was .143 (Table 17). These results indicated that

the administrators classified in the Occupational Program Area empha-

sized the Student Ratings of Program dimension significantly less than

did the Advanced and Professional or the Student Services program areas.

Also, the results indicated that the Occupational Program Area did not

differ significantly from Community Instructional Services and the De-

velopmental Program Area in emphasis on this dimension and that program

areas other than the Occupational Program Area did not differ signifi-

cantly in their emphasis on this dimension.

For factor 6, there were significant differences in mean factor

scores between the Developmental Program Area and all other program areas

except Community Instructional Services (Appendix L). For this factor,

the Program Student Output dimension, the mean factor scores ranged from

-.579 for the Occupational Program Area to 1.619 for the Developmental

Program Area (Table 17). These results indicated that the administra-

tors classified in the Developmental Program Area emphasized the Program

Student Output dimension significantly less than did all other program

areas except Community Instructional Services. Also, the results









indicated that the other program areas did not differ significantly in

their emphasis on this dimension.

For the remaining factor, factor 7, there were significant differ-

ences in the mean factor scores between the Advanced and Professional

Program Area and the Occupational and Community Instructional Services

program areas (Appendix L). For this factor, the Faculty/Staff Pre-

paredness dimension, the mean factor scores ranged from -.390 for the

Advanced and Professional Program Area to .387 for Community Instruc-

tional Services (Table 17). These results indicated that the adminis-

trators classified in the Advanced and Professional Program Area empha-

sized the Faculty/Staff Preparedness dimension significantly more than

did the Occupational and the Community Instructional Services program

areas. Also, the results indicated that the Advanced and Professional

Program Area did not differ significantly from the other two program

areas in their emphasis on this dimension and that the program areas

other than the Advanced and Professional Program Area did not differ

significantly in their emphasis on this dimension.

As indicated in the preceding section of this chapter, the utiliza-

tion of the nine identified common dimensions to organize the 108 pro-

gram characteristics identified as most useful in program quality-evalu-

ation decision making should result in increasing the probability that

the format of the presented information will be perceived as credible

and useful by the administrator involved in the decision situation.

Examination of the differences in mean factor scores for the five pro-

gram areas was done to determine if there were any statistically signif-

icant differences that might be useful in tailoring by program area the

format of information presented to administrators in the five program

areas for use in program quality-evaluation decision making.









The results presented in this section indicated that the administra-

tors classified into the five program areas did not differ significantly

in their emphasis on three dimensions: Resources Usage, Faculty/Staff

Instructional Productivity, and Program Responsiveness. For the Student

Ratings of Support Services dimension, the results indicated that Stu-

dent Services emphasized this dimension significantly more than did all

other program areas except the Developmental Program Area. Community

Instructional Services emphasized the Physical and Academic Skills Needs

Assessment of Enrolled Students dimension significantly less than did

all other program areas except the Developmental Program Area. The Stu-

dent Ratings of Program dimension was emphasized significantly less by

the Occupational Program Area than by the Advanced and Professional and

Student Services program areas. The Developmental Program Area placed

significantly less emphasis on the Program Student Output dimension than

did all other program areas except Community Instructional Services.

For the Faculty/Staff Preparedness dimension, the Advanced and Profes-

sional Program Area emphasized this dimension significantly more than

did the Occupational and Community Instructional Services program areas.

The Faculty/Staff Program Ratings dimension received significantly less

emphasis by Student Services than by all other program areas except the

Developmental Program Area.

These results should be useful in tailoring by program area the or-

ganization of information for presentation to administrators involved in

quality-evaluation decision making in a specific program area. For

example, the results indicated that the information included in the Fac-

ulty/Staff Preparedness dimension should be emphasized when presenting

information to administrators with major responsibilities in the Advanced









and Professional Program Area to increase the probability that adminis-

trators in that program area will find the information credible and use-

ful in program quality-evaluation decision making. The nature of this

emphasis, although not an objective of this study, might include the

presentation of more information or more detailed information or some

type of weighting of the information related to this dimension. Simi-

larly, these results may be used to tailor the presentation of informa-

tion for program quality-evaluation decision making to administrators in

other specific program areas.

The results presented in this section applied only to significant

differences among the mean factor scores for administrators classified

within the specified program areas. In the next section of this chapter,

the results are presented for comparison of the mean factor scores be-

tween administrators classified within six administrative areas as de-

fined in Chapter III.

Administrative Areas

Using the selected factor structure, mean factor scores were calcu-

lated for the administrators classified within the six administrative

areas defined in Chapter III. The six administrative areas were General

Administration, Academic Affairs, Student Affairs, Community Instruc-

tional Services, Business Affairs, and Presidents. A description of the

administrative areas and the position codes used to classify the adminis-

trators included in each administrative area are given in Appendix A.

The administrative areas, the number of respondents in each administra-

tive area, and the percentage of all respondents that this represents

are given in Table 18.



