THE EFFECTS OF DIFFERENT FORMS OF STUDENT RATINGS
FEEDBACK ON SUBSEQUENT STUDENT RATINGS OF PART-TIME FACULTY
BY
CHERYL MARIE BURBANO
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
1987
This dissertation is dedicated to my mother, Helen Busch, who supported
my early efforts to attain a college education despite a lack of
financial resources.
To my husband, Juan, my best friend, mentor, and confidant, whom I want
to thank for caring, understanding, and pushing me when I resisted.
To my daughter, Valentina, the most precious thing in my life, whom I
want to thank for all your patience, understanding, and love.
And finally, a special dedication to the memory of Raymond B. Stewart,
employer, educator, political mentor, and most of all friend.
ACKNOWLEDGEMENTS
This dissertation would not be complete without the acknowledgement
of several key individuals who provided continual support.
Sincere thanks go to my committee consisting of Dr. Al Smith, Dr.
Gordon Lawrence, Dr. James Wattenbarger, Dr. Max Parker, and Dr. Robert
Jester. Dr. Al Smith served as a great inspiration and mentor. Dr.
James Wattenbarger willingly filled in for Al Smith during his absence.
Dr. Gordon Lawrence provided helpful hints and writing guidance. Dr.
Robert Jester gave firm and patient guidance in the data analysis. And
finally, Dr. Max Parker provided a clear vision of the counseling components of this dissertation. To each of these professors, I am especially grateful.
I would also like to thank the instructors and students who willingly participated in this study. My deepest appreciation goes to my colleagues and friends who supported me, cheered me on a daily basis, and helped provide solutions to stumbling blocks along the way. These colleagues were Lee Leavengood, for her interest and unwavering assistance in this study; Dave Koval at Saint Leo College, for granting permission for the study at such a late date; and Valerie Allen and Marta Carjaval, for their assistance in gathering the data when it was impossible to be at two places simultaneously. Finally, I would also like to acknowledge some special individuals who provided extra help, support, and suggestions that made the task easier. These special people were Arlene Kenger, Lagretta Lenker, Juan Sanchez, Larry Eason, and Mike Rom.
TABLE OF CONTENTS
PAGE
ACKNOWLEDGEMENTS................................................ iii
LIST OF TABLES.................................................. vii
ABSTRACT ....................................................... viii
CHAPTER
ONE INTRODUCTION ........................... ..................... 1
Statement of the Purpose.................................... 4
Hypotheses .................................................. 4
Need for the Study............................................ 6
Delimitations ................................................. 7
Limitations ................................................. 8
Operational Definitions.................................... 10
Organization of Remainder of the Research Report...........12
TWO REVIEW OF THE LITERATURE AND RELATED RESEARCH.............. 13
Growth of Adult and Continuing Education................... 13
Part-time Faculty........................................... 14
Faculty Development Practices ................................ 17
Adult Development............................................ 18
Sources of Student Feedback................................. 20
Nature and Impact of Student Feedback...................... 22
Summary of the Chapter................................... .... 25
THREE METHODOLOGY................................................ 28
General Research Design...................................... 29
Instrumentation.............................................. 32
Pilot Study.............................................. .... 35
Present Study................................................ 36
Subjects ................................................... 36
Data Collection........................................ 37
Data Analysis ............................................. 42
Summary of the Chapter....................................... 46
FOUR FINDINGS.................................. ................. 47
Descriptive Analysis ....................................... 49
Test of Hypothesis 1....................................... 49
Test of Hypothesis 2 ....................................... 58
Test of Hypothesis 3....................................... 59
Test of Hypothesis 4....................................... 61
Test of Hypothesis 5....................................... 65
Test of Hypothesis 6....................................... 67
Post Hoc Analyses.......................................... 68
Summary of the Chapter..................................... 72
FIVE SUMMARY, CONCLUSIONS, IMPLICATIONS,
AND RECOMMENDATIONS........................................ 77
Summary.................................................... 77
Conclusions................................................ 85
Implications............................................... 87
Recommendations for Further Research....................... 93
REFERENCES..................................................... 96
APPENDIX
A INSTRUCTIONAL DEVELOPMENT EFFECTIVENESS ASSESSMENT
(IDEA) STANDARD FORM (MIDTERM).......................... 103
B INSTRUCTIONAL DEVELOPMENT EFFECTIVENESS
ASSESSMENT (IDEA) STANDARD FORM
(END-OF-TERM EVALUATION)................................ 106
C INSTRUCTIONAL DEVELOPMENT EFFECTIVENESS
ASSESSMENT SURVEY ITEM RELIABILITIES, STANDARD
DEVIATIONS, AND STANDARD ERRORS OF MEASUREMENT.......... 108
D INSTRUCTOR QUESTIONNAIRE................................ 110
E COVER MEMORANDUMS....................................... 112
F ORAL CONSULTATION FEEDBACK INSTRUCTIONS................. 115
G STUDY COVER LETTERS..................................... 118
H EVALUATION ADMINISTRATION INSTRUCTIONS.................. 121
I PART-TIME FACULTY SELF-RATING INSTRUCTION SHEET......... 123
J IDEA FACULTY INFORMATION CARDS INSTRUCTION
SHEET................................................... 129
K FINAL COURSE EVALUATION FOLLOW-UP LETTER................ 132
L COMPUTER PRINTED IDEA SUMMARY REPORT,
INTERPRETATION GUIDELINES, AND INSTRUCTOR SELF-
RATINGS TO STUDENT RATINGS COMPARISON................... 134
M LEARNING CHARACTERISTICS OF ADULT STUDENTS.............. 142
N COVER MEMO FOR MIDTERM EVALUATION SUMMARY
EXPLANATION (PRINTED FEEDBACK ONLY GROUP)............... 144
O ANOVA TABULAR SUMMARY OF RESULTS OF HYPOTHESIS 1......... 146
P ANOVA TABULAR SUMMARY OF RESULTS OF HYPOTHESIS 5......... 158
BIOGRAPHICAL SKETCH .......................................... 170
LIST OF TABLES
TABLE PAGE
1 General Research Design................ ..................... 30
2 Frequency and Percentage Distribution of
Selected Instructor Demographic Variables .................... 31
3 IDEA Course Learning Objective Items......................... 33
4 Results of the Comparisons of Means and Standard
Deviations of Full Feedback, Partial Feedback, and
No-Feedback Groups........................................... 51
5 Results of Paired Comparisons of the Pretest and
Posttest Means and Standard Deviations for Full
Feedback, Partial Feedback, and No-Feedback Groups........... 55
6 Results of the Comparisons of Regression Weights of
Instructor Overraters and Underraters in the Two
Feedback Conditions.......................................... 60
7 Results of the Comparisons of Change Rates in
Self-Ratings from Time A to Time B in Instructor
Self-Overraters and Underraters.............................. 62
8 Test for Differential Statistical Regression
Comparisons of Within-Group Variances at Times A and
B for Both Instructor Self-Overraters and Underraters........ 64
9 Results of the Comparisons of the Means and Standard
Deviations of Short-Term Course Versus Long-Term
Course Instructors in the Feedback Condition................. 66
10 Results of the Comparisons of the Means and Standard
Deviations of the Instructor Questionnaire
(Long-Term Versus Short-Term)................................ 68
11 Tukey's Test for Between-Group Differences on
Instructor Experience Variable............................... 70
12 Results of ANOVA Comparisons for Class Motivation Levels..... 72
Abstract of Dissertation Presented to the Graduate
School of the University of Florida in Partial
Fulfillment of the Requirements for the Degree of
Doctor of Philosophy
THE EFFECTS OF DIFFERENT FORMS OF STUDENT RATINGS FEEDBACK ON
SUBSEQUENT STUDENT RATINGS OF PART-TIME FACULTY
By
CHERYL MARIE BURBANO
May, 1987
Chairman: Albert B. Smith III
Major Department: Educational Leadership
The primary purpose of the study was to determine the effect different forms of student ratings feedback had on subsequent part-time faculty student ratings and instructor self-ratings. Part-time faculty opinions towards student ratings of instruction were also explored.
A quasi-experimental design was used with two experimental groups receiving different forms of midterm student ratings feedback, printed/oral consultation and printed only, and two control groups receiving no midterm student ratings feedback. The population consisted of 94 part-time faculty: 53 teaching short-term, noncredit courses and 41 teaching long-term, credit courses. Instructor self-ratings were also incorporated into the design.
The evaluation tool used to assess the two dependent variables, student ratings of instruction and instructor self-ratings, was the Instructional Development and Effectiveness Assessment (IDEA) standard form. Results indicated that, generally, feedback had a modest but nonsignificant effect on part-time faculty student ratings (p > .05). When compared to the printed-only feedback condition, the printed/oral consultation feedback condition appeared not to have affected subsequent student ratings of part-time faculty. Therefore, similarities between the two feedback conditions appeared greater than the differences. The most significant change (p < .05) in subsequent student ratings was found in part-time faculty teaching short-term, noncredit courses in the two feedback conditions. Instructor self-overrating and underrating by part-time faculty seemed to produce a higher awareness of teaching practices, resulting in a change in final instructor self-ratings. Post hoc analyses of instructor demographic variables relating to sex, educational credentials, and teaching experience found teaching experience to be a critical variable (p < .05) in student opinion of instruction. Analyses of class motivational level found students enrolled in short-term, noncredit courses more highly motivated at the .05 level of significance.
Implications are that student ratings of instruction could be utilized as an effective and appropriate evaluation tool for part-time faculty teaching noncredit courses. Instructor self-evaluation may be used to heighten part-time faculty awareness of teaching behavior. Recommendations for staff development applications are included.
CHAPTER ONE
INTRODUCTION
Adult part-time student participation in organized learning activities has increased significantly within the last two decades. Boaz's (1978) study indicated over five million part-time students enrolled in institutions of higher education nationally and a total of over 21 million persons participating in adult education courses or programs. In a 1980 study, which included interviews of a nationally representative sample of 2,000 Americans 25 years of age and older, Aslanian and Brickell (1980) showed that half of the adult learners interviewed had studied at least two different topics in the past year. Cross (1980) has cited this rapid growth of adult learners as a major trend in American higher education today and an indication that the United States has become a learning society.
The demand for adult education is often being met by new learning structures, one of which is noncredit, continuing education courses taught by part-time instructors who are often hired for their "expertise" in a particular field. According to Bender and Hammons (1972), part-timers bring "something new to the classroom--a breath of the real world, in the form of day-to-day experiences" (p. 21). Because of their various professional backgrounds, these instructors come to the classroom with experiences and expectations different from those of full-time instructors. Gaff (1975) indicated that these new learning structures often impose a different demand on learning and teaching.
Since noncredit courses imply a nongraded structure, part-time instructor teaching behavior and accountability to course content often are solely measured by student evaluations of instruction. Research indicates that students are capable of identifying and describing teaching behaviors which are conducive to their learning environment (Costin, Greenough, & Menges, 1971; Feldman, 1976b; Kulik & Kulik, 1974). Therefore, student opinion of instruction has become a widely accepted and utilized evaluation tool. Although the results of student evaluations are intended to help improve teaching, the results often are seen only by the instructor. The underlying assumptions are that part-time instructors value student opinions, can analyze the results, and will utilize these results to alter and improve their teaching behavior.
Although there are a variety of different learning needs and expectations of both adult students and part-time instructors, few instructional improvement opportunities exist for faculty within institutions that offer adult learners continuing education courses. Two recurring problems are the effectiveness of the evaluation of part-time faculty and the participation of part-time faculty in the staff development opportunities that do exist.
The expansion in the use of part-time faculty has precipitated the adoption of various forms of teaching evaluation methods developed for and utilized previously with full-time faculty. One of these evaluation methods is the widely utilized student ratings of instruction. Aleamoni (1978), Braunstein, Klein, and Pachla (1973), Centra (1973b), McKeachie and Linn (1975), and Stevens and Aleamoni (1985) demonstrated how the impact of student ratings of instruction can be increased with the use of feedback to effect certain instructional changes. Using a meta-analysis of student ratings feedback studies, Cohen (1980) indicated that, overall, instructors who received midterm student ratings feedback averaged .16 of a ratings point higher on end-of-term student ratings than instructors who did not receive feedback. Centra (1973b) combined augmented student feedback with instructor self-evaluation. He demonstrated that teachers who were "unrealistic" in observing their own behavior via self-evaluation of instruction, as compared to their students' opinion of the same instruction, tended to make changes in their instructional practices. Centra also found that more instructors "change if given information to help them interpret scores" (p. 297). In other words, student ratings led to changes only when teachers saw the results in such a way that increased their impact, such as counsel from a master teacher or extensive narrative evaluations from students (Centra, 1973b; McKeachie, Linn, & Mann, 1971; Stevens & Aleamoni, 1985). The theoretical justification behind Centra's (1973b) study was developed by Gage, Runkel, and Chatterjee (1963) and may be found in equilibrium theory. Equilibrium theorists assumed that when a condition of "imbalance" (Heider, 1958), "dissonance" (Festinger, 1957, 1964), or "asymmetry" (Newcomb, 1959) was created, psychological discomfort was experienced. This discomfort, in turn, "motivated" the person to reduce the dissonance or imbalance, achieve consonance (balance), and avoid situations which would increase dissonance.
Each of these studies utilized full-time faculty or teaching assistants. The improvement of teaching practices of part-time faculty is a significant problem facing American higher education today. Therefore, there is an apparent need to look more closely at the evaluation methods of part-time instruction. If the student ratings of instruction evaluation tool is an effective and productive one for part-time faculty, educators can utilize this information to change teaching behaviors. Such an investigation would help increase the usefulness of student instructional ratings for improvement purposes.
Statement of the Purpose
The primary purpose of this study was to determine the effect different forms of student ratings feedback had on subsequent part-time faculty student ratings and instructor self-ratings. Part-time faculty opinions towards student ratings of instruction were also explored.
Gage (1972) indicated that people acquire attitudes and behaviors through a process of learning, and that knowledge of results, or "feedback," is a fundamental condition of learning. Therefore, such knowledge of results can lead to changed behavior. Centra (1973a) showed that student feedback did effect some changes in student ratings over time. Since these findings suggest that feedback facilitates behavioral change, and behavioral change requires time, student ratings feedback should have a more positive effect on the instructional behavior of part-time faculty teaching longer-term courses than on that of those teaching shorter-term courses. Therefore, the effectiveness of student ratings feedback as an evaluation tool for part-time faculty is the focus of this investigation.
Hypotheses
This study tested the following directional and null hypotheses at the .05 level of significance:
1. Part-time faculty who receive printed summary feedback along with oral consultation about their midterm student ratings will receive higher end-of-term student ratings than part-time faculty who receive only printed summary feedback of midterm student ratings, and part-time faculty who receive only printed summary feedback of midterm student ratings will receive higher end-of-term student ratings than part-time faculty who do not receive midterm student ratings feedback.
2. Part-time faculty assigned to both the midterm feedback and no-feedback groups whose self-ratings of instruction are higher than their students' ratings of instruction at midterm will receive higher end-of-term student ratings than part-time faculty whose self-ratings of instruction are equal to or lower than their students' ratings of instruction at midterm.
3. Part-time faculty assigned to both the midterm feedback and no-feedback groups whose self-ratings of instruction are higher than their students' ratings of instruction at midterm will lower their end-of-term self-ratings of instruction.
4. Part-time faculty assigned to both the midterm feedback and no-feedback groups whose self-ratings of instruction are equal to or lower than their students' ratings of instruction at midterm will raise their end-of-term self-ratings of instruction.
5. Part-time faculty teaching long-term courses who receive midterm student ratings feedback will receive higher end-of-term student ratings than part-time faculty teaching short-term courses who receive midterm student ratings feedback.
6. There is no significant difference between the opinions of part-time faculty teaching long-term courses and part-time faculty teaching short-term courses towards student ratings of instruction.
Need for the Study
Although student ratings of instruction appear to be the dominant evaluation tool in noncredit, continuing education courses for adult students, their influence on teaching behavior is still debatable. Since their use is an adoption of full-time faculty evaluation methods, the appropriateness of their use with part-time faculty is in question. Past research produced voluminous studies focusing on the reliability and validity of student ratings as a measure of instruction. Recent researchers have directed increased attention toward enhancing the efficiency and effectiveness of student ratings in changing instructional behavior through the use of written and oral feedback.
Because of the widespread acceptance of and interest in the use of student evaluation of instruction data, several good ratings forms have been developed. One of these is the Instructional Development and Effectiveness Assessment (IDEA) form developed by Kansas State University in 1975. This instrument is based on the assumption that there are different styles of effective teaching which are dependent upon the goals of the course and the characteristics and motivational level of the students. Hoyt (1973) helped develop a questionnaire format which asks students in various classes to rate their progress on 10 different learning objectives, describe the instructor's behavior, and describe different aspects of the course. A separate form was developed to collect instructor ratings of the importance of each of the 10 learning objectives mentioned.
Aleamoni (1978), Braunstein et al. (1973), Centra (1973b), Cohen (1980), McKeachie (1969), and Tuckman and Oliver (1968) demonstrated how the impact of student ratings of instruction can be increased through the use of feedback to positively effect certain instructional changes. All of these studies concerned full-time faculty or teaching assistants teaching credit courses. Therefore, there is an apparent need to carry the research one step further in order to determine the usefulness and appropriateness of student ratings of instruction with part-time faculty.
Delimitations
The population for this study was the instructors listed in the Lifelong Learning Catalogue of Non-Credit Courses, University of South Florida, spring 1986, and the instructors contracted to teach courses in the Weekend College of the Educational Service Department at Saint Leo College, Saint Leo, Florida. These listings may or may not have included all instructors of noncredit courses, since an instructor's name may have been inadvertently omitted from the list, or an instructor may have been contracted to teach a course after the catalogue went to print. The list included nearly all instructors hired to teach the noncredit courses in the Division of Lifelong Learning, School of Extended Studies, University of South Florida, Tampa, Florida, and at Saint Leo College in Saint Leo, Florida.
Due to time, energy, and resource availability constraints, the following delimitations (self-imposed by the researcher) also existed in the study:
1. The researcher did not attempt to determine the amount of student ratings feedback desirable for instructors participating in the study.
2. The areas of feedback which should be included to effect an impact on teaching behavior were not determined in this study. The basis for change was limited to the 10 learning objective items and the overall evaluation score on the student evaluation of instruction form utilized for both self-evaluation and student evaluation, and to the type of feedback received.
3. No attempt was made to analyze any possible long-term effects student feedback might have had on teaching effectiveness or subsequent student ratings.
4. The optimal level of feedback specificity needed to effect instructional behavioral changes on either a short- or long-term basis was not a focus of this study.
5. Beyond the student ratings data, the researcher did not attempt to investigate the teaching ability of the instructors. Only the extent of change in student-reported teaching behavior due to self-rating of instruction and/or student ratings of instruction with printed feedback, and/or printed feedback with oral consultation, was investigated.
Limitations
1. The assumption that any change in instructors' teaching behaviors from preevaluation to postevaluation was the result of instructors participating in a self-rating, receiving printed student feedback, or receiving oral consultative feedback is debatable. Any changes in teaching behavior during the preevaluation/postevaluation interval of the study may have occurred as a result of moderating variables other than the preevaluation, instructor self-rating, and/or feedback variables.
2. The quasi-experimental design did not allow the students to be randomly assigned to treatment groups. The lack of random student assignment could have resulted in very different or biased groups for each instructor.
3. The generalization of the results of the study was limited to part-time faculty teaching noncredit courses at the University of South Florida and to part-time faculty teaching credit courses in the Weekend College of Saint Leo College.
4. The researcher did not attempt to analyze or categorize teaching effectiveness other than that indicated by student ratings of instruction. Other measures of teaching effectiveness could have included classroom observations, videotaping, peer evaluation, etc.
5. The assessment of the importance given to student evaluations by instructors was limited to that of a simple survey questionnaire.
6. Given time restraints and sample size limitations imposed by the researcher, the treatments in the study were limited to two different forms of feedback: printed data with oral consultation and printed data only. Other treatment groups could have included varying amounts of personal, consultative feedback, i.e., individual consultative feedback.
Operational Definitions
The following definitions were used in this study:
Comparative data feedback. A printed summary comparing item means of the instructor's self-evaluation data to students' item-mean feedback data as measured by the Instructional Development and Effectiveness Assessment standard form (Kansas State University, 1975).
Continuing education. An organized, noncredit learning opportunity for adult students.
Course coordinator. An employee of the Division of Lifelong Learning, School of Extended Studies, University of South Florida, responsible for coordinating and scheduling noncredit, continuing education courses for adults.
End-of-term measure. The final administration, to all students present during the last class meeting for both short-term and long-term courses, of the Instructional Development and Effectiveness Assessment standard form (Kansas State University, 1975).
Instructor Questionnaire. A short, five-item questionnaire assessing instructors' personal opinions of general student rating evaluation tools.
Instructor self-evaluation data. The self-assessment responses to items on the Instructional Development and Effectiveness Assessment (IDEA) rating form (Kansas State University, 1975) of instructors teaching both long-term and short-term courses.
Long-term course. A 14-week-long, college credit course for adults offered on weekends at various course locations in the Tampa Bay area through Saint Leo College.
Midterm measure. The first administration, to all students present during the third or fourth class meeting (for short-term courses) and the seventh class meeting (for long-term courses), of the Instructional Development and Effectiveness Assessment standard form (Kansas State University, 1975).
Noncredit course. An organized learning activity for adults, of shorter duration than credit courses, which earns no college credit upon completion.
Part-time faculty. An instructor contracted to teach one or two noncredit, continuing education courses for adults in the Division of Lifelong Learning, School of Extended Studies, University of South Florida, or one or two credit courses offered on weekends through the Weekend College of Saint Leo College, during the spring semester, 1986.
Printed summary feedback with oral consultation. The computer printout summary results of the student ratings response data to items on the Instructional Development and Effectiveness Assessment standard form (Kansas State University, 1975) and comparative data of student ratings to instructor self-ratings, as interpreted in an individual conference between the course coordinator and the course instructor, along with suggestions for improvement.
Printed summary feedback. The computer printout summary results of the student ratings response data to items on the Instructional Development and Effectiveness Assessment standard form (Kansas State University, 1975) and comparative data of student ratings to instructor self-ratings, with printed instructions for self-interpretation.
Short-term course. A six- to eight-week-long, noncredit, continuing education course for adults offered in the Division of Lifelong Learning, School of Extended Studies, University of South Florida.
Student ratings. The mean score rating of the 10 learning objectives and the standardized score of the overall evaluation in the Part I Evaluation Progress Ratings section of the IDEA standard form (Kansas State University, 1975).
Teaching behavior. Specific instructional behavior as measured by items on the Instructional Development and Effectiveness Assessment standard form (Kansas State University, 1975).
Organization of Remainder of the Research Report
The remainder of the research report is organized as follows. Chapter Two reviews the growth of adult and continuing education and the increased use of part-time faculty in general; additional research and literature pertinent to the investigation are also included in that chapter. Chapter Three contains the procedures utilized to test the feasibility of the evaluation instruments, the pilot study, as well as the research design and complete methodology used in the study. Chapter Four contains the findings and analysis of data. Chapter Five includes the summary of the findings and the conclusions drawn as a result of the study, as well as implications for practice and further research.
CHAPTER TWO
REVIEW OF THE LITERATURE AND RELATED RESEARCH
To acquire an accurate understanding of the relationship between teaching behavior and student feedback in higher education, an examination of relevant research is presented in the following areas: (a) growth of adult and continuing education, (b) part-time faculty and faculty development practices, (c) adult development, (d) sources of student feedback, (e) nature and impact of student feedback, and (f) summary and conclusions. It is acknowledged that student feedback used for other purposes, such as pedagogy and administrative decision making, also is directed at changing teaching behavior. This review, however, focuses on the use of student feedback for instructors.
Growth of Adult and Continuing Education
Social policy changes and attitudes during the late 1960s and early 1970s opened the doors of American institutions of higher education wider to an unprecedented number of nontraditional students. These open-admission policies, stated Cross (1980), resulted in a significant increase in the number of part-time adult learners, who "constitute the most rapidly growing segment in American education" (p. 627). A 30.8% growth in adult student participation in organized learning activities occurred between 1969 and 1975. This increase, according to Parson (1979), was more than double the increase in the adult population during the same time period. United States Bureau of the Census data indicated that in 1981, 21 million adults participated in courses or programs of one type or another, representing 13% of the total adult population in the United States. The majority of adult students enrolled in these learning activities were enrolled on a part-time basis.
Parttime Faculty
The literature reviewed in this and the next two sections falls into three categories: part-time faculty, faculty development practices, and adult development.
As institutional credit and noncredit enrollments expanded, demand for increased numbers of college faculty continued to be met through the increased employment of part-time, adjunct faculty at on-campus as well as various off-campus locations. This now-routine practice of using part-time faculty has changed the staffing patterns of institutions of higher education. Bender and Hammons (1972) reported in a national survey that part-time faculty composed 40% of the total 122,138 faculty members in lower division colleges in the United States. Although reasons for the phenomenal growth in the utilization of part-time instructors have varied according to individual institutions, some common factors cited for justification included availability, flexibility, expertise, salary cost differentials, and community relations (Friedlander, 1980; Hammons, 1981; Kuhns, 1963; Lombardi, 1976). Cruise, Furst, and Klimes (1980) indicated that, with typically no health insurance, pension, or other benefits, part-time teachers cost considerably less, no matter which unit of output is measured. Spofford (1979) cited U.S. Department of Labor estimates of nearly 80,000 new Ph.D.s joining the ranks of 100,000 available adjuncts as evidence of growing part-time faculty labor availability. Part-time faculty utilization allowed administrators much greater flexibility with class locations and time schedules (Friedlander, 1978; Kuhns, 1963; Lombardi, 1975). Such utilization also provided an important link with local governmental agencies, community organizations, and industry.
In a 1980 study based on the 1975, 1977, and 1978 Center for the Study of Community Colleges (CSCC) surveys, Friedlander explored teaching behavior discrepancies between full-time and part-time faculty. Significant differences were found between part-time and full-time instructors on most measures related to instructional practices. Specifically, part-timers tended to have less rigorous instructional requirements, less teaching experience, fewer teaching credentials, fewer out-of-class student contacts, and less emphasis on written assignments than did full-time faculty.
Cruise et al. (1980) evaluated the instructional effectiveness of part-timers by utilizing student, administrator, and teacher self-evaluation instruments. They concluded there were no statistical differences between the instructional behavior of full-timers and part-timers. Bender and Hammons (1972), utilizing student judgment as their criterion, found that "both the full-time and part-time faculties possessed the same strengths and weaknesses in their teaching" (p. 22). Guthrie-Morse (1981), however, raised the possibility of qualitative differences stemming from inadequate part-time supervision, evaluation, commitment, and experience.
As research on part-time faculty indicated qualitative differences, so too did research on part-time student characteristics and their learning needs. Aslanian and Brickell (1980) indicated that adults are motivated to learn following transitions from one status in life to another. Such transitions may be caused by many factors, including technological advancement, job retraining, mandated education for professional relicensure or recertification, and personal growth and development needs, as well as an increased acceptance of the lifelong learning concept. Queeney (1982) characterized these adult students as a fairly elite, well-educated group with learning expectations different from those of traditional students, while Wolfgang and Dowling (1981) demonstrated significant differences between traditional-age and older students in terms of motivation. Kuh and Ardiolo (1979) also studied the differences between traditional-age and older students. They reported that the new, extremely diverse student body had a more difficult time adjusting to new situations, had feelings of inadequacy and self-doubt, and preferred traditional teaching methodologies which younger students tended to reject.
The combination of changing student characteristics with different learning needs and expectations, new educational settings, and new instructional methods has required different teaching practices and new student relationships for part-time instructors. New learning structures such as noncredit, off-campus, short-term courses have also created new environments with new learning and teaching demands for part-time faculty. Faculties must, therefore, acquire new ideas, teaching techniques, and skills to meet these challenges; the implications for instructional staff development are imminent.
Faculty Development Practices
Effective teaching, according to Gaff (1975), is a complex set of interrelated attitudes, knowledge, skills, values, and motivations. Because of the increased diversity among students, the improvement of teaching behavior and student learning has necessitated faculty awareness of the complex interactions among students, institutions, and teachers.
Many institutions of higher learning have developed programs to cultivate and facilitate instructional improvement through faculty development in order to meet these new needs and challenges. An early study by Miller and Wilson (1963) indicated that the most commonly reported development activities made available to part-time faculty dealt with adjusting to the rules and regulations of the college, in other words, a simple "orientation session." Few opportunities were given to improve communications or teaching techniques, other than financial assistance from some departments for attendance at professional meetings and conferences. Very little has been done to enhance faculty development activities for part-time faculty. According to Moe (1977), in a study by the Instructional ACCtion Center, "part-time faculty development was a top concern" among the institutions surveyed. However, the most common in-service activity made available to part-time faculty concerned the rules and regulations of the institution. Smith (1980) found that some of the weakest staff development programs were those for part-time faculty.
Gaff (1975), in a study of faculty development practices among colleges and universities, revealed a common set of assumptions upon which the general goal of instructional improvement rests. He reported that three basic approaches were employed: faculty development, instructional development, and organizational development.
In the last decade more attention has been focused on part-time faculty development. Although several models have been developed, little effort has been devoted to providing part-time faculty with developmental assistance on teaching strategies (Leslie, Kellams, & Gunne, 1982). Cost to the institution, lack of time and interest or commitment on the part of the instructor, and great diversity among part-time faculty are reasons identified for minimal part-time faculty development efforts (Leslie et al., 1982; Weichenthal, Means, & Kozall, 1977).
The uniqueness of part-time instructors, as well as the individual's own development needs and motivations, should be considered and incorporated into the design of development activities (Emmet, 1981). Increased research has resulted in the development of new models based on the needs of part-time faculty (Black, 1981; Hammons, 1981; Jamerson, 1979; Moe, 1977; Pedras, 1984; Pierce & Miller, 1980). Staff development topics for part-time faculty have included institutional mission, adult students, instructional development and delivery, evaluation techniques, learning theory, and collective bargaining (Pedras, 1984).
Adult Development
In spite of the well-documented increase in the utilization of part-time instructors by continuing education institutions and the increase in staff development activities, considerably less research attention has been given to instructor motivation and adult development needs as related to changes in teaching behaviors. Previously, most human development studies had a child-centered orientation, with no integrated theory to encompass the total life span (Neugarten, 1968). In recent years, however, there has been an increase in research focused on adulthood. In an attempt to chart the progress of adult development, gerontological research has steadily expanded knowledge concerning middle-aged and older adults.
Of particular interest was the research of Chickering (1981), Erikson (1972), Gould (1979), Levinson (1978), and Sheehy (1974). They indicated that adults, like children, develop through several distinguishable stages or transitions. The pervasive similarity of this research to child development research seemed to indicate that, even though adults experience sequential transitions, the quest for stability rather than change is the rule for the remainder of adulthood. A developmental and holistic perspective on adulthood allowed an appreciation and awareness of the shifting mix of stability and change during the life cycle (Knox, 1977). Based on Levinson's (1978) life transitions, more recent researchers have identified life issues related to college faculty and administrators. Faculty careers were associated with age-stage development by Duncan and McCombs (1982).
Since basic learning theory held that learning new behavior is easier than eradicating earlier learned behavior and replacing it with another, the results of adult development research had significant implications for developing faculty and changing teaching behavior. If stability is preferred to change as people grow older, then a given amount of change should require more effort, and possibly more dynamic and powerful environments, as a behavior or trait stabilizes. According to Sanford (1973), for change to occur at all there needed to be the presence of "an appropriate and effective challenge, one that is sufficient to upset the equilibrium, but not so extreme as to induce regression; in other words, not too severe in an objective sense and not beyond the limits of the individual's adaptive capabilities" (p. 16). For faculty members, the increased call for accountability and the prevalent use of student-evaluation-of-instruction techniques may provide such a challenge.
Compared to the scope and number of studies conducted regarding the characteristics and motives of students and full-time instructors of credit activities, relatively few studies have been conducted concerning the teaching characteristics, motives, and effectiveness of part-time, noncredit, continuing education instructors. Although available data have been scarce, there were even fewer substantive data concerning part-time instructor teaching ability as measured by student ratings of instruction. Consequently, the suspicion many critics hold of part-time faculty members' teaching ability and effectiveness has remained high, and continued questioning of their commitment to higher education goals will prevail until more studies are conducted relative to the teaching performance of part-time faculty.
Sources of Student Feedback
As student opinions of instruction became one of the dominant evaluation tools in higher education, the credibility of students as a source of information on teaching behavior has been frequently and continually questioned by opponents of this form of evaluation (Hildebrand, Wilson, & Dienst, 1971; Page, 1974). The argument presented has been that students are not competent judges of instruction because of their lack of experience and knowledge, as well as the influence of factors extraneous to the quality of teaching, such as environmental stimuli, grade point average, and degree of interest in the subject matter.
Doyle (1975) indicated that whatever instructor data are gathered, from whatever source, need to be evaluated in terms of reliability, validity, generalizability, and utility. Centra (1973a), Costin et al. (1971), and Hildebrand et al. (1971) all found student ratings to be reliable in terms of consistency and stability over time. Reliability coefficients in the .80s and .90s have been obtained consistently for class sizes of 20 or more students (Gage, 1974). Validity has been defined as the extent to which ratings measure what they are intended to measure. Studies to establish the validity of student ratings have generally reported positive results (Aleamoni & Yimer, 1973; Costin et al., 1971; Gessner, 1973; Marsh, 1977; McKeachie, Linn, & Mann, 1971; Sullivan & Skanes, 1974). Shingles (1977) reported that procedures used to reduce the influence of extraneous factors do not seem to change instructors' concerns about what student ratings actually measure. On the other hand, Aubrecht (1979) suggested that if instructors accept and value knowing how satisfied their students are and how they perceive their teaching behavior, student ratings are a credible and reliable source of information to be utilized. As the use of student ratings as a data source for information on teaching effectiveness has increased, so has the body of research literature in this area. Although there remain concern and reservation over the use of student ratings data as appropriate and sufficient criteria for administrative decisions on teaching effectiveness (Menges, 1979), their use for instructional improvement purposes has become an accepted practice (Cohen, 1980).
Nature and Impact of Student Feedback
It generally has been accepted that students are capable of identifying and describing specific teaching behaviors which are conducive to their learning (Costin et al., 1971; Feldman, 1976a; Kulik & Kulik, 1974; Kulik & McKeachie, 1975). Many techniques have been used over the years to identify and describe effective teaching. Hildebrand et al. (1971) surveyed students and faculty at the University of California, Davis, asking students to identify the characteristics of their best and worst teachers. The most frequently mentioned characteristics were the ability to explain clearly, enthusiasm, interest in teaching, willingness to help students, friendliness towards students, knowledge of subject, and organization. Many of these same qualities have been found in similar studies by Crawford and Bradshaw (1968), Gaff and Wilson (1971), and Perry (1969). Studies such as these have provided the basis for the items institutions utilize in student ratings forms, an important research and evaluation tool of teaching behavior with predictable reliability and validity (Feldman, 1977).
If one can agree with Gage (1972) that people acquire their attitudes and behaviors through a process of learning, and that knowledge of results or "feedback" is a fundamental condition of learning, then it follows that such knowledge of results could lead to changed behavior. This viewpoint is based on the assumption that teachers value student opinion enough to alter their instructional practices when necessary, or at least their attitude towards change in instructional practices. This assumption bears examination, however.
Glassman and Rotem (1977) reviewed a large body of research on student feedback and concluded that although student ratings data may be valuable for individual instructors, overall these data have had minimal impact, at best, in changing teacher behavior. Thomas (1980) showed that graduate student feedback to professors seemingly did not change professors' teaching behavior, nor did it significantly change professors' self-perceptions of their teaching behavior. On the other hand, Aleamoni (1978) and Stevens and Aleamoni (1985) both found that instructors who received augmented student ratings feedback improved on certain teaching dimensions over a period of time. In a more recent critical review of the research concerning college teaching improvement, Levinson-Rose and Menges (1981) indicated that of 93 studies reviewed, 78% supported intervention strategies. After a meta-analysis of student ratings feedback, Cohen (1980) showed that, in general, instructors who received student ratings feedback averaged .16 of a ratings point higher on end-of-term ratings than instructors who received no feedback. The use of student ratings consultation feedback as a tool for instructional improvement was investigated by McKeachie et al. (1971). Their study consisted of two experimental faculty groups (consultation feedback of student ratings and printed feedback of student ratings) and one control faculty group (no feedback). Mid-semester and end-of-semester measures of student ratings of instruction were compared to study the effect of consultative student ratings feedback relative to printed student ratings feedback on teaching effectiveness. Their results indicated that consultation enhanced the positive effect of the student ratings feedback and resulted in subsequently higher student ratings of the participating faculty. Erikson and Erikson (1979) demonstrated significant effects in the use of a consultation form of augmented student ratings feedback in the faculty evaluation process. Based on the analysis of 22 studies, Cohen (1980) found that student ratings feedback had a modest but positive effect on improving college instruction as measured by subsequent student ratings. Specifically, a typical instructor who received augmented feedback performed at the end of the semester at the 74th percentile. The instructor who only received mid-semester student ratings performed at the 58th percentile; the instructor receiving no feedback performed at the 50th percentile. Aleamoni (1978) showed that instructors who received consultation improved their student ratings on at least two of five dimensions over a period of one semester to one year. His design, however, utilized nonequivalent control groups, which raised questions of threats to internal validity and possible regression to the mean (Levinson-Rose & Menges, 1981). Centra (1973b) combined augmented student feedback with instructor self-evaluation. He demonstrated that teachers who were "unrealistic" in observing their own teaching behavior, as revealed by comparing their self-evaluation of instruction with their students' opinion of the same instruction, tended to make changes in their instructional practices. An additional finding of his was that a wider variety of instructors "change if given information to help them interpret scores" (p. 297). In other words, student ratings led to changes only when teachers saw the results in a way that increased their impact, such as counsel from a master teacher or extensive narrative evaluations from students (Centra, 1973b; McKeachie et al., 1971; Stevens & Aleamoni, 1985).
Centra's (1973b) study was developed from research by Gage, Runkel, and Chatterjee (1963) and may be based in equilibrium theory. Equilibrium theorists assumed that when a condition of "imbalance" (Heider, 1958), "dissonance" (Festinger, 1957, 1964), or "asymmetry" (Newcomb, 1959) was created, psychological discomfort was experienced. This, in turn, "motivated" the person to reduce the dissonance or imbalance and achieve consonance (balance), and to avoid situations which would increase dissonance. When instructors receive student ratings feedback with no help in interpretation or no explanation, the ratings have provided little help in effecting change. Accordingly, it follows that student ratings feedback, or the consequences of behavior, must be augmented to indicate to instructors how closely their behavior approaches a given expectation, and then how much and in what direction they should change their behavior if they want to come closer to that expectation, restore a condition of "equilibrium," and utilize the data for improvement purposes (Cohen, 1980; Rotem, 1978).
Summary of the Chapter
Adult continuing education enrollment continues to increase at a rapid rate, with nontraditional adult students coming to the classroom with different learning levels, needs, and expectations. Much of the demand for this type of learning activity is being met through the use of part-time faculty at various teaching locations. While the use of part-time faculty predominates, faculty development opportunities for them to improve teaching effectiveness remain minimal. Accountability for the effectiveness of part-time instruction appears to be measured by the widely accepted student ratings of instruction, an evaluation tool adapted from full-time faculty evaluation systems. The major implication of the present review of the research and related literature is that the validity of student opinions of instruction as an instructional evaluation tool is debatable. However, despite the controversy and strong opposition, student ratings of instruction have been widely utilized and endorsed by students and educators alike. Feedback of student opinion of instruction has been shown to be a significantly effective tool in changing instructional behavior (Aleamoni, 1978; Centra, 1973b; Cohen, 1980; Levinson-Rose & Menges, 1981; McKeachie & Linn, 1975; Rotem, 1978). Therefore, if student feedback can be made more effective and provocative, a more significant change in teaching behavior, as measured by student ratings and other instruments or observational strategies, may be observed.
It appears that much attention has been given to the reliability and validity of student ratings as a method for evaluating instructional behavior. Based on the literature review, it is apparent that as part-time faculty enter the classroom they encounter nontraditional students in new educational settings with different learning expectations. Given the significant differences in the learning structures and teaching demands being made of part-time faculty as compared to their full-time counterparts, there has been a gap in the research on the impact of student ratings on the instruction of part-time faculty. While student ratings feedback research has indicated that, overall, feedback has had a significant effect on changing instructional behavior, each of these studies involved full-time faculty or teaching assistants. Because the teaching effectiveness of part-time faculty has continued to be measured predominantly with similar student ratings of instruction tools, it therefore seemed appropriate to investigate the effect of student ratings feedback on part-time faculty. After careful analysis of the results of related research studies, the researcher expected student ratings feedback to have a significant effect on the subsequent student ratings of part-time faculty.
CHAPTER THREE
METHODOLOGY
The primary purpose of this study was to determine the effect different forms of student ratings feedback had on subsequent part-time faculty student ratings and instructor self-ratings. This chapter contains the description of the general research design, instrumentation, population, pilot study, collection of data, and analysis of data used in this investigation.
In a study of the effectiveness of student ratings feedback on college instruction, Centra (1973b) utilized a three-group design which included an experimental faculty group (written feedback) and two control faculty groups (no feedback, and posttest only), with mid-semester and end-of-semester measures. On the basis of equilibrium theory, a major hypothesis of his was that student ratings would produce changes in teachers who had rated themselves more favorably than their students had rated them. Research results generally supported his hypothesis.
McKeachie and Linn (1975) investigated the use of student ratings consultation feedback for instructional improvement. Their research design also consisted of two experimental faculty groups (consultation feedback of student ratings and printed feedback of student ratings) and one control faculty group (no feedback). Results indicated an enhanced positive effect of student ratings feedback.
General Research Design
The research design employed in this investigation was a quasi-experimental design. This design allowed the experimenter to combine aspects of the two feedback experimental groups found in McKeachie and Linn's (1975) study and the two control groups found in Centra's (1973b) equilibrium hypothesis study.
The general research design used in this research had a total of four instructor groups (G1, G2, G3, and G4). Two of the four were experimental groups which received different forms of midterm student ratings feedback (G1 and G2). As indicated in Table 1, the other two groups were control groups which received no midterm student ratings feedback (G3 and G4). All four groups consisted of two levels: instructors of long-term courses and instructors of short-term courses. Long-term courses were 14-week credit courses taught by part-time faculty in the Weekend College Division at Saint Leo College. Short-term courses were noncredit courses of six to eight weeks in duration taught by part-time faculty in the Division of Lifelong Learning, School of Extended Studies at the University of South Florida. All instructors participating in the study were randomly assigned to one of the four groups. The independent variable was midterm student ratings feedback with two conditions: a full feedback condition (G1), consisting of printed feedback of the midterm IDEA standard form summary report of student ratings with oral consultation, and a partial feedback condition (G2), consisting of printed feedback only of the midterm IDEA standard form summary report of student ratings.
Table 1
General Research Design
GROUPS  TREATMENT                                    PRETEST  POSTTEST
G1      Full feedback                                   X        X
G2      Partial feedback                                X        X
G3      No feedback (pretest and posttest only)         X        X
G4      No feedback (posttest only)                              X
Another factor incorporated into the research design was instructor self-evaluation. In G1, G2, and G3, instructor self-evaluation data from the IDEA standard form were collected at midterm (pretest) and end of term (posttest). Instructor self-evaluation data in control G4 were collected at the end of term only (posttest). The collection of student ratings and instructor self-ratings of instruction at the end of the term in control G4 was to determine whether midterm ratings had a sensitizing effect on student raters or instructors.
Instructor demographic characteristics regarding sex, teaching experience, and educational degree obtained were collected by the investigator prior to the beginning of the study. These data were collected to investigate whether there was any relationship between instructor characteristics and student ratings of teaching behavior. The instructor demographic information is presented in Table 2. As indicated, there was a similar demographic distribution among the four treatment groups involved in the study. Teaching experience and educational degree obtained for one instructor in group 4 were not available to the researcher.
Table 2
Frequency and Percentage Distribution of Selected Instructor Demographic Variables
                     G1 (n=24)     G2 (n=23)     G3 (n=23)     G4 (n=24)
VARIABLES           FREQ.    %    FREQ.    %    FREQ.    %    FREQ.    %
Sex
  Male                14  (15%)     17  (18%)     12  (13%)     15  (16%)
  Female              10  (11%)      6   (6%)     11  (11%)      9  (10%)
Teaching Exp.*
  0-3 yrs.             5   (6%)      3   (3%)      2   (2%)      4   (4%)
  4-9 yrs.             8   (9%)     12  (13%)      6   (6%)      7   (8%)
  10-14 yrs.           4   (4%)      4   (4%)      6   (6%)      3   (3%)
  15+ yrs.             7   (8%)      4   (4%)      9  (10%)      9  (10%)
Educ. Degree*
  Bachelor's           2   (2%)      3   (3%)      3   (3%)      1   (1%)
  Master's            17  (19%)     13  (14%)     13  (14%)     14  (16%)
  Doctorate            5   (5%)      7   (7%)      7   (7%)      8   (9%)
*The category data for one instructor in G4 were not available.
Of the total number of 94 instructors participating in the study, 62% were male and 38% were female, for an approximate 2 to 1 ratio. Table 2 also shows the distribution of teaching experience. Fifteen percent claimed 0-3 years of teaching experience, 36% reported 4-9 years of experience, while 17% and 32% of the instructors claimed 10-14 years and over 15 years of experience, respectively. The percentage of instructors reporting a bachelor's degree was 9%, while an overwhelming majority of 63% held a master's degree, and 28% held a doctorate.
Instrumentation
Centra (1973a) demonstrated that a descriptive evaluation instrument allows the recipient to judge the responses to items received from the evaluators. Descriptive feedback has been shown by researchers to be less threatening to the recipient and more informative than judgmental feedback (Harari & Zedeck, 1973). Other researchers showed that items included on the instrument need to address aspects of behavior over which the recipient has control, specific behaviors rather than global descriptions or personality characteristics, low-inference rather than high-inference items, and items relevant to the specific teaching situation (Braunstein et al., 1973; Cook, 1979; Glassman, Killiat, & Gmelch, 1974).
The student ratings of instruction instrument was central to the study in measuring the perceptions of students regarding the teaching effectiveness of part-time instructors and the instructors' perceptions of their own teaching performance. In a critical review of college teaching improvement utilizing student feedback research, Levinson-Rose and Menges (1981) concluded that the clearest findings were from those studies which utilized discrepancies between instructors' self-ratings and ratings by students.
The instrument utilized in this study was the standard form of the Instructional Development and Effectiveness Assessment (IDEA) (Kansas State University, 1975). The IDEA evaluation form was first developed by Donald Hoyt in 1968 at Kansas State University (Hoyt, 1973). It became widely used in Kansas and has been available since 1975 to college instructors outside Kansas State University (Hoyt & Cashin, 1977).
The standard form of the IDEA survey instrument contains 39 items (see Appendixes A and B). The basis upon which the IDEA standard form was developed is that effective teaching is reflected by students' progress toward certain course goals. Therefore, 10 of the 39 items on the standard form are course learning objectives which the instructor is asked to rank order in terms of priority. As shown in Table 3, these 10 course learning objectives (items 21-30) are contained in Part I (Progress Ratings) of the IDEA summary report and form the basis for evaluating effective teaching. This section of the report is divided into three subcategories: subject matter mastery, development of general skills, and personal development.
Table 3
IDEA Course Learning Objective Items
ITEM CATEGORY
Part I. Evaluation (Progress Ratings)
Subject Matter Mastery
21. Factual Knowledge
22. Principles and Theories
24. Professional Skills and Viewpoints
25. Discipline's Methods
Development of General Skills
23. Thinking and Problem Solving
26. Creative Capacities
29. Effective Communication
Personal Development
27. Personal Responsibility
28. General Liberal Education
30. Implications for Self-Understanding
Overall Evaluation (Progress on Relevant Objectives)
Item reliabilities and standard errors of measurement for the IDEA form were obtained by Cashin and Perrin (1978) for three class sizes: small, medium, and large. Samples of 200 or more classes were obtained by taking all small classes with 10 raters, all medium classes with 20 raters, and all large classes with 39, 40, or 41 raters. Each class was split in half and Spearman-Brown corrections were applied (see Appendix C). An average item reliability of .69 was obtained for 10 raters, .81 for 20, and .89 for 40.
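The split-half correction referenced above follows, in standard form, the Spearman-Brown prophecy formula; the sketch below uses the conventional formula from classical test theory rather than anything reproduced from Cashin and Perrin's report:

```latex
% Spearman-Brown correction for a class split into two halves:
% r_{hh'} is the correlation between the ratings of the two halves,
% and r_{xx'} is the estimated reliability of the full class.
r_{xx'} = \frac{2\, r_{hh'}}{1 + r_{hh'}}
```

For example, a half-class correlation of about .53 would yield a corrected full-class reliability near .69, the average reported above for classes of 10 raters.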
In developing the IDEA standard rating form, the standard errors of measurement for the 39 items on the IDEA survey form were also calculated for three class sizes: small, medium, and large. The average standard error of measurement was .37 for small classes, .25 for medium classes, and .18 for large classes (Cashin & Perrin, 1978).
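The relation between these standard errors and the item reliabilities is the classical test theory definition; a sketch, assuming the conventional formula (the class-size standard deviations themselves are Cashin and Perrin's data and are not reproduced here):

```latex
% Standard error of measurement from classical test theory:
% s_x is the standard deviation of the item ratings and
% r_{xx'} the item reliability for the given class size.
SEM = s_x \sqrt{1 - r_{xx'}}
```

As reliability rises with class size (.69, .81, .89), the factor \(\sqrt{1 - r_{xx'}}\) shrinks, which is consistent with the decreasing standard errors reported (.37, .25, .18).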
Validity of the IDEA evaluation system has been established with a data pool of over 23,000 classes. From this pool, a data base of all courses having utilized the IDEA system and of courses similar in motivation level and class size has been established for evaluative comparison purposes. Correlation tables relating specific student progress variables to specific teacher activities (called "methods") are stored in a computer, and student ratings from the IDEA system are compared to three kinds of data: (a) direct measures of student learning, (b) ratings by others (than students), and (c) possible sources of bias (Aubrecht, 1979).
This instrument was modified by the researcher to complement the
specific data-gathering time frames; i.e., the midterm (pretest) IDEA
evaluation form items were stated in the present tense (see Appendix A),
while the final (posttest) IDEA evaluation form remained unchanged,
utilizing the past tense (see Appendix B). In addition, two extra
global items were included in the evaluation form to assess the
students' overall evaluation of the instructor and the course (see
Appendixes A and B). The additional two global items were not part of
the IDEA data pool and, therefore, did not affect the instrument's
overall validity and reliability ratings.
A five-item Instructor Questionnaire with a five-point Likert-type
response scale was developed by the researcher to assess instructor
opinions towards student ratings of instruction (see Appendix D). The
five items were used to find out how much an instructor felt the
institution valued student opinions (items 1-2) and how much he/she
valued student opinions of instruction (items 3-5). The questionnaire
was sent to all instructors participating in this study prior to the
starting date of their class, along with a cover memo (see Appendix E).
Instructor information regarding the sex, teaching experience, and
educational degree obtained of the study participants was gathered by
the researcher prior to random assignment to treatment groups.
Pilot Study
The investigator conducted a pilot study to determine the
suitability of the five-item Instructor Questionnaire and the
feasibility of using the Instructional Development and Effectiveness
Assessment (IDEA) standard form (Kansas State University, 1975).
The pilot sample consisted of a total of eight part-time faculty
members from the School of Extended Studies at the University of South
Florida teaching short-term courses. These faculty members were
randomly selected and not included in the full study. Each subject was
randomly assigned to one of four treatment groups, and the IDEA
evaluation form was administered during the fall semester, 1985. As a
result of the pilot administration, the IDEA standard form was modified
to reflect the specific data-gathering time frame. As indicated in
Appendixes A and B, the midterm evaluation form items were stated in the
present tense, while the end-of-term form was stated in the past tense.
The pilot study also allowed the researcher to measure the amount of
time necessary to receive the computerized printed summary reports from
the Center for Faculty Evaluation and Development, Kansas State
University. As a result, steps were taken in consultation with the
Center to minimize the time necessary to receive the reports.
A training program regarding uniform directions for IDEA
administration was developed by the researcher. Additionally, the oral
consultation format regarding score interpretation and comparative data
feedback utilized in the full study was also developed (see Appendix F).
Both the evaluation administration training and the oral consultation
format were piloted during this time with the research assistants.
Present Study
Subjects
The study sample consisted of a total of 94 part-time faculty from
two different postsecondary institutions in Florida. Forty-one of the
total participants were part-time instructors of long-term, credit
courses contracted to teach one or two Weekend College courses in the
Educational Services Division of Saint Leo College. Of the 43 part-time
instructors informed of the research study at Saint Leo, two declined to
participate, representing a 95% participation rate for this institution.
The other 53 participants were University of South Florida part-time
instructors listed in the Lifelong Learning Catalogue of Non-Credit
Courses, spring semester, 1986, who were teaching a course six to eight
weeks in duration (short-term courses). All part-time faculty in the
Division of Lifelong Learning (54 total) elected to participate in the
study. However, one instructor suffered ill health during the third
week of classes and was replaced by a substitute instructor for the
remainder of the course. His student ratings and self-ratings data were
deemed unusable. Therefore, a total of 164 evaluation of instruction
measurements and 164 instructor self-evaluation measurements were
analyzed in the study. The total number of students enrolled in
long-term courses was 649, and in short-term courses it was 897, for a
total of 1,546. The total number of students providing ratings in this
study was 1,017, or 66%. Each instructor was randomly assigned to one
of four treatment groups by utilizing a table of random numbers.
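The study drew its assignments from a table of random numbers; a seeded
pseudorandom generator is a modern stand-in for that step. The sketch
below is illustrative only (the seed and the round-robin balancing are
the writer's assumptions, not part of the original procedure):

```python
import random

def assign_to_groups(instructor_ids, n_groups=4, seed=1986):
    """Randomly assign instructors to treatment groups of near-equal size."""
    rng = random.Random(seed)
    ids = list(instructor_ids)
    rng.shuffle(ids)
    # deal the shuffled list round-robin so group sizes differ by at most one
    return {g + 1: ids[g::n_groups] for g in range(n_groups)}
```

With 94 instructors and four groups, this yields two groups of 24 and
two groups of 23.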
Data Collection
The research was conducted during the spring semester, 1986.
During this time, data consisting of instructor information, instructor
opinion of student ratings, student ratings, and instructor pretest and
posttest self-evaluation of instruction ratings were collected. All
data were collected by either the researcher or her two assistants. The
researcher, who was also employed as a program coordinator at the
University of South Florida's Division of Lifelong Learning, gave the
midterm student ratings feedback to all instructors assigned to this
treatment condition. The five-item Instructor Questionnaire that was
developed by the investigator was sent to each instructor participating
in this study as part of their spring semester course assignment
letters. It was accompanied by a memorandum from the Director of
Lifelong Learning (to instructors of short-term courses) and a letter
from the researcher (to instructors of long-term courses) indicating
instructions for completion and deadlines for return (see Appendix E).
In both cases the questionnaire's value to the institution was
emphasized as an important part of a research study on student ratings
in which the division or college was participating during the semester.
The questionnaire was completed approximately two weeks prior to the
beginning date of each course.
During the first week of classes, a detailed letter from the
Director of the Division of Lifelong Learning (for short-term course
instructors) and the Dean of Educational Services of Saint Leo College
(for instructors of long-term courses) was sent concerning the
division's or college's participation in an important research study
exploring what students are able to evaluate in the classroom and how
useful this information might be to the instructor (see Appendix G).
These letters emphasized the study's importance to the respective
institution and the desire for all instructors to participate. The
letters also explained that a new student evaluation form would be
utilized and that the course coordinator/researcher would be responsible
for the administration of the form. Most classes had this form
administered twice: once at midterm and again at end-of-term.
During the second and third week of classes, each instructor
teaching a short-term course assigned to feedback groups 1 and 2 and
no-feedback group 3 was contacted by the course coordinator/researcher
and informed of the midterm student ratings administration scheduled for
the following class meeting. The researcher or one of her assistants
administered and collected the IDEA standard form at the beginning of
the third or fourth class meeting from all students present. The
administrator read a statement stressing the importance of the
evaluation and its purpose. All students were encouraged to participate
and complete all 41 items on the evaluation to the best of their ability
(see Appendix H). The instructor was directed to go to another
classroom during the administration of the instrument and complete the
same IDEA standard form, rating his/her own teaching behavior as he/she
believed the students would rate him/her and themselves thus far in the
term (see Appendix I). The instructor was asked to prioritize
instructional objectives for his/her course utilizing the IDEA
instructor form prior to the midterm evaluation (see Appendix J).
The IDEA standard form was again administered at the beginning of
the sixth, seventh, or eighth class meeting to all students present in
all short-term courses participating in this study (both feedback and
no-feedback groups). The instructor was directed again to go to another
classroom during the administration of the instrument and asked to
complete the same IDEA standard form, rating his/her own teaching
behavior as he/she believed the students would rate him/her (see
Appendix I).
During the midterm (pretest) and final (posttest) administration
of the IDEA form, the students were asked to code their response cards
with the last four digits of their social security numbers. It was
explained that this was for research coding purposes only, and that the
instructor would not see individual response cards, but only the total
computer summary of the student response cards. The researcher utilized
this procedure to reduce the possibility that any observed instructional
change was due to possible attrition of low student raters occurring
between the midterm and final evaluation intervals (see Appendix H).
The same procedure was followed for each instructor teaching
long-term courses. However, the researcher contacted each instructor
and administered the midterm student ratings form at the beginning of
the seventh class meeting to all students present, and at the fourteenth
class meeting to all students present for the end-of-term ratings.
During each administration, the instructor was directed to go to another
classroom during the administration of the instrument and complete the
same form, rating his/her own teaching behavior as he/she believed the
students would rate him/her (see Appendix I). Again, the instructor was
asked to prioritize instructional objectives for his/her course
utilizing the IDEA instructor form prior to the midterm evaluation (see
Appendix J).
If fewer than 50% of the total enrolled students were present at
the last class meeting (sixth, seventh, or eighth class meeting for
short-term courses; fourteenth for long-term courses), an IDEA
evaluation form and a self-addressed envelope were sent to the absent
students to fill out and return. This was accompanied by a memo from
the coordinator/researcher (to short-term and long-term course students)
explaining the importance of the students' evaluation of the instructor
and course (see Appendix K).
Instructors teaching both short-term and long-term courses assigned
to the full feedback group, group 1 (printed feedback with oral
consultation), received an appointment date for a consultation with the
course coordinator/researcher within one week to 10 days after the
student ratings evaluation administration. During the approximately
30-minute oral consultation period, the computer-printed summary of the
students' ratings of instruction was interpreted and explained. The
individual instructor's self-rating in comparison to student feedback
also was presented, explained, and interpreted (see Appendix L).
Included in the IDEA summary report were such statistics as the mean for
each item, standard deviation, percentage of students in class
responding, and student ratings on course goals as compared to similar
classes (see Appendix L). Suggestions for possible improvement were
made where applicable, and a paper regarding effective teaching
techniques and the learning characteristics of adult students was given
to the instructor (see Appendix M).
Instructors assigned to the partial feedback group, group 2
(printed feedback only, no consultation), received (via mail), within
one week to 10 days after the midterm student ratings administration, a
computer summary of the IDEA evaluation, a comparison to the individual
instructor's self-rating on the same items (see Appendix L), and a paper
describing the learning characteristics of adult students (see Appendix
M), accompanied by a cover memo from the researcher (see Appendix N).
Standard interpretation instructions were provided on the back of the
IDEA summary form (see Appendix L). Included in the report were the
mean for each item, standard deviation, percentage of students in class
responding, and a comparison of student ratings data on course goals to
similar classes (see Appendix L).
Instructors assigned to no-feedback groups 3 and 4 did not receive
any printed or oral midterm or end-of-term student feedback summaries
prior to the end of this study.
Data Analysis
The dependent variables in the study were the posttest measures of
student ratings of instruction and the posttest measure of instructor
self-ratings. The independent variable was the type of student ratings
feedback received.
Data on the instructor's sex, teaching experience, and educational
degrees obtained were coded into the statistical analysis utilizing a
one-way analysis of variance to determine if there was any relationship
between these variables and student ratings of part-time instruction.
The IDEA standard form's basis for evaluating effective teaching is
dependent upon how students rate their progress on 10 course learning
objectives, each of which is ranked by the course instructor as either
essential, important, or of minor importance to his/her teaching. These
ratings are reflected on the first 10 course objective items (items 21
through 30) contained in Part I Evaluation (Progress Ratings) of the
IDEA summary report (see Appendix L). An additional overall evaluation
score is indicated at the bottom of Part I. Although no numbers are
reported, a verbal rating is given based upon the weighted average of a
set of T-scores. Objectives which the instructor considered essential
are given double weights; important objectives are given single weights;
all other objectives are given zero weights and are dropped from the
calculations. For purposes of analysis, the researcher assigned
numerical weights to the verbal T-scores on a 1-5 basis, with 1 being
the lowest score and 5 being the highest score.
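The weighting rule described above can be expressed directly. The
sketch below works on the researcher's 1-5 numerical scale rather than
the T-scores used in the actual IDEA computation, and the item numbers
and ratings in the example are hypothetical:

```python
def overall_progress_score(ratings, priorities):
    """Weighted mean of progress ratings: essential objectives count
    double, important objectives count once, minor objectives are
    dropped from the calculation."""
    weights = {"essential": 2.0, "important": 1.0, "minor": 0.0}
    total = sum(weights[priorities[item]] * r for item, r in ratings.items())
    weight_sum = sum(weights[priorities[item]] for item in ratings)
    return total / weight_sum

score = overall_progress_score(
    {21: 4.0, 22: 3.0, 27: 1.0},
    {21: "essential", 22: "important", 27: "minor"},
)  # (2 * 4.0 + 1 * 3.0) / 3: the minor objective does not enter at all
```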
The following directional and null hypotheses were investigated:
Hypothesis 1. Part-time faculty who receive printed summary
feedback along with oral consultation about their midterm student
ratings will receive higher end-of-term student ratings than part-time
faculty who receive only printed summary feedback of midterm student
ratings, and part-time faculty who receive only printed summary feedback
of midterm student ratings will receive higher end-of-term student
ratings than part-time faculty who do not receive midterm student
ratings feedback.
End-of-term student rating means of the 10 course learning
objective items of Part I Evaluation (Progress Ratings) and the overall
evaluation T-score of the IDEA standard form were examined utilizing a
one-way analysis of variance to test Hypothesis 1. Multiple pairwise
comparisons were performed using Fisher's Least Significant Difference
(LSD) procedure to identify specific group differences on each of the 10
course learning objective item means and the overall evaluation T-score,
if significant differences were indicated (.05 level) by the global
Anova. Since Hypothesis 1 was directional, a one-tailed test of
Fisher's LSD procedure was used. To test for sensitizing effects of
pretesting on both the posttest student ratings and the instructor
self-ratings, additional analyses of variance were conducted.
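The global F test driving these comparisons contrasts between-group and
within-group variability; a self-contained sketch follows (the group
data in the test are fabricated for illustration, and the Fisher LSD
step, which reuses the same within-group mean square, is omitted):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way analysis of variance on k groups,
    each group given as a list of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    # MSE = ss_within / (n - k) is the error term Fisher's LSD also uses
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The computed F is then compared against the critical value at the .05
level for (k - 1, n - k) degrees of freedom.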
Hypothesis 2. Part-time faculty assigned to both the midterm
feedback and no-feedback groups whose self-ratings of instruction are
higher than their students' ratings of instruction at midterm will
receive higher end-of-term student ratings than part-time faculty whose
self-ratings of instruction are equal to or lower than their students'
ratings of instruction at midterm.
A linear regression model was used on each of the 10 course
learning objective item means of the student ratings and instructor
self-ratings to calculate a regression weight. This statistical
analysis is similar to the regression formula found in Centra's (1973b)
study, which was based on equilibrium theory and which treated the
relationship between instructor self-ratings and subsequent student
ratings as linear. Centra showed that instructors who rated themselves
higher than their students rated them (overraters), and who therefore
showed a greater discrepancy (received feedback), also showed the
greater likelihood of improvement on subsequent student ratings. The
regression equation employed with the feedback and no-feedback groups
was R2 = a1 + b1R1 + c(I - R1), where R2 is the predicted end-of-term
rating, R1 is the midterm student rating, and I is the instructor
self-rating; thus, I - R1 is the difference between the instructor
self-rating and the midterm rating. A statistically supported
hypothesis would show a significant difference between the regression
weights for I - R1 (i.e., c) for the feedback and no-feedback groups,
with c for the feedback groups being positive and greater. Forty
regression weights were calculated and analyzed for differences
utilizing a t-statistic on the paired regression weights. A total of 20
t-statistics were then subsequently analyzed for differences utilizing a
one-tailed test.
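Under the model above, the weights for one item can be estimated by
ordinary least squares on the two predictors R1 and (I - R1). The
sketch below solves the normal equations in pure Python; it is a minimal
illustration of the fitting step only (the data in the test are
invented, not study values), and the subsequent t-comparison of weights
across groups is not shown:

```python
def fit_feedback_model(r1, self_rating, r2):
    """Least-squares fit of R2 = a + b*R1 + c*(I - R1).

    r1: midterm student rating means; self_rating: instructor
    self-ratings (I); r2: end-of-term student rating means.
    Returns (a, b, c); a larger positive c for the feedback groups
    would support Hypothesis 2.
    """
    rows = [[1.0, x, i - x] for x, i in zip(r1, self_rating)]
    # normal equations (X'X) beta = X'y, solved by Gauss-Jordan elimination
    xtx = [[sum(r[p] * r[q] for r in rows) for q in range(3)]
           for p in range(3)]
    xty = [sum(r[p] * y for r, y in zip(rows, r2)) for p in range(3)]
    aug = [list(row) + [b] for row, b in zip(xtx, xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(3):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return tuple(aug[i][3] / aug[i][i] for i in range(3))
```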
Hypothesis 3. Part-time faculty assigned to both the midterm
feedback and no-feedback groups whose self-ratings of instruction are
higher than their students' ratings of instruction at midterm will
lower their end-of-term self-ratings of instruction.
Hypothesis 4. Part-time faculty assigned to both the midterm
feedback and no-feedback groups whose self-ratings of instruction were
equal to or lower than their students' ratings of instruction at
midterm will raise their end-of-term self-ratings of instruction.
Hypotheses 3 and 4 investigated the effect instructor
self-overrating and self-underrating at midterm had on final instructor
self-ratings. A correlated t-test was utilized to test these hypotheses
on the 10 course learning objective item means. Since Hypotheses 3 and
4 were directional, a one-tailed test of the hypotheses was used.
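The correlated (paired) t statistic used here reduces to a one-sample
test on the midterm-to-final differences; a minimal sketch with
invented ratings follows:

```python
import math

def correlated_t(pretest, posttest):
    """Paired t statistic: the mean pre-to-post difference divided
    by its standard error."""
    diffs = [post - pre for pre, post in zip(pretest, posttest)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)
```

A positive t indicates that self-ratings rose from midterm to final;
for the one-tailed tests above, the statistic is compared to the .05
critical value from a t table with n - 1 degrees of freedom.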
Hypothesis 5. Part-time faculty teaching long-term courses who
receive midterm student ratings feedback will receive higher
end-of-term student ratings than part-time faculty teaching short-term
courses who receive midterm student ratings feedback.
End-of-term student ratings means of the 10 course learning
objective items of Part I Evaluation (Progress Ratings) and the overall
evaluation T-score of the IDEA standard form were examined utilizing a
one-way analysis of variance to test Hypothesis 5.
Hypothesis 6. There is no significant difference between the
opinions of part-time faculty teaching long-term courses and part-time
faculty teaching short-term courses towards student ratings of
instruction.
A two-tailed, related-samples t-statistical analysis was used to
test for differences between instructor opinions towards student ratings
of instruction in Hypothesis 6.
An alpha level of .05 was used in each case to test the six
hypotheses.
Finally, a one-way Anova was used to test for differences with the
three instructor demographic variables on the 10 course learning
objective items and the overall evaluation T-score at the .05 level.
Tukey's method was used to test at the .05 level for within-group
differences for each item mean that indicated a significant F value.
Summary of the Chapter
This chapter contains the methodological procedures used in this
investigation. The student ratings instrument was described, and
validity and reliability information was provided. The five-item
Instructor Questionnaire was also described. Finally, this chapter
contains a description of the subjects, data collection, and data
analysis techniques.
CHAPTER FOUR
FINDINGS
The primary purpose of this study was to determine the effect
different forms of student ratings feedback had on subsequent part-time
faculty student ratings and instructor self-ratings. Part-time faculty
opinions towards student ratings of instruction also were explored.
This chapter contains the results of the study. First, the results
pertaining to each of the six hypotheses under investigation are
described in terms of either treatment condition or instructor level
(short-term versus long-term courses) with reference to both dependent
variables: the posttest measures of student ratings of instruction and
the posttest measures of instructor self-ratings. Then the post hoc
analyses of the instructor demographic variables of the subjects and
student motivation levels are presented and discussed.
The study's research design combined aspects of two feedback
experimental groups as found in McKeachie and Linn's (1975) augmented
feedback study and two no-feedback control groups as found in Centra's
(1973b) equilibrium hypothesis study.
The researcher pilot tested the instructional assessment
instrument, the IDEA standard form with its 41 items (39 standard items
and 2 additional items), and the five-item Instructor Questionnaire on
eight instructors teaching short-term courses at the University of South
Florida during the fall semester, 1985.
The research involved 165 midterm and final evaluation of
instruction measurements and 165 midterm and final self-evaluation
measurements that were collected for the study. Of the total
population, two instructors from Saint Leo College elected not to
participate, while all of the part-time instructors from the Division of
Lifelong Learning at the University of South Florida participated in the
study, for a total of 95 instructors in the sample. One University of
South Florida instructor teaching a short-term course became critically
ill and was replaced with a substitute instructor to teach the remainder
of the course. His midterm evaluation measurements were, therefore,
deemed unusable. Therefore, the study sample ultimately consisted of 94
part-time instructors, representing 97% of the total population. This
resulted in a total of 164 evaluation of instruction measurements and
164 instructor self-evaluation measurements analyzed in the study.
Two different measurements were used for the dependent variables.
The first dependent variable was the posttest measure of student ratings
of instruction. The second dependent variable was the posttest measure
of instructor self-ratings. These two variables were determined by the
ratings given to 10 course learning objective items and the overall
evaluation T-score of the IDEA standard form by all students present at
the last class meeting in all courses involved in this study.
Instructor self-ratings of teaching behavior on the same 10 course
learning objective items were also collected at the last class meeting
from all instructors involved in the study. There was no overall
self-ratings score collected, since this score is a weighted T-score
generated by the computer, and the instructor self-ratings were not
computer scored. Therefore, two different dependent variables were
independently measured and analyzed: (a) student ratings of instruction
and (b) instructor self-ratings of instruction.
The original intent of this study was to test six hypotheses on the
total sample of 94 subjects. The purpose of the study was to determine
the effect different forms of student ratings feedback had on subsequent
part-time faculty student ratings and instructor self-ratings.
Additional considerations of interest developed, specifically,
differences between motivation levels of students enrolled in credit and
noncredit courses and instructor demographic variables. Because of
these concerns, post hoc analyses were performed to measure the
differences in motivation levels and the instructor demographic
variables. Class motivation levels of students enrolled in long-term,
credit and short-term, noncredit courses were determined by the mean
class score indicated by item 36 of the IDEA summary report (see
Appendix L).
Descriptive Analysis
A subprogram of the Statistical Analysis System, version 5.08 (SAS,
1984), computer program was used to calculate the Anova, Fisher's LSD
procedure, t-statistics, Tukey's comparisons, and the regression weights
of the linear regression model. A special computer program was written
to calculate the t-statistic of the regression weights in the linear
regression model utilized for Hypothesis 2.
Test of Hypothesis 1
Hypothesis 1. Part-time faculty who receive printed summary
feedback along with oral consultation about their midterm student
ratings will receive higher end-of-term student ratings than part-time
faculty who receive only printed summary feedback of midterm student
ratings, and part-time faculty who receive only printed summary feedback
of midterm student ratings will receive higher end-of-term student
ratings than part-time faculty who do not receive midterm student
ratings feedback.
The overall Anova performed on each of the 10 course learning
objective variables and the overall evaluation T-score revealed that
there were significant differences at the .05 level among the four sets
of population means on five course learning objective items. The
overall evaluation T-score showed no significant differences. The five
course learning objective items which showed significant differences
consisted of the four items related to the subject matter mastery
category and one item in the development of general skills category (see
Table 4). Of the items found significant at the .05 level in the
subject matter mastery learning objective category, item 21 (factual
knowledge) had an F value of 4.11, item 22 (principles and theories) had
an F value of 4.95, item 24 (professional skills and viewpoints) had an
F value of 3.66, and item 25 (discipline's methods) had an F value of
2.81. The other item showing a significant difference, with an F value
of 3.66, was item 26 (creative capacities), which is related to the
development of general skills category (see Appendix O for a summary of
all Anovas).
Further analysis, which utilized Fisher's LSD method for planned
pairwise comparisons, indicated additional significant differences
between treatment groups at the .05 level of significance utilizing a
one-tailed test.

Table 4
Analysis of Variance Results for End-of-Term Student Ratings on the 10
Course Learning Objective Items and the Overall Evaluation T-Score

Items significant at the .05 level: item 21, Factual Knowledge
(F = 4.11); item 22, Principles and Theories (F = 4.95); item 24,
Professional Skills and Viewpoints (F = 3.66); item 25, Discipline's
Methods (F = 2.81); item 26, Creative Capacities (F = 3.66).

Analysis of item 21, factual knowledge, revealed specific
significant differences at the .05 level between the means of both
feedback groups, the full feedback group (group 1) and the partial
feedback group (group 2), and the end-of-term (posttest) means of the
no-feedback posttest only group (group 4), and between the means of the
partial feedback group (group 2) and the no-feedback pretest and
posttest group (group 3). The LSD statistic for the full feedback group
(group 1) was .413, and the MSE for the paired comparisons was equal to
.273, while the degrees of freedom equaled 90. The LSD for the partial
feedback group (group 2) was .482, and for the no-feedback pretest and
posttest group (group 3) it was .274. There was no significant
difference indicated at the .05 level between the two no-feedback
groups, the no-feedback pretest and posttest group (group 3) and the
no-feedback posttest only group (group 4).
Further analysis of item 22 (principles and theories) showed the
end-of-term (posttest) item means of the full feedback group (group 1),
the partial feedback group (group 2), and the no-feedback pretest and
posttest group (group 3) to be significantly different at the .05 level
from the end-of-term (posttest) means of the no-feedback posttest only
group (group 4). The MSE for the paired comparisons was .226, and the
degrees of freedom was 90. The LSD for the full feedback group (group
1) was .363, for the partial feedback group (group 2) it was .518, and
for the no-feedback pretest and posttest group (group 3) it was .262.
Another paired item comparison significantly different at the .05 level
occurred between the no-feedback pretest and posttest group (group 3)
and the no-feedback posttest only group (group 4). The MSE for the
paired comparison was .226, the degrees of freedom was 90, and the LSD
for group 3 was .262.
Fisher's LSD procedure on item 24 (professional skills and
viewpoints) showed a significant difference at the .05 level between the
end-of-term (posttest) means of the full feedback group (group 1), the
partial feedback group (group 2), and the no-feedback pretest and
posttest group (group 3) and those of the no-feedback posttest only
group (group 4). The MSE for the paired comparisons was .391, and the
degrees of freedom equaled 90. The LSD value for the full feedback
group (group 1) was .546, the LSD statistic for the partial feedback
group (group 2) was .487, and the LSD for the no-feedback pretest and
posttest group (group 3) was .326.
The last item related to subject matter mastery which showed
significant differences among the four sets of population end-of-term
student ratings means was item 25, discipline's methods. The Fisher LSD
analysis on this item's group means indicated a significant difference
at the .05 level between the end-of-term (posttest) student ratings
means of the full feedback group (group 1) and the partial feedback
group (group 2) and the no-feedback posttest only group (group 4).
Additionally, the difference between the partial feedback group (group
2) and the no-feedback pretest and posttest group (group 3) was found
significant at the .05 level. The MSE for the paired comparisons
equaled .470, and the degrees of freedom was equal to 90. The LSD
statistic for the full feedback group (group 1) was .367. The LSD value
for the partial feedback group (group 2) compared to the no-feedback
pretest and posttest group (group 3) was .404, while it was .527 for the
no-feedback posttest only group (group 4) comparison.
Further analysis utilizing Fisher's LSD procedure on the
end-of-term (posttest) student ratings means of item 26 (creative
capacities) indicated significant differences between the full feedback
group (group 1), the partial feedback group (group 2), and the
no-feedback pretest and posttest group (group 3) and the no-feedback
posttest only group (group 4) at the .05 level of significance. The MSE
was .399, and the degrees of freedom was 90. The LSD statistic for the
full feedback group (group 1) was .446, for the partial feedback group
(group 2) it was .565, and for the no-feedback pretest and posttest
group (group 3) it was .438.
Fisher's LSD procedure results indicated no significant differences
on the 11 items between the no-feedback pretest and posttest group
(group 3) and the posttest only group (group 4). Therefore, the midterm
evaluation procedure did not seem to have had any sensitizing effects on
the end-of-term student ratings.
Because of the overall Anova and follow-up findings, an additional
analysis was used by the researcher to test for differences between the
means of the midterm (pretest) student ratings and the end-of-term
(posttest) student ratings within three of the four treatment groups
(groups 1, 2, and 3). A related-samples t-test of paired comparisons
was utilized to test for differences of pretest and posttest means
within the full feedback group (group 1), the partial feedback group
(group 2), and the no-feedback pretest and posttest group (group 3). A
one-tailed test for differences was used. There was no analysis for the
no-feedback posttest only group (group 4), since this group's treatment
consisted of an end-of-term (posttest) only assessment. The results of
the analysis are presented in Table 5.

Table 5
Related-Samples t-Tests of Midterm (Pretest) and End-of-Term (Posttest)
Student Ratings Means Within Treatment Groups 1, 2, and 3

As indicated in Table 5, the full feedback group (group 1) showed
significant differences at the .05 level between the midterm (pretest)
student ratings means as compared to the end-of-term (posttest) student
ratings means on 6 of the 10 course
learning objective items. These significant differences were found on
item 21 (factual knowledge), with a t-statistic of 2.34; item 23 (problem solving), with a t-statistic of 2.65; item 26 (creative capacities), with a t-statistic of 1.84; and items 29 (effective communication), 27 (personal responsibility), and 30 (implications for self-understanding), with t-statistics of 2.60, 2.54, and 2.67, respectively. Each of the other four learning objective items, although not significant at the .05 level, revealed an increased end-of-term (posttest) student ratings mean. The overall evaluation T-score was found nonsignificant at the .05 level.
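The related samples t-test used here pairs each instructor's midterm mean with the same instructor's end-of-term mean. A minimal sketch with invented ratings data (scipy's two-tailed p-value is halved for the one-tailed test):

```python
import numpy as np
from scipy import stats

# Hypothetical midterm (pretest) and end-of-term (posttest) mean ratings
# for one learning objective item across eight instructors in a group.
pretest = np.array([3.6, 3.9, 4.1, 3.4, 3.8, 4.0, 3.7, 3.5])
posttest = np.array([3.9, 4.0, 4.3, 3.6, 4.1, 4.2, 3.8, 3.9])

t_stat, p_two = stats.ttest_rel(posttest, pretest)
# One-tailed test in the predicted direction (posttest > pretest)
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
```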
The partial feedback group (group 2) indicated a significant difference at the .05 level of significance between the midterm (pretest) student ratings means and the end-of-term (posttest) student ratings means on three course learning objective items. Item 25 (discipline's methods) had a t-statistic of 1.74, item 28 (general liberal education) had a t-statistic of 1.85, and item 30 (implications for self-understanding) had a t-statistic of 3.18. The other seven course learning objective items, although not significantly different at the .05 level, indicated higher end-of-term (posttest) student ratings means than midterm (pretest) student ratings means. The overall evaluation T-score was found significant with a t-statistic of 1.74.
The t-test evaluation of the no-feedback pretest and posttest group (group 3) showed no significant differences between the student ratings means from midterm (pretest) to end-of-term (posttest) on any of the 10 course learning objective items or the overall evaluation T-score. Seven of the learning objective items and the overall evaluation T-score indicated higher or increased end-of-term (posttest) student ratings means, while three of the learning objective items indicated lower or decreased end-of-term (posttest) means.
The initial Anova performed by the researcher on the 10 course learning objective items and the overall evaluation T-score indicated only 5 out of 11 student ratings means significantly different at the .05 level of significance. Closer investigation of the between-group differences of these significant items utilized Fisher's LSD method for follow-up pairwise comparisons. Only 15 course learning objective item means out of a possible 30 paired item means were found significant at the .05 level using a one-tailed test. Eleven of the 15 significant item means were those items related to subject matter mastery and belonging to the two feedback groups (groups 1 and 2). Even though 10 of the 15 pairwise comparisons of student ratings mean differences between treatment groups were found significant at the .05 level when the full feedback group (group 1) and the partial feedback group (group 2) were each compared to the no-feedback posttest only group (group 4), this finding was judged not to be strong enough to support Hypothesis 1.
Nor could the fact that 55 out of a possible 66 paired mean comparisons were in the predicted direction of the hypothesis be used as a basis for partially accepting Hypothesis 1. In fact, when the partial feedback group (group 2) was compared to the full feedback treatment group (group 1), 8 out of 11 individual item mean comparisons were not in the predicted direction of the hypothesis, suggesting that midterm partial feedback (printed feedback) may be at least as effective as midterm full feedback (printed feedback with oral consultation). The t-test for related samples suggested that the greatest change in student ratings of instruction from midterm (pretest) to final evaluation (posttest) occurred within the treatment group which received the full feedback (printed feedback with oral consultation). Comparison of midterm (pretest) to final evaluation (posttest) means showed 6 out of 11 individual item means significantly different at the .05 level of significance. The remaining five item means for this group were in the predicted direction, revealing increased final (posttest) student ratings means.
Based on the above-mentioned findings, the results did not appear to be strong enough to even partially support the hypothesis at the .05 level of significance. Therefore, Hypothesis 1 was rejected. However, the full feedback and partial feedback groups (groups 1 and 2) reported consistently higher subsequent student ratings than the two no-feedback groups (groups 3 and 4).
Test for Hypothesis 2
Hypothesis 2. Part-time faculty assigned to both the midterm feedback and no-feedback groups whose self-ratings of instruction are higher than their students' ratings of instruction at midterm will receive higher end-of-term student ratings than part-time faculty whose self-ratings are equal to or lower than their students' ratings of instruction at midterm.
The statistical analysis utilized a linear regression model to calculate the regression weights of the 10 learning objective variables of instructor self-overrating and instructor self-underrating in the two feedback conditions and the no-feedback condition. No overall evaluation T-score was generated by instructor self-ratings, and thus it was not included in this analysis. Differences of the 40 regression weights between the feedback and no-feedback groups were tested using a correlated t-statistic and a one-tailed test of significance. Table 6 indicates the results of the t-tests performed to compare the regression weights for instructor self-underraters and overraters in the feedback and no-feedback conditions. Results for instructors who rated themselves less favorably at midterm than their students rated them (self-underraters) indicated 3 items (items 27, 28, and 30) of the 10 items analyzed were significant at the .05 level for instructors who received feedback. Generally, the beta weights for instructor self-underraters showed fairly random differences. Two of the 10 learning objective items (items 27 and 30) of the instructor self-overraters in the feedback condition were significant at the .05 level. Four other items (items 24, 26, 29, and 28) showed beta weights of the instructor overraters in the feedback condition positive and greater than the beta weights of those in the no-feedback condition. Generally, overrating did not appear to be a valid predictor of end-of-term student ratings for part-time faculty, resulting in the rejection of Hypothesis 2.
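The regression weights in Table 6 are slopes relating a midterm predictor to end-of-term student ratings, estimated separately within each condition. The estimation step can be sketched as follows; the predictor, outcome, and values are all invented for illustration:

```python
import numpy as np

# Hypothetical predictor: magnitude of midterm self-overrating
# (instructor self-rating minus student rating); outcome: end-of-term
# student rating for one learning objective item.
overrating = np.array([0.8, 0.5, 1.1, 0.3, 0.9, 0.6])
end_of_term = np.array([3.6, 3.8, 3.5, 4.0, 3.7, 3.9])

# np.polyfit with degree 1 returns the slope first, then the intercept.
beta, intercept = np.polyfit(overrating, end_of_term, 1)
# `beta` plays the role of a regression weight; weights from the feedback
# and no-feedback conditions would then be compared with a t-statistic.
```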
Test of Hypothesis 3
Hypothesis 3. Part-time faculty assigned to both the midterm feedback and no-feedback groups whose self-ratings of instruction are higher than their students' ratings of instruction at midterm will lower their end-of-term self-ratings of instruction.
A correlated t-test analysis was performed to test the hypothesis at the .05 level of significance on the difference scores of instructors
Table 6

Results of the Comparisons of Regression Weights of Instructor Overraters and Underraters in the Two Feedback Conditions

UNDERRATERS OVERRATERS
ITEM FEEDBACK BETA WEIGHT t BETA WEIGHT t
Subject Matter Mastery
21. Fact. Knowl. yes .038 .585 .018 .203
no .019 .044
22. Princ. & Theo. yes .160 .143 .268 .996
no .274 .332
24. Prof. Skills & yes .009 .068 .564 .739
Viewpoints no .002 .148
25. Disc. Methods yes .063 .756 .421 .120
no .008 .512
Development of General Skills
23. Think. & Prob. yes .062 .027 .263 .746
Solving no .058 .064
26. Creative Cap. yes .083 1.188 .383 .649
no .280 .112
29. Effec. Comm. yes .015 .424 .097 .992
no .209 .229
Personal Development
27. Pers. Respon. yes .095 2.059* .090 1.796*
no .294 .712
28. Gen. Lib. Ed. yes .094 1.729* .206 1.071
no .298 .080
30. Implic. for yes .101 1.733* .136 1.861*
Self Undersg. no .205 .128
* p<.05, one-tailed test for differences between beta weights
who self-overrated on the 10 learning objective item variables. Again, since instructor self-ratings were not computer scored, an overall evaluation T-score was not generated and thus was not available for use in this analysis. As indicated in Table 7, results showed a significant difference on 9 of the 10 learning objective item means in the predicted direction of the hypothesis (one-tailed test of significance). Although the other item mean was not found significant at the .05 level, it is worth noting that the direction of the difference was in the predicted direction of the hypothesis, leading the researcher to conclude that the hypothesis was generally supported. Therefore, part-time faculty who rated themselves higher than their students rated them at midterm (pretest) tended to lower their end-of-term (posttest) self-ratings. This suggested that instructor self-rating may be related to increased instructor awareness of teaching behavior.
Test of Hypothesis 4
Hypothesis 4. Part-time faculty assigned to both the feedback and no-feedback groups whose self-ratings of instruction are equal to or lower than their students' ratings of instruction at midterm will raise their end-of-term self-ratings of instruction.
A correlated t-test was used to test for differences (one-tailed test of significance). No overall evaluation T-score was generated in self-rating, and therefore it was not available for this analysis. Six out of the 10 learning objective item differences tested indicated significance at the .05 level (see Table 7). Although not found significant, it is important to note that the directions of the other four item mean differences were also in the direction predicted by the hypothesis. In
Table 7

Results of the Comparisons of Change Rates in Self-Ratings from Time A to Time B in Instructor Self-Overraters and Underraters

OVERRATERS UNDERRATERS
ITEM t-statistic DIR. t-statistic DIR.
Subject Matter Mastery
21. Fact. Knowl. 2.96* A>B 2.82* B>A
22. Princ. & Theo. 3.74* A>B 2.94* B>A
24. Prof. Skills & 1.70* A>B 1.61 B>A
Viewpoints
25. Disc. Methods 1.75* A>B 1.91* B>A
Development of General Skills
23. Think. & Prob. 2.40* A>B 1.09 B>A
Solving
26. Creative Cap. .49 A>B .43 B>A
29. Effec. Comm. 2.86* A>B 2.14* B>A
Personal Development
27. Pers. Respon. 2.37* A>B 2.44* B>A
28. Gen. Lib. Ed. 3.53* A>B 1.60 B>A
30. Implic. for 3.07* A>B 3.41* B>A
Self Undersg.
* p<.05, one-tailed test for differences between instructor pretest and posttest self-ratings
other words, part-time faculty who tended to rate themselves equal to or lower than their students rated them at midterm (pretest) rated themselves higher on the end-of-term (posttest) ratings. Therefore, the findings suggest that the hypothesis was partially supported.
Because of the findings of Hypotheses 3 and 4, the investigator continued the analysis to test whether the significant differences found were a result of increased instructor awareness of his/her own teaching behaviors or representative of a statistical phenomenon known as regression to the mean. It is common in some research areas to find differences in growth rates, particularly when respondents self-select to receive treatment. The point in this test is that many patterns of selection-maturation should lead to increased within-group variances at the end-of-term (posttest) when compared to the midterm (pretest).
Table 8 shows the results of the Anova test, which indicated that significant differences of within-group variances at time B (posttest) were found for both the overrater and underrater instructor groups. Items 22, 25, and 23 were found significantly different at the .05 level with respective F values of 1.90, 3.41, and 2.02 for instructor self-overraters. Item 26 had an F value of 1.63, which was significant at the .05 level for the self-underrater group of instructors. Therefore, a similar statistical regression for both groups was assumed, and it was concluded that no differential statistical regression occurred. In other words, the differences found with instructors who tended to self-overrate or underrate appeared to be a result of increased instructor awareness of his/her own teaching behavior rather than a statistical regression to the mean. This resulted in changed final (posttest) instructor self-ratings which would more closely approximate the "reality" of student ratings of instruction.

Table 8

Test for Differential Statistical Regression Comparisons of Within Group Variances at Times A and B for Both Instructor Self-Overraters and Underraters

ITEM S2A S2B DIR. F-VALUE df
Overraters
21. Fact. Knowl. .229 .360 B>A 1.57 33,35
22. Princ. & Theo. .254 .483 B>A 1.90* 31,33
24. Prof. Skills & .307 1.188 B>A 3.88 25,24
Viewpoints
25. Disc. Methods .332 1.133 B>A 3.41* 20,20
23. Think. & Prob. .346 .699 B>A 2.02* 30,30
Solving
26. Creative Cap. .499 .519 B>A 1.04 25,26
29. Effec. Comm. .677 1.256 B>A 1.85 28,28
27. Pers. Respon. .377 .677 B>A 1.79 28,28
28. Gen. Lib. Ed. .658 .862 B>A 1.31 20,21
30. Implic. for .534 .745 B>A 1.40 27,27
Self Undersg.
Underraters
21. Fact. Knowl. .733 .850 B>A 1.16 57,31
22. Princ. & Theo. .529 .752 B>A 1.42 60,36
24. Prof. Skills & .895 1.172 B>A 1.31 67,42
Viewpoints
25. Disc. Methods 1.137 1.795 B>A 1.58 69,43
23. Think. & Prob. .644 .997 B>A 1.55 58,34
Solving
26. Creative Cap. .937 1.531 B>A 1.63* 66,41
29. Effec. Comm. 1.644 1.972 B>A 1.19 61,37
27. Pers. Respon. .903 1.187 B>A 1.31 61,38
28. Gen. Lib. Ed. 1.401 1.768 B>A 1.26 70,46
30. Implic. for 1.125 1.655 B>A 1.47 63,39
Self Undersg.
* p<.05
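The differential-regression check above reduces to an F ratio of posttest to pretest within-group variances. It can be sketched using the item 22 values for self-overraters from Table 8; the assignment of the listed degrees of freedom to numerator and denominator is an assumption:

```python
from scipy import stats

# Pretest (time A) and posttest (time B) within-group variances for
# item 22, self-overraters, as reported in Table 8.
s2_a, s2_b = 0.254, 0.483
f_ratio = s2_b / s2_a              # larger variance over smaller

# Assumed df pairing: 33 for the numerator, 31 for the denominator.
f_crit = stats.f.ppf(0.95, 33, 31)
significant = f_ratio > f_crit     # one-tailed test at the .05 level
```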
Test for Hypothesis 5
Hypothesis 5. Part-time faculty teaching long-term courses who receive midterm student ratings feedback will receive higher end-of-term student ratings than part-time faculty teaching short-term courses who receive midterm student ratings feedback.
Results of the analyses of variance performed on the combined 10 learning objective item means and the overall evaluation T-score of the two feedback groups (groups 1 and 2) between instructor levels (long-term courses versus short-term courses) revealed no significant differences at the .05 level in the direction predicted by the hypothesis (see Table 9). As indicated in the Anova summary table (see Appendix P), there were fairly large F values for four item means that would have been significant if the prediction of the hypothesis had been in the opposite direction.
Results of the test of Hypothesis 5 seemed to indicate that the hypothesis was not supported in the predicted direction, and therefore, the hypothesis was rejected. Nine of the 11 item means showed higher end-of-term student ratings for part-time faculty teaching short-term, noncredit courses than for part-time faculty teaching long-term, credit courses. Findings suggest that the feedback condition may have had significant effects on subsequent student ratings of part-time instructors of short-term rather than long-term courses.
Table 9

Results of the Comparisons of the Means and Standard Deviations of Short-Term Course Versus Long-Term Course Instructors in the Feedback Condition

SHORT-TERM (n=53) LONG-TERM (n=41)
ITEM MEAN S.D. MEAN S.D. DIR.
Subject Matter Mastery
21. Fact. Knowl. 4.10 .352 3.74 .564 S>L
22. Princ. & Theo. 4.06 .346 3.82 .478 S>L
24. Prof. Skills & 3.95 .571 3.65 .520 S>L
Viewpoints
25. Disc. Methods 3.65 .720 3.69 .590 L>S
Development of General Skills
23. Think. & Prob. 4.05 .431 3.73 .581 S>L
Solving
26. Creative Cap. 3.76 .662 3.60 .602 S>L
29. Effec. Comm. 3.42 .955 3.40 .618 S>L
Personal Development
27. Pers. Respon. 3.89 .652 3.66 .499 S>L
28. Gen. Lib. Ed. 3.38 .831 3.13 .704 S>L
30. Implic. for 3.83 .604 3.50 .530 S>L
Self Undersg.
Overall score 3.19 1.003 3.60 1.191 L>S
Test of Hypothesis 6
Hypothesis 6. There is no significant difference between the opinions of part-time faculty teaching long-term courses and part-time faculty teaching short-term courses towards student ratings of instruction.
A comparison of the responses of the instructors of short-term courses and instructors of long-term courses to the five items on the Instructor Questionnaire was made utilizing a t-statistic. The questionnaire, developed by the researcher, was designed to assess the opinions of part-time instructors towards student ratings of instruction. The first two items, 1 and 2, were more informational in nature and measured how much importance instructors felt their respective institution currently placed, and should place, on student ratings evaluations. The other three items, 3, 4, and 5, asked the instructor how important he/she personally felt student ratings of instruction were, and therefore measured instructor opinions towards student ratings.
As presented in Table 10, the results indicated statistically significant differences at the .05 level (two-tailed test) on two of the three items measuring instructor opinions (items 3 and 5). Item 3, with a t-statistic of 2.20, asked the instructor how much weight student evaluation of instruction should be given in his/her overall evaluation as an instructor. The direction of the difference showed that instructors of short-term courses weighted this item more heavily than did instructors of long-term courses. Item 5 of the questionnaire asked instructors how much importance he/she placed on knowing how satisfied students were with his/her teaching and had a t-statistic of 2.76.
Table 10

Results of the Comparisons of the Means and Standard Deviations of the Instructor Questionnaire (Long-Term Versus Short-Term)

SHORT-TERM (n=53) LONG-TERM (n=41)
ITEM MEAN S.D. MEAN S.D. DIR.
1. Import. to Org. 3.57 1.140 3.48 1.040 S>L
2. Import. Should 3.84 .857 3.81 .943 S>L
3. Weight in Eval. 3.96 .790 3.54 1.030 S>L*
4. Usefulness of 3.96 .970 3.60 1.110 S>L
Results
5. Import. to 4.81 .445 4.43 .860 S>L*
Teaching
* p<.05, two-tailed test
Again, instructors of short-term courses rated this item of more importance than instructors of long-term courses. The other item means (items 1, 2, and 4), although not significant, indicated instructors of short-term courses rated these items higher than did the instructors of long-term courses. Based on the above-mentioned findings, the results seemed to suggest that Hypothesis 6 be rejected. Therefore, part-time instructors teaching short-term, noncredit courses seemed to have valued student ratings of instruction more than instructors of long-term, credit courses.
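The questionnaire comparison can be reproduced from the summary statistics alone with a two-sample t-test. A sketch for item 3, using the means and standard deviations in Table 10 (the pooled equal-variance form of the test is assumed):

```python
from scipy import stats

# Item 3 of the Instructor Questionnaire, from Table 10.
t_stat, p_two = stats.ttest_ind_from_stats(
    mean1=3.96, std1=0.790, nobs1=53,   # short-term course instructors
    mean2=3.54, std2=1.030, nobs2=41)   # long-term course instructors
# t_stat comes out near the reported value of 2.20 (two-tailed test)
```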
Post Hoc Analyses
A one-way analysis of variance was used on the end-of-term (posttest) student ratings of the 10 learning objective item means and the overall evaluation T-score of the IDEA standard form to analyze the instructor demographic information regarding sex, educational degree obtained, and teaching experience. In the analysis of the instructor sex variable, only one item was found significant at the .05 level. On this item (item 27, personal responsibility), female instructors received a higher end-of-term rating than did male instructors. The remaining 10 item variables analyzed showed no significant differences between the student ratings of male and female instructors. The one-way Anova performed on the 10 learning objective item means and the overall evaluation T-score regarding the instructor variable of educational degree obtained revealed no significant differences among the three educational groups: bachelor's degree, master's degree, and doctorate. The Anova applied to the end-of-term (posttest) student ratings of the 10 learning objective item means and the overall evaluation T-score regarding teaching experience showed significant differences at the .05 level on 6 of the 10 learning objective item means (items 21, 22, 23, 26, 27, and 30). The overall T-score was also significant at the .05 level (see Table 11). These significant differences were nearly equally distributed among the three category groups of items (subject matter mastery, general skills, and personal development). Follow-up analysis for those items indicating a significant F value was conducted utilizing Tukey's method to test for between-group differences. Overall, Tukey's method revealed that students rated instructors with the least amount of experience, group 1 (0-3 years), lower on each of the six significant item variables than the other three teaching experience groups [group 2 (4-9 years), group 3 (10-14 years), and group 4 (15+ years)] at the .05
Table 11
Tukey's Test for Between Group Differences on Instructor
Experience Variable
ITEM F-VALUE df TUKEY'S DIFF.
BY GROUP
Subject Matter Mastery
21. Fact. Know. 6.51** 3,89 1< 2,3,4
22. Princ. & Theo. 8.71** 3,89 1< 2,3,4
24. Prof. Skills & 2.45 3,89
Viewpts.
25. Disc. Methods 1.33 3,89
Development of General Skills
23. Think. & Prob. 7.68** 3,89 1< 2,3,4
Solving
26. Creative Cap. 4.14** 3,89 1,2< 3,4
29. Effec. Comm. 1.73 3,89
Personal Development
27. Pers. Respon. 5.51** 3,89 1,3< 2,4
28. Gen. Lib. Ed. 2.30 3,89
30. Implic. for 4.95** 3,89 1< 2,3,4
Self Undersg.
Overall 2.78* 3,89 1< 2,3,4
* p<.05; ** p<.01
level. Therefore, comparisons of part-time instructors across levels on sex and educational degree attainment variables showed similarities on student ratings that were greater than the differences. The one major exception was the instructor variable related to teaching experience. The part-time instructors who reported three years of experience or less were rated significantly lower than instructors with more experience. Tukey's results indicated that the more teaching experience the instructor claimed, the higher the end-of-term student ratings. Therefore, of the three instructor demographic variables investigated, amount of teaching experience seemed to be significantly related to student ratings.
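Tukey's method declares two group means different when they differ by more than a threshold built from the studentized range distribution and the ANOVA's pooled error variance. A sketch with invented, equal-sized experience groups:

```python
import numpy as np
from scipy import stats

# Hypothetical end-of-term ratings by teaching-experience group
groups = [np.array([3.2, 3.4, 3.1, 3.3]),   # group 1: 0-3 years
          np.array([3.8, 4.0, 3.9, 3.7]),   # group 2: 4-9 years
          np.array([3.9, 3.8, 4.1, 4.0]),   # group 3: 10-14 years
          np.array([4.0, 3.9, 4.2, 3.8])]   # group 4: 15+ years

k = len(groups)
n_total = sum(len(g) for g in groups)
# Pooled within-group mean square (the ANOVA's MSE)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)
q_crit = stats.studentized_range.ppf(0.95, k, n_total - k)
hsd = q_crit * np.sqrt(mse / len(groups[0]))  # equal group sizes assumed
# Pairs of group means differing by more than `hsd` are significant;
# here group 1 falls below the others, mirroring the 1 < 2,3,4 pattern.
```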
One other additional analysis performed by the researcher was related to differences between student motivation levels (short-term course students versus long-term course students). The IDEA standard form computer-generated summary report utilized in the study reported a class motivation level score from I-V, with I the lowest and V the highest. This class motivation level score was based on the mean student ratings on item 36 of the evaluation form. This item asked students to rate themselves on a 1-5 basis on how strong their desire was to take this particular course. As the researcher consulted with the instructors assigned to the full feedback group, she noticed that there appeared to be distinct differences between the class motivation levels printed on the summary reports of instructors teaching short-term courses and instructors teaching long-term courses. Therefore, a post hoc analysis was conducted to test this perceived difference. A one-way analysis of variance was performed to test for differences between motivation levels reported by students enrolled in short-term courses and students enrolled in long-term courses at the midterm (pretest) and end-of-term (posttest) evaluation points. As indicated in Table 12, findings clearly showed a significant difference between motivation levels of students who enroll in short-term, noncredit courses and students who enroll in long-term, credit courses at the .05 level, with the short-term course students being more highly motivated.
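The motivation comparison is a one-way ANOVA with two groups (in which case the F value equals the square of the corresponding t statistic). A sketch with invented item 36 self-ratings:

```python
import numpy as np
from scipy import stats

# Hypothetical item 36 motivation self-ratings on the 1-5 scale
short_term = np.array([5, 4, 5, 4, 4, 5, 4, 5])
long_term = np.array([2, 3, 2, 3, 2, 2, 3, 3])

f_value, p_value = stats.f_oneway(short_term, long_term)
```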
Summary of the Chapter
This chapter contains the findings of the study. A total of 94 instructors participated in the study and received student ratings of instruction and self-ratings of instruction summary reports, which were analyzed. In addition, 94 Instructor Questionnaires developed by the investigator were completed and analyzed. Each of the six original hypotheses was tested. Of the six hypotheses analyzed, five were tested in reference to one of the two dependent variables in the study, either end-of-term student ratings of instruction or end-of-term instructor self-ratings, utilizing the 10 learning objective means of the IDEA standard form and the overall evaluation T-score. The sixth
Table 12

Results of Anova Comparisons for Class Motivation Levels

RATINGS SHORT-TERM MEAN LONG-TERM MEAN F-VALUE df
Midterm 4.30 2.51 53.80** 1,68
End-of-term 4.42 2.45 80.52** 1,92
* p<.05; ** p<.01
hypothesis utilized the last
three items of the five-item Instructor Questionnaire developed by the investigator to analyze for differences in instructor attitudes toward student ratings of instruction. Post hoc analyses utilized additional Anovas to investigate the differences between the instructor demographic variables and student motivation levels.
An Anova was utilized to test for overall differences on the 11 student rating item means between the feedback and no-feedback treatment groups. Follow-up analyses were done utilizing Fisher's LSD method for multiple comparisons of each treatment group to test for specific differences between the significantly different item means and to test any sensitizing effects the pretest may have had on posttest student ratings and instructor self-ratings. Hypothesis 1 analyzed the effect the full feedback condition (printed feedback with oral consultation) and the partial feedback condition (printed feedback only) had on subsequent student ratings of part-time faculty instruction. The overall Anova indicated only 5 out of 11 item means significantly different at the .05 level. Of the 30 paired item analyses performed, only 15 learning objective item means were found significant at the .05 level, resulting in a rejection of Hypothesis 1. A related samples t-test of paired comparisons testing for differences between the midterm and end-of-term ratings means on each of the 11 items within the two feedback and one no-feedback groups showed the greatest amount of change in student ratings occurred within the full feedback group. Although the hypothesis was rejected, overall results indicated that midterm student ratings feedback seemed to have had a moderate, but nonsignificant, effect on subsequent student ratings of part-time faculty. There were no significant differences found between the two feedback conditions, indicating that the midterm printed feedback condition may be at least as effective as midterm printed feedback with oral consultation on subsequent student ratings of part-time faculty.
Hypothesis 2 utilized a regression analysis model to test whether instructor self-overrating was a valid predictor of end-of-term student ratings for part-time faculty. Of the 20 t-tests comparing differences in regression weights of instructor self-overraters and underraters, none of the overraters' regression weights were significant at the .05 level. The results indicated that overrating was not a valid predictor of end-of-term student ratings, and the hypothesis was, therefore, rejected.
Hypotheses 3 and 4 tested the relationship of instructor self-overrating and self-underrating to end-of-term self-ratings. Of the 20 correlated t-tests performed to test each hypothesis on the 10 learning objective item variables, 15 were significant at the .05 level: 9 item means for the self-overraters and 6 item means for the self-underraters, thus supporting Hypothesis 3 and partially supporting Hypothesis 4.
Hypothesis 5 tested for differences of the end-of-term student ratings between instructors of short-term courses and instructors of long-term courses in the feedback condition. Of the 10 learning objective items and the overall evaluation T-score tested, no significant differences were found at the .05 level in the direction predicted by the hypothesis. Nine of the 11 item variable means, however, showed higher end-of-term student ratings for part-time faculty teaching short-term, noncredit courses than for part-time faculty teaching long-term, credit courses. Findings suggested that the hypothesis be rejected, since results seemed to indicate that midterm ratings feedback may have had moderate effects on the subsequent student ratings of part-time instructors teaching short-term, noncredit courses.
Hypothesis 6 utilized a t-statistic to test for differences between instructor levels (short-term versus long-term) on instructor ratings of five items on the Instructor Questionnaire developed by the researcher to assess instructors' opinions towards student ratings of instruction. Of the five t-statistic analyses performed, results indicated that differences in responses to two of the three items measuring instructor opinions towards student ratings were significant at the .05 level, with the direction of the difference indicating a higher ratings mean for instructors of short-term courses. Although found nonsignificant at the .05 level, the other three items also showed a higher ratings mean from instructors of short-term courses. Therefore, the null hypothesis was rejected, indicating that part-time instructors teaching short-term courses seemed to value student ratings of instruction more than part-time instructors teaching long-term courses.
Post hoc analyses of variance were utilized to test for differences on instructor demographic characteristics. Results indicated that of the three instructor demographic variables tested, only one, teaching experience, seemed to be significantly related to the dependent variable, student ratings of instruction, at the .05 level of significance.
Additional analyses of variance were performed to test for differences in class motivation levels, comparing the means of item 36 of the IDEA standard form. Findings clearly indicated students of short-term, noncredit courses were significantly more motivated than students of long-term, credit courses at the .05 level.
Considering all variables of interest tested, the similarities between the two feedback treatment groups were greater than the differences across instructor levels, indicating that printed midterm feedback may be at least as effective as printed midterm feedback with oral consultation on subsequent student ratings of part-time faculty. Paired within-group comparisons of the feedback and no-feedback conditions, however, showed greater significant differences in the change of subsequent student ratings for the full feedback group (printed feedback with oral consultation) than for the partial feedback group (printed feedback only). Further investigation between instructor levels (short-term course versus long-term course) indicated an effect opposite to that predicted. Results, therefore, seemed to suggest that, overall, midterm feedback had a greater, moderately significant (.05 level) effect on the subsequent end-of-term student ratings of part-time instructors of short-term, noncredit courses than on the subsequent student ratings of instructors of long-term, credit courses.
CHAPTER FIVE
SUMMARY, CONCLUSIONS, IMPLICATIONS, AND RECOMMENDATIONS
This chapter presents a summary of the study, conclusions, implications, and recommendations related to the findings.
Summary
The primary purpose of this study was to determine the effect different forms of student ratings feedback had on subsequent part-time faculty student ratings and instructor self-ratings. Part-time faculty opinions towards student ratings of instruction also were explored. The Instructional Development and Effectiveness Assessment (IDEA) standard form (Kansas State University, 1975) was the evaluation tool utilized in the study to assess the two dependent variables, student ratings of instruction and instructor self-ratings. The researcher developed the Instructor Questionnaire (see Appendix D) to measure the degree to which an instructor felt the institution valued student ratings of instruction and how much an instructor personally valued student ratings of instruction. Prior to the beginning of the study, the researcher gathered instructor demographic information regarding sex, educational degree obtained, and years of teaching experience.
A pilot study consisting of eight part-time instructors teaching short-term courses was conducted to assess the feasibility of the two instruments. Student ratings of instruction data, instructor self-ratings data, and response data from the Instructor Questionnaire were gathered and verified. The final study included a total of 94 part-time faculty from two different postsecondary institutions. Fifty-three of the participants were instructors of short-term, noncredit courses from the Division of Lifelong Learning of the School of Extended Studies at the University of South Florida, and 41 were part-time instructors of long-term, credit courses from the Educational Services Department of the Weekend College Division at Saint Leo College.
A one-way analysis of variance was used to test Hypothesis 1 on the 10 learning objective item means and the overall evaluation T-score. Follow-up analysis for those items indicating a significant F value was conducted utilizing Fisher's LSD method for multiple comparisons to further analyze for between-treatment-group differences. In total, 30 Fisher's LSD tests for paired group differences were performed to test Hypothesis 1. In each of these analyses, the dependent variable was student ratings of instruction. Further ANOVA analyses to test for the sensitizing effect of the pretest treatment on the posttest (end-of-term) student ratings of instruction and instructor self-ratings showed no significant effects at the .05 level.
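As a rough illustration of the procedure just described, a one-way ANOVA followed by Fisher's LSD paired comparisons can be sketched as follows. The group labels, ratings values, and tabled critical value below are hypothetical, chosen only to show the mechanics; they are not the dissertation's IDEA data.

```python
import math
from itertools import combinations

# Hypothetical item-mean ratings for four treatment groups (illustrative
# data only -- not the dissertation's actual IDEA scores).
groups = {
    "full_feedback":    [4.2, 4.5, 4.1, 4.4, 4.3],
    "partial_feedback": [4.0, 4.3, 4.1, 4.2, 4.0],
    "no_feedback":      [3.7, 3.9, 3.8, 4.0, 3.6],
    "control":          [3.8, 3.6, 3.9, 3.7, 3.8],
}

# --- One-way ANOVA ---
all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)
k, N = len(groups), len(all_scores)

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)
df_between, df_within = k - 1, N - k
ms_between = ss_between / df_between
ms_within = ss_within / df_within        # MSE, reused by Fisher's LSD below
F = ms_between / ms_within
print(f"F({df_between}, {df_within}) = {F:.2f}")

# --- Fisher's LSD follow-up (run only when the ANOVA F is significant) ---
t_crit = 2.120   # two-tailed t critical value, alpha = .05, df = 16 (tabled)
for (na, a), (nb, b) in combinations(groups.items(), 2):
    lsd = t_crit * math.sqrt(ms_within * (1 / len(a) + 1 / len(b)))
    diff = abs(sum(a) / len(a) - sum(b) / len(b))
    flag = "significant" if diff > lsd else "n.s."
    print(f"{na} vs {nb}: |diff| = {diff:.2f}, LSD = {lsd:.2f} -> {flag}")
```

Note that the LSD comparisons protect the .05 level only conditionally on a significant omnibus F, which is why the follow-up tests were restricted to items with significant F values.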
Hypothesis 1. Part-time faculty who receive printed summary feedback along with oral consultation about their mid-term student ratings will receive higher end-of-term student ratings than part-time faculty who receive only printed summary feedback of mid-term student ratings, and part-time faculty who receive only printed summary feedback of mid-term student ratings will receive higher end-of-term student ratings than part-time faculty who do not receive mid-term student ratings feedback.
Hypothesis 1 was not supported at the .05 level of significance in a general analysis of variance test for differences on each of the 11 item means. However, Fisher's LSD paired comparison tests for specific differences between treatment groups on the 5 items that had significant differences in the general ANOVA showed that 15 of a possible 30 comparisons were significant at the .05 level. This finding resulted in a rejection of Hypothesis 1, although inspection of the means of the 10 learning objective items and the overall evaluation T-score for the two feedback conditions showed that they were in the predicted direction, indicating a positive, but nonsignificant, effect for the instructors who received the full feedback (printed feedback with oral consultation) and partial feedback (printed feedback only). The instructors who received full feedback of mid-term student ratings (printed feedback with oral consultation) did not appear to receive significantly higher end-of-term student ratings than instructors who received partial feedback (printed feedback only). Generally, part-time instructors who received printed feedback of mid-term student ratings tended to receive higher end-of-term student ratings on more learning objective item variables and the overall evaluation T-score than did instructors who received mid-term printed feedback with oral consultation or who received no mid-term printed feedback of student ratings. This finding suggests that mid-term printed feedback may be at least as effective in improving subsequent student ratings of part-time faculty as mid-term printed feedback with oral consultation. Within-group paired comparisons, however, seemed to indicate that the full feedback group (printed feedback with oral consultation) showed significant change on more student ratings item means from the mid-term evaluation to the end-of-term evaluation than did the other three treatment groups.
Hypotheses 2, 3, and 4 tested the influence of instructor self-ratings on the 10 learning objective item means. No overall evaluation score was generated from instructor self-ratings, so none was analyzed.
Hypothesis 2. Part-time faculty assigned to both the mid-term feedback and no-feedback groups whose self-ratings of instruction are higher than their students' ratings of instruction at mid-term will receive higher end-of-term student ratings than part-time faculty whose self-ratings of instruction are equal to or lower than their students' ratings of instruction at mid-term.
Hypothesis 2 utilized a linear regression statistical model to calculate regression weights and a t-statistic to test the hypothesis. Based on the equilibrium theory used in Centra's (1973b) study, this hypothesis tested instructor self-overrating in the feedback condition as a predictor of end-of-term student ratings. Regression weights were calculated for the 10 learning objective item means in two instructor categories, those who self-overrated and those who self-underrated, for both the feedback and no-feedback conditions. The 40 regression weights were then tested with a one-tailed t-statistic to analyze for differences between the feedback and no-feedback conditions for each instructor category: underraters and overraters. The t-test results indicated that only 5 of the 20 regression weight differences analyzed were significant at the .05 level. The results, therefore, indicated that the hypothesis was not supported.
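The mechanics of comparing regression weights across the two conditions can be sketched roughly as follows. The data and group sizes are hypothetical and purely illustrative, and the test for a difference between two independent slopes shown here is one common textbook form that may differ in detail from the dissertation's exact analysis.

```python
import math

def slope_and_se(x, y):
    """Ordinary least-squares slope (regression weight) and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se_b = math.sqrt(sse / (n - 2) / sxx)
    return b, se_b, n

# Hypothetical mid-term and end-of-term item means for self-overrating
# instructors in each condition (illustrative values only).
feedback_mid = [3.9, 4.1, 3.8, 4.3, 4.0, 4.2]
feedback_end = [4.1, 4.4, 4.0, 4.6, 4.2, 4.5]
nofeed_mid   = [3.8, 4.2, 3.9, 4.1, 4.0, 4.3]
nofeed_end   = [3.9, 4.2, 3.8, 4.2, 4.0, 4.3]

b1, se1, n1 = slope_and_se(feedback_mid, feedback_end)
b2, se2, n2 = slope_and_se(nofeed_mid, nofeed_end)

# One-tailed t-statistic for a difference between the two regression weights
t = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
df = n1 + n2 - 4
print(f"b_feedback = {b1:.2f}, b_nofeedback = {b2:.2f}, t({df}) = {t:.2f}")
```

Repeating such a comparison for each of the 10 item means in each instructor category yields the 20 tested differences reported above.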
Hypothesis 3. Part-time faculty assigned to both the mid-term feedback and no-feedback groups whose self-ratings of instruction are higher than their students' ratings of instruction will change their end-of-term self-ratings of instruction.
A one-tailed correlated t-test statistic was performed to test for differences on the 10 learning objective items. Results indicated that the differences in all 10 item means were in the direction predicted by the hypothesis and that 9 of the 10 differences in item means were significant at the .05 level.
This seemed to indicate that instructors who rated their own instruction higher than their students rated it, and who received student ratings feedback, tended to lower their subsequent self-ratings scores. The hypothesis was, therefore, generally supported.
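The one-tailed correlated (paired) t-test used here can be sketched as follows. The self-rating values are hypothetical and illustrative only; the critical value is the tabled one-tailed .05 value for df = 7.

```python
import math
import statistics

# Hypothetical mid-term and end-of-term self-ratings for instructors who
# self-overrated at mid-term (illustrative values only).
midterm_self = [4.6, 4.8, 4.5, 4.9, 4.7, 4.4, 4.8, 4.6]
endterm_self = [4.3, 4.5, 4.4, 4.6, 4.4, 4.3, 4.5, 4.4]

# Paired differences: positive values mean the self-rating was lowered
diffs = [m - e for m, e in zip(midterm_self, endterm_self)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)

# Correlated (paired) t-statistic; one-tailed, predicting mean_d > 0
t = mean_d / (sd_d / math.sqrt(n))
df = n - 1
t_crit = 1.895   # one-tailed critical value, alpha = .05, df = 7 (tabled)
print(f"t({df}) = {t:.2f}; " + ("significant" if t > t_crit else "n.s."))
```

The same computation is repeated once per learning objective item, which is how the 9-of-10 significant results above were obtained.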
Hypothesis 4. Part-time faculty assigned to both the mid-term feedback and no-feedback groups whose self-ratings of instruction are equal to or lower than their students' ratings of instruction at mid-term will raise their end-of-term self-ratings of instruction.
One-tailed correlated t-test results indicated that the differences in only 6 of the 10 learning objective items were significant at the .05 level, resulting in a partial acceptance of Hypothesis 4. Since the instructors in Hypothesis 3 who tended to overrate themselves at mid-term significantly changed their end-of-term self-ratings by lowering their scores on 9 of the 10 learning objective item variables, and the instructors in Hypothesis 4 who tended to underrate themselves at mid-term significantly changed their end-of-term self-ratings by increasing their scores on 6 of the 10 learning objective item variables, an additional t-test statistic was performed to test for the statistical phenomenon known as regression to the mean. Results of this analysis indicated no differential statistical regression. Therefore, results confirmed support for Hypothesis 3 and partial support for Hypothesis 4.
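Regression to the mean, the artifact the additional t-test was designed to rule out, can be demonstrated with a small simulation: instructors selected for extreme mid-term self-ratings drift back toward the average at end of term even when nothing real has changed. All values below are simulated, not study data.

```python
import random
import statistics

random.seed(1)

# Simulate instructors whose "true" self-perception is fixed at 4.0 but
# whose observed self-ratings carry random measurement noise. Purely by
# chance, those observed highest at mid-term tend to score lower at
# end of term -- regression to the mean.
true_scores = [4.0] * 200
mid = [t + random.gauss(0, 0.3) for t in true_scores]
end = [t + random.gauss(0, 0.3) for t in true_scores]

# Select the apparent "over-raters" on the basis of the mid-term scores
top = [i for i, m in enumerate(mid) if m > 4.2]
mid_top = statistics.mean(mid[i] for i in top)
end_top = statistics.mean(end[i] for i in top)
print(f"mid-term mean of selected over-raters: {mid_top:.2f}")
print(f"end-of-term mean of the same group:    {end_top:.2f}")
```

Because the over- and under-rating groups in the study moved by different amounts than pure measurement noise would predict, the additional t-test could conclude that the observed self-rating changes were not merely this artifact.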
Hypothesis 5. Part-time faculty teaching long-term courses who receive mid-term student ratings feedback will receive higher end-of-term student ratings than part-time faculty teaching short-term courses who receive mid-term student ratings feedback.
Results of the ANOVA performed on the 10 learning objective item means and the overall evaluation T-score indicated no significant differences in the direction predicted by the hypothesis at the .05 level between the instructors of long-term courses and instructors of short-term courses in the feedback condition. Inspection of the end-of-term (posttest) student ratings means, however, indicated that 9 of the 11 item means were higher for part-time instructors teaching short-term, noncredit courses than for part-time instructors teaching long-term, credit courses. Results seemed to indicate that mid-term student ratings feedback tended to have had a more positive effect on the end-of-term student ratings of short-term course instructors; these results were the reverse of the prediction. Therefore, based on the statistical analyses, Hypothesis 5 was rejected.
Hypothesis 6. There is no significant difference between the opinions of part-time faculty teaching long-term courses and those teaching short-term courses towards student ratings of instruction.
Hypothesis 6 tested for differences in instructor opinions towards student ratings of instruction utilizing a t-statistic analysis on the five items of the Instructor Questionnaire. No significant differences were found at the .05 level between instructors on the first two items of the Instructor Questionnaire. These items, however, were more informational in nature and measured how much importance instructors felt their respective institutions placed, and should place, on student opinions of instruction. The results of these two items, although not significant at the .05 level, indicated higher means for instructors of short-term courses. The other three items measured instructor opinions towards student ratings of instruction. The t-test results indicated that the differences in two of these three items were significant at the .05 level, with instructors of short-term courses placing higher value on student ratings of instruction than the instructors teaching long-term courses. The other item, although found nonsignificant at the .05 level, also indicated a higher mean for instructors of short-term courses. The findings seemed to suggest that part-time faculty teaching short-term, noncredit courses valued student ratings of instruction more than instructors teaching long-term, credit courses.
Additional analyses were made on three instructor demographic variables and on class motivation levels. The instructor demographic data of sex, educational degree obtained, and teaching experience were tested with an ANOVA on the 10 learning objective items and the overall evaluation T-score. Results showed only teaching experience to be a critical variable in relation to subsequent student ratings of instruction. The instructors with the least teaching experience (0-3 years) were rated significantly lower, at the .05 level, on 6 of the 10 learning objective item means and the overall evaluation T-score.
Student motivation levels were analyzed with a one-way ANOVA on the mid-term and end-of-term ratings of the class motivation level score reported on the IDEA standard form printed summary report. Results indicated clear significant differences at the .05 level between students enrolled in long-term courses and students enrolled in short-term courses on both the mid-term and end-of-term ratings. Students enrolled in short-term courses were more highly motivated than students enrolled in long-term courses.
When considering the results of the study, certain limitations in regard to their generalizability should be kept in mind. One limitation was the issue of instructor volunteers. Even though only two instructors of long-term courses and one instructor of short-term courses chose not to participate in the study, there is the possibility that these nonparticipating instructors differed from the ones who participated. In addition, the quasi-experimental design of the study did not allow random assignment of students to treatment groups. This lack of random assignment could have resulted in very different or biased student groups for each instructor.
Another limitation was that the study sample was drawn from the part-time instructor population of one state university's noncredit course program and one private college's nontraditional weekend college credit course program. Since the study involved only two institutions, generalizations to other institutions would be restricted. A final limitation was the fact that the researcher and the course coordinator were the same person, functioning as the instructors' source of feedback information. This may have resulted in experimenter bias.
Conclusions
The results of the analyses of variance for Hypothesis 1, regarding the first dependent variable of student ratings of instruction, indicated that, although the overall effect was nonsignificant, the instructors assigned to the two student ratings feedback conditions appeared to have improved on subsequent student ratings. Closer analysis of differences between the two feedback conditions showed that the similarities were greater than the differences between the two experimental groups, suggesting that the mid-term printed feedback may have been at least as effective as mid-term printed feedback with oral consultation. However, it is important to note that although the full feedback group (printed feedback with oral consultation) was not statistically different from the printed feedback only group, results suggest that mid-term printed feedback with oral consultation of student ratings did minimally effect some changes in subsequent student ratings of part-time faculty instructional behavior. Findings also suggest that the full feedback condition (printed feedback with oral consultation) produced improved end-of-term student ratings for the instructors who received that type of feedback. In other words, instructors who received printed summary feedback with oral consultation did receive better end-of-term student ratings on certain item variables.
Results of the analysis of the effect of feedback between instructor levels (instructors of short-term courses versus instructors of long-term courses) led the researcher to conclude that student ratings feedback appears to have had a more positive effect on changes in instructional behavior of instructors of short-term, noncredit courses as measured by subsequent student ratings. This conclusion is evidenced by a statistical analysis result in the reverse of the direction predicted by Hypothesis 5 and by the rejection of Hypothesis 6. Although instructors teaching long-term courses had a longer amount of time in which to change instructional behavior, instructors teaching short-term courses seemed to have valued student ratings of instruction more. This could possibly explain the result that part-time instructors teaching short-term courses received higher end-of-term student ratings means on most item variables than did part-time instructors teaching long-term courses.
Hypotheses 2, 3, and 4 investigated the second dependent variable of instructor self-ratings. From the linear regression and t-statistic analyses of Hypothesis 2, it generally would appear that, in the feedback condition, instructor self-overrating was not a valid predictor of change in end-of-term student ratings for part-time instructors. Other results of instructor self-ratings testing Hypotheses 3 and 4 indicated that instructors who self-overrated (rated themselves higher than their students rated them) at mid-term tended to rate themselves lower at the end of term, and instructors who self-underrated (rated themselves lower than their students rated them at mid-term) tended to rate themselves higher at the end of term. Instructors in the two mid-term feedback conditions received a written comparison summary of student ratings with self-ratings, while the instructors in the one no-feedback group who self-rated at mid-term did not. This result seems to suggest that the practice of instructor self-rating may have had a significant effect on increasing the instructor's awareness of his/her own teaching practices.
Hypothesis 6 investigated differences between the opinions that instructors of long-term courses and instructors of short-term courses held towards student ratings. The analysis of the t-statistic data seems to suggest that instructors of short-term, noncredit courses tended to value student ratings data more highly than instructors of long-term, credit courses, resulting in a rejection of Hypothesis 6.
Of the three instructor demographic characteristics tested in this study, results indicated that instructor teaching experience is a critical variable in relation to the student ratings of part-time faculty. The statistical analyses clearly showed that the less experience the part-time instructor reported, the lower his/her student ratings of instruction were likely to be. Instructor sex and educational degree obtained did not appear to be variables related to student ratings.
There also appeared to be a large significant difference between the motivation levels of students enrolled in short-term (noncredit) courses and students enrolled in long-term (credit) courses. Students who enrolled in the short-term, noncredit courses tended to be more highly motivated.
Implications
The primary purpose of the study was to determine the effect different forms of student ratings feedback had on subsequent part-time faculty student ratings and instructor self-ratings. Part-time faculty opinions towards student ratings of instruction also were explored.
Although found nonsignificant at the .05 level, possibly the most important implication of this study was the apparently modest improvement that student ratings feedback appeared to have had on the subsequent student ratings of part-time faculty. The results of this study were consistent with earlier studies by Aleamoni (1978), Centra (1973a, 1977), McKeachie et al. (1971), and Sullivan and Skanes (1974), with one exception. Augmented mid-term student ratings feedback (printed feedback with oral consultation) appeared not to have been significantly more effective in changing certain instructional behavior of part-time faculty than was printed mid-term ratings feedback. These findings did not confirm an earlier study by Aleamoni (1978) which showed a positive effect of augmented student ratings feedback over printed ratings feedback on instructional changes. One major difference is that Aleamoni's study consisted of full-time faculty teaching credit courses.
Student evaluation of instruction is a widely accepted and utilized evaluation instrument for credit courses. Although most research studies of the validity and reliability of student ratings have focused on their use with full-time faculty teaching credit courses, student ratings have been widely adopted as an instructional evaluation tool with part-time faculty teaching both credit and noncredit courses. This evaluation instrument, adopted from traditional educational settings and implemented to evaluate the instruction of an emerging new type of instructor in American higher education (part-time faculty) teaching in a new learning structure (noncredit courses), deserves further research investigation.
Further implications from this study suggest not only that student ratings feedback helps improve subsequent part-time faculty student ratings, but that it appears also to have had a more positive effect on instructors of short-term, noncredit courses. This finding was in the reverse of the predicted direction. The hypothesis prediction was based on a study by Centra (1973b) in which he indicated that student ratings feedback did effect some changes in student ratings over time. Therefore, the investigator predicted that student ratings feedback would have a more positive effect on the student ratings of instructors of long-term courses than on those of instructors of short-term courses. The basis for this reasoning was that behavioral change requires time, and a longer time lapse existed in this study between the student ratings feedback point and the end-of-term evaluation point for instructors of long-term courses (six weeks) than for instructors of short-term courses (two to three weeks).
Results from this present study indicated the reverse situation, however. This finding may be the result of a "halo" effect, since the instructors of short-term courses in the study had never before received student ratings feedback from the employing institution and were aware of the research nature of the study. Perhaps the sheer presence of a mid-term evaluation procedure in a noncredit course was sufficient to indirectly suggest to the class and the instructor the value the educational institution placed on instructional quality. Another possible explanation of the results is that continued employment for part-time faculty teaching short-term courses is more tenuous in nature and offers less job security than for faculty teaching long-term courses. This is because students typically enroll in noncredit courses on an elective basis, whereas students in long-term courses often enroll in those courses as requirements in their curriculum of study. The noncredit, short-term courses in the study were therefore heavily dependent upon students who were satisfied with the instruction received. Consequently, part-time instructors who wish to hold on to their jobs on a continuous basis must maintain enrollment in their noncredit courses. Course enrollment levels are sustained, in part, by part-time faculty who are aware of and concerned about the degree of student satisfaction with instruction. Therefore, part-time faculty teaching short-term, noncredit courses possibly pay closer attention to student satisfaction levels than do the part-time faculty of longer-term courses. This also may explain the findings which seemed to indicate that instructors of short-term, noncredit courses valued student opinion of instruction more than did instructors of long-term, credit courses.
Another possible explanation for the greater effect of student ratings on short-term course instructors may lie in the motivation levels of the students. The results of the post hoc analyses of student motivation levels confirmed that students in short-term, noncredit courses were more highly motivated to enroll in a particular course than students of long-term courses. This result was also consistent with some informal observations of the researcher, who noted the more intense questioning by the students enrolled in short-term courses about the reason for and appropriateness of the evaluation procedures. It seemed that the long-term, credit course students were more accustomed to student evaluation of instruction procedures than were the students of the short-term courses. The long-term course instructors also were accustomed to student opinion ratings, since this is a dominant and well-accepted evaluation tool for them. Although both short-term and long-term course students were encouraged by the investigator to answer all items on the evaluation form to the best of their ability, anecdotal remarks by some short-term course students indicated more difficulty in answering some items that "just didn't fit the course." Consequently, although the evaluation instrument had a forced-choice format, there may have been more response item omissions from the short-term course students, and this, in turn, may have affected the ratings data of the short-term courses. Another possible explanation of the apparently stronger effect of mid-term student ratings feedback on end-of-term ratings of part-time instructors of short-term courses is that the instructors of short-term courses teach a more highly motivated audience who question and challenge their instruction more than students enrolled in long-term, credit courses. This, in turn, would tend to give the instructor more specific informal student feedback on his/her teaching behavior on a continuous basis, thus facilitating and expediting instructional change.
Results of the linear regression and t-statistic analysis suggest that instructor self-overrating is not a valid predictor of end-of-term student ratings. This result was not consistent with Centra's (1973b) study, in which he found that, consistent with equilibrium theory, student ratings feedback produced changes in instructors who rated themselves more favorably than their students rated them. Perhaps in this study the difference between instructor self-ratings and student ratings in the feedback condition was not large enough to produce a discrepancy of the magnitude necessary to motivate instructors to change their teaching practices.
Further results of the analyses of the dependent variable, instructor self-ratings, do imply a confirmation of the equilibrium theory, however. The result of the change in instructor end-of-term self-rating
