Evaluation in an undergraduate special education course of a pragmatic alternative to Keller's proctoring system


Material Information

Title:
Evaluation in an undergraduate special education course of a pragmatic alternative to Keller's proctoring system
Physical Description:
ix, 129 leaves. : illus. ; 28 cm.
Language:
English
Creator:
Gaynor, John Francis, 1932-
Publication Date:

Subjects

Subjects / Keywords:
College teaching   ( lcsh )
Special Education thesis Ph. D
Dissertations, Academic -- Special Education -- UF
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis--University of Florida, 1970.
Bibliography:
Bibliography: leaves 127-129.
General Note:
Manuscript copy.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 029483158
oclc - 14360971
System ID:
AA00025808:00001

Full Text








EVALUATION IN AN UNDERGRADUATE SPECIAL

EDUCATION COURSE OF A PRAGMATIC ALTERNATIVE

TO KELLER'S PROCTORING SYSTEM














By
JOHN F. GAYNOR














A DISSERTATION PRESENTED TO THE GRADUATE COUNCIL OF
THE UNIVERSITY OF FLORIDA
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY










UNIVERSITY OF FLORIDA


1970










ACKNOWLEDGMENTS


It would be difficult to acknowledge all the assistance received during the year this study was in progress. The following acknowledgements are intended to express special appreciation to those who were most directly involved. It is not intended that these individuals should share the responsibility for any of the study's shortcomings.

First, to Professor William D. Wolking, my Committee Chairman, my profound appreciation for the care, patience and friendship he has shown me throughout our association. He has provided a model of intellectual honesty that I will find worthy of emulation for years to come.

Professor Cunningham told me three years ago that the most important thing I had to do in graduate school was maintain my sanity. He has given me the kind of support and assistance that has made this possible.

The debt to Professor H. S. Pennypacker, the minor member of my supervisory committee, is best expressed in the body of this report. It is his behavioral approach to college instruction that provides the point of departure for the present study.

My good friend John H. Neel took time from his own important research to help me with parts of the statistical analysis.

The students in the experiment deserve special










thanks. Their cooperation and enthusiasm for the project

made a pleasure of what otherwise might have been drudgery.

Miss Gaye Holloway, Miss Hilary Parjrnet, Miss Pam Swanson, Miss Lynn Sturgeon, and Miss Linda Greathouse (now Mrs.
Bruce Larson) stayed with the project from beginning to end.

Their daily labors were only a small part of their contri-

bution. They brought youth, beauty, wit and charm to the

project.

Finally, to my lovely family, my deepest thanks for the innumerable ways they have helped--all of them for giving me a wide berth when I needed it, but particularly my wife, Like, for relieving me of all other worries, Karen and Anne for helping me with some of the early data transcriptions, Claire for keeping me well supplied with erasers, Seth for emptying my waste paper basket two or three times a day, Justin for keeping me posted on the happenings on Sesame Street, and Morgan, age four, for keeping my typewriter in good working order.










TABLE OF CONTENTS



Page

ACKNOWLEDGMENTS ....................................
LIST OF TABLES ..................................... v
LIST OF FIGURES .................................... vii
ABSTRACT ........................................... viii

CHAPTER

I.   INTRODUCTION .................................. 1
II.  METHOD ........................................ 20
III. RESULTS ....................................... 45
IV.  DISCUSSION .................................... 69
V.   SUMMARY AND CONCLUSIONS ....................... 83

APPENDICES

APPENDIX A ......................................... 87
APPENDIX B ......................................... 96
APPENDIX C ......................................... 105
APPENDIX D ......................................... 109

BIBLIOGRAPHY ....................................... 127











LIST OF TABLES



Table                                                 Page

I    A comparison of conditions under which per-
     formance samples were taken, Control group
     versus Classroom group ........................ 25

II   Derivation of adjustment factors and adjust-
     ment of mean correct rates in 17 pilot per-
     formance sessions ............................. 28

III  Mean lines per blank and adjustment factors
     for adjunct auto-instructional materials
     used in this study ............................ 32

IV   Comparison of mean performance rates of
     Classroom and Control groups on two repeat
     tests in oral response mode ................... 49

V    Comparison of mean performance rates of
     Classroom and Control groups on two review
     tests in written response mode ................ 50

VI   Comparison of total class mean performance
     rates on two sets of review tests utilizing
     different response modes, oral versus writ-
     ten responding ................................ 53

VII  Comparison of group mean percentages of cor-
     rect responses in written versus oral modes,
     on test sets taken in first and second
     halves of course .............................. 54

VIII Results of 16 individual t tests for relia-
     bility of differences between mean perform-
     ance rates, No Practice group versus Prac-
     tice group .................................... 58

IX   Comparison of the percent correct on eight
     oral response samples with percent correct
     on the midterm written review exam, by Class-
     room and Control groups ....................... 63

X    Comparison of the percent correct on eight
     oral response samples with percent correct
     on the final written review exam, by Class-
     room and Control groups ....................... 64










LIST OF TABLES (continued)


Table                                                 Page

XI   Chi square contingency tables for compar-
     isons of proportions of lecture items in-
     correctly answered on written review exams,
     Classroom versus Control groups ............... 67










LIST OF FIGURES


Figure                                                Page

1    Schematic illustration of between groups
     design used in this study, with units by
     order of performance .......................... 22

2    Relationship of response rates to length of
     items over 17 pilot performance samples, ex-
     pressed in mean rate correct (n = 6) versus
     mean lines per blank .......................... 30

3    Comparison of rate correct functions de-
     scribed by adjusted versus unadjusted scores
     in 17 pilot performance sessions .............. 31

4    Illustrative example of performance graph-
     ing procedure used by Johnston and Penny-
     packer ........................................ 34

5    An individual performance graph used in this
     study (Student #24), reflecting individual
     unit and cumulative performance in adjusted
     difference rates .............................. 36

6    Three step computation of adjusted differ-
     ence rates .................................... 40

7    Mean performance rate vectors of the Class-
     room and Control groups on 16 first-time
     oral response performance sessions ............ 47

8    Illustration of performance rate differences
     by (a) written versus oral response modes,
     and (b) Classroom versus Control groups ....... 52

9    Mean performance rate vectors of the Prac-
     tice and No Practice groups on 16 first-
     time oral response performances ............... 56

10   Mean performance rate vectors of the No
     Practice and Control groups on 16 first-
     time oral response performances ............... 59











Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



EVALUATION IN AN UNDERGRADUATE SPECIAL EDUCATION
COURSE OF A PRAGMATIC ALTERNATIVE TO
KELLER'S PROCTORING SYSTEM



by

John F. Gaynor


August, 1970

Chairman: William D. Wolking, Ph.D.
Major Department: Special Education

Applications of educational technology thus far have not successfully harnessed the power of our knowledge of behavior. Attempts to individualize instruction through teaching machines, computer-assisted instruction, and programed learning have produced equivocal results at prohibitive expense. Keller's introduction of a personalized approach to college undergraduate instruction opened new possibilities for practical application of educational technology. In his model, the extensive monitoring and feedback functions of individualized instruction are carried out by student proctors who have demonstrated mastery of the subject matter in a previous course. The present study evaluates a modification of Keller's proctoring system, with a view to providing an alternative model that should be, for some users, more feasible to administer.











Method
The performance of two groups is compared on 20 performance samples of the kind developed by Johnston and Pennypacker (direct continuous recording of oral responses to stimulus items, taken as rates of correct and incorrect responding). The Control group uses previously trained proctors of the type employed by Keller. The Classroom group uses currently enrolled students; i.e., the students enrolled in the course proctor each other. To control for the benefits of practice that one student might derive from auditing the performance of another, alternate test forms are used. Additionally, a performance validation procedure is designed into the experiment to control for possible collaboration between students.

Results

1. The Classroom group achieved reliably superior performance rates.

2. The effects of practice on specific stimulus items and the effects of collaboration were ruled out as possible sources of the difference between groups.

Conclusions

1. The self-contained classroom model is an acceptable pragmatic alternative to Keller's model.

2. Acting as a peer proctor appears to confer no reliable performance advantage over students who do not act as proctors.















CHAPTER I


INTRODUCTION


The need for improved instructional technology appears

to be continuous, renewable from one generation to the next

as a function of the increasing complexity of society.

While American education has been generally responsive to

the technological revolution of the past two decades--for

which Skinner's well known contribution (1954) has provided

the main impetus--early prophecies of sweeping changes in

classroom technique have not been fulfilled; the task of

translating theory into practice has turned out to be more

difficult than expected. Eckert and Neale's review of the

literature (1965) has found the contribution of the new

technology to be "quite modest" in various applied situa-

tions; Pressey (1964) has referred to some programs as edu-

cational monstrosities; Oettinger and Marks (1968) have ex-

pressed doubt that educational technology will become estab-

lished in the schools during the twentieth century.

The need for an effective technology nonetheless re-

mains. Komoski (1969) has summarized the present state of

affairs by urging that three admissions be made: First,










that traditional methods are inadequate to handle the individual needs of today's students; second, that the new technology is inadequate also but holds better promise for the future; and third, that the best way to bring that promise to fruition is to use the new technology as broadly as possible. In the present study the new technology is used in a college classroom setting. Well established principles from educational technology are combined with measurement procedures not heretofore commonly used in college classrooms: direct continuous recording of the rates of the learner's responding, in the manner of the experimental analysis of behavior.

Much of the new technology came about because of the willingness of a number of experimental psychologists to extrapolate from research with lower organisms to human learning situations (p. 64). Skinner is credited with being the first to employ programed learning on a broad basis (at Harvard, in 1957), and his introduction of the teaching machine is generally held to be an event of major significance. That significance, however, is attributable not to the










intrinsic merit of the device but to the technology which

supports it. Skinner himself (1965) makes this point in

stating that "teaching machines are simply devices which

make it possible to apply technical knowledge of human be-

havior to the practical field of education" (p. 168).

Hence, it is the knowledge of human behavior that is criti-

cal to a sound educational technology, not the particular

form of its application. What is needed now is an applica-

tion that meets the test of practicality, for it is just

here that programed learning, teaching machines and computer

assisted instruction have failed. It is in the direction

of practicality that the present study makes its main

thrust.


Statement of the Problem

The foregoing paragraphs may be summarized and rephrased

to form a preliminary statement of the problem and purpose

of this study: The problem is that applications of educa-

tional technology thus far have not successfully harnessed

the power of our knowledge of behavior; the purpose of the

study is to investigate an application of the procedures of

behavior analysis in undergraduate university instruction.



Review of Relevant Research

The reference study for this research is an article by Keller (1968a) in which the groundwork is laid for a new










college teaching method (Keller, 1968b, p. 1). It is convenient to break the review down into subtopics. In the first section, the relationship between Keller's method and prior educational technology is established through discussion of important similarities and differences. In the second section, other studies dealing with contingency management in the college classroom are reviewed. In the third, a major innovation to the basic system is discussed. From this, the problem of the study is reformulated in more concrete terms.

Relationship to Prior Educational Technology


In early articles, Keller (1968a; 1968b) acknowledged the debt of his method to programed instruction. He refers to "the same stress upon an analysis of the task . . . the same opportunity for individual progression" (1968a). Many of the features of his system are nearly identical to those listed in various statements of the guidelines of programed learning (cf. Cook and Mechner, 1962). The active responding required of the student, immediate feedback, and mastery of relatively small units are all retained; the units, however, take the form of










conventional homework assignments or laboratory exercises.

The "response" is not simply the completion of
a prepared statement through the insertion of a
word or phrase. Rather, it may be thought of as
the resultant of many such responses, better de-
scribed as the understanding of a principle, a
formula, or a concept, or the ability to use an
experimental technique. (Keller, 1968a, p. 84)
The position revealed here follows the spirit of Pressey's

arguments (1963; 1964a; 1964b; 1967) for variable frame

length--"from a single statement to an entire chapter" (1967,

p. 239)--and for the preservation rather than replacement of

established materials of instruction. Pressey sees "adjunct

auto-instructional devices" as eminently more practical than

programing of the type exemplified by Holland and Skinner

(1958), and usually more effective. The question of rela-

tive effectiveness has not been satisfactorily resolved in
the research literature (Holland, 1965; Silberman, 1962),

but there can be little doubt that adjunct auto-instruction

is more practical. Basically it is a testing procedure. It

follows the presentation of the material to be learned (lec-

ture or reading assignments); in programed instruction, the

program is the material to be learned. The purpose of the

adjunct method is to "clarify and extend the meaningfulness"

of the basic material through emphasis of key points and re-

structuring of ideas (Pressey, 1963, p. 5). As a practical

matter, the importance of adjunct auto-instruction is that

it places no constraint upon its user to solve the mysteries

of program structure, sequencing or hierarchy of learning











tasks. "Such auto-elucidation," Pressey says, "will not cov-

er everything, may jump from one point to another or even

back and forth . ." (1963, p. 3).

One of the unsolved problems of programed instruction

has been the cost in time and money of developing programs.

It is especially acute at the college level because of the

complexity of the subject matter and its tendency toward

early obsolescence (Green, 1967). Pressey's adjunct auto-

instruction is one way out of the dilemma, even though it

does violence to Skinner's concept of programed instruction.

In Keller's use of adjunct materials, the break from earl-

ier theory is less painful. The auto-instructional materi-

als are not programed in the Skinnerian sense, but the over-

all system is based predominantly on the principles of be-

havior analysis that apply to programed learning. Indeed,

Keller refers to it as a "programed system."

Another departure that Keller makes from earlier models

is the use of students in proctoring roles. The manpower

requirement is enormous in a system that breaks the course

material into many small units and requires every student

to demonstrate mastery of each unit in turn. Alternate

forms of the performance test are taken as many times as

necessary until criterion (mastery) is reached. The price

of individualized instruction is high, whether it be carried

out through low student-teacher ratios, programed instruction, or

the computer. Keller's master stroke was to effectively










utilize a resource that was available at no cost. In so

doing, he not only individualized instruction, but personal-

ized it as well.


Related College Classroom Experiments

Since the inception of Keller's work in 1962, several

reports of similar projects have appeared. Unfortunately,

most of these have been demonstration rather than research

projects. The main focus in the following review is on the

various adjunct auto-instructional systems used.

Ferster (1968) developed an interview technique in

which students responded orally to study questions on the

textbook, Behavior Principles (Ferster and Perrott, 1968).

Each student had the responsibility of acting as a listener

for another. Satisfaction with the interview was decided

jointly by both the speaker and the listener, with unsatis-

factory performances being repeated. Several short written

quizzes and a two hour final examination were administered

for purposes of certifying the validity of interviews. The

grade, however, was determined by the number of interviews

successfully completed. Fifty-nine interviews earned the

grade of "A" and percentages of that number earned "B" and

"C". In the class reported, 90 percent of the students re-

ceived the grade of "A".

A group interview technique has been used by Postle-

thwait and Novak (1967) in a freshman botany course. A doz-











en or so students met regularly with the instructor and re-

sponded orally to items from previous study assignments.

Points for performance were awarded by the instructor on a

scale from zero to ten.

Malott and Svinicki (1969) have organized an introduc-

tory psychology course along contingency management lines.

Quizzes are given daily to cover assigned reading. Each

quiz requires written answers to two questions. If both are

not answered correctly, the student attends a review session

and takes one or more additional quizzes to achieve 100 per-

cent mastery. Another adjunctive device is the four-man

discussion group. As in Ferster's system, the purpose is to

help students develop oral fluency in discussing the subject

matter. Peers rate each other's performances in the discus-

sion group, and instructional personnel monitor interviews

on a sampling basis only. Teaching apprentices, equivalent

to Keller's proctors, are drawn from previous classes and re-

ceive academic credit for their services. Fifty-two of them

serve the typical enrollment of 1,000 students. Addition-

ally, 13 advanced teacher apprentices, four paid assistants,

and a full time secretary assist the three faculty members

who conduct the course.

Lloyd and Knutzen's (1969) course in the experimental

analysis of behavior features a point system, or token econ-

omy approach to contingency management. Thirty-five students

were given a list of course activities at the beginning of










the term, with point values specified for each activity. Grades were made contingent upon the activities undertaken and the number of points earned. Several activities requiring oral responses were given numeric ratings in the manner used by Postlethwait and Novak (1967).

One of the few studies that makes an experimental comparison between a contingency-managed classroom and a class taught by conventional methods has been reported by McMichael and Corey (1969). Mastery of course content, as reflected in final examination scores, was significantly better in a class using Keller's methods than in conventional lecture classes. These findings corroborate results which are mentioned by Keller (1968a) but which are not published. In the experimental section of McMichael and Corey's class, there were 221 students. Two graduate students, assisted by 19 undergraduate proctors, administered the experimental program.


Use of the Analytic Method of Behavior Analysis

The use of rate as a basic datum in the experimental analysis of behavior is not new. Its use in describing behavior of interest to education has also been established (Lindsley, 1964), and the utility of using free operant techniques in the measurement and analysis of the behavior of elementary school children has recently been demonstrated (1969). Nonetheless, the method was conspicuously absent from contingency managed college classrooms until Pennypacker and his associates (1969) developed a precision taught course in experimental behavior analysis. Regarding the method of measurement, Pennypacker says:

The teaching process has as its stated objective
the production of behavioral change. It is es-
sential that these changes in behavior be mea-
sured as directly and continuously as possible.
Direct measurement avoids the numerous hazards
of psychometric inference while continuous meas-
urement permits constant adjustment of curricu-
lum as necessary to obtain the stated object-
ives. (Pennypacker et al., 1969, p. 2)

Pennypacker restructures the proctorial system to meet the manpower requirements of direct continuous recording. The adjunct materials are constructed response items similar to the frames of programed instruction. They are typed on 3" X 5" flip cards. The student, in the presence of his manager (proctor), reads each item aloud and supplies the missing element, then flips the card for immediate confirmation or correction. The performance is timed. Rates correct and incorrect are computed by dividing the numbers of responses (correct and incorrect, respectively) by the number of minutes elapsed (two to five). As in other contingency managed courses, there is a criterion for mastery, and the percentage of students achieving the grade of "A" is high, about 90 percent, as in the courses of Keller (1968a), Ferster (1968), and Malott and Svinicki (1969).
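The rate computation just described is simple arithmetic. As a minimal sketch, assuming a session timed in minutes (the function name and the numbers are illustrative only, not data or code from the study):

```python
# Sketch of the rate computation described above: correct and
# incorrect response counts are each divided by the minutes
# elapsed in the timed performance session.

def response_rates(n_correct, n_incorrect, minutes_elapsed):
    """Return (rate correct, rate incorrect) in responses per minute."""
    if minutes_elapsed <= 0:
        raise ValueError("session length must be positive")
    return n_correct / minutes_elapsed, n_incorrect / minutes_elapsed

# Example: 24 correct and 6 incorrect responses in a 3-minute session
# give 8.0 correct and 2.0 incorrect responses per minute.
print(response_rates(24, 6, 3.0))
```

Expressing both counts over the same elapsed time is what lets the two rates be compared directly, session to session, regardless of how long a given performance ran.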











In a more recent manuscript, Johnston and Pennypacker

(1970) report the early results of a long term research pro-

gram which uses the method referred to above. Preliminary

investigation indicates that the system is superior to con-

ventional teaching methods in the quality and quantity of

learning it produces; that it can be applied to subject mat-

ter outside the field of behavior analysis; that it works as

well with written response modes as with oral; and that the

recording and display of performance measurement may be

based on either rate or percent, depending on the instruct-

or's preference, without detriment to the results. Of in-

terest is the consistently high rating the course is given

in statements of student preference. However, the primary

relevance of this report to the present study is the compre-

hensive exposition of philosophy and method that it contains.

It is both the parent system and point of departure for the

present study. The present research is best understood in

the light of that fact.


Summary of the Review to This Point

Several statements can be made in summarizing the fore-

going:

1. The operational objective of educational technology is

to individualize instruction. Individualization implies con-

tinuous monitoring--a feedback system that informs the in-

structor, as well as the learner, of the exact point in the










curriculum at which learning (performance) has failed.

2. The development of such systems has been slow, partly because of the equivocal research findings on variables in human learning in field situations, and partly because of the high cost of application (hardware and software).

3. Keller and his followers have demonstrated methods of providing a continuous feedback system at minimal cost, through utilization of students in proctorial roles.

4. Concurrent advances in experimental behavior analysis have brought a powerful research tool to bear on problems of instruction.

5. In Pennypacker's system, the two mainstreams--Keller's method of personalized instruction with student proctors, and the direct continuous recording procedures of behavior analysis--combine to produce a unique behavioral approach to college teaching.



Restatement of the Problem

The most salient characteristic of the studies on contingency managed college classes is the paucity of systematic research. Only one of the published studies evaluates the methods experimentally, and this with a gross comparison of Keller's method versus conventional methods. The effects of individual components of the system have not been adequately investigated. In the present study, an examination is made of some of the major components of the proctoring method.









Some of the assumptions underlying the use of student proc-

tors are tested. To isolate the experimental variable, it

is necessary also to drop other components from the usual

format. Foremost among these is the unit mastery provision.

Another is the self-pacing feature. These components are

discussed in more detail in the following pages, after which

the research problem is recast in a series of experimental

questions.


The Use of Previously Trained Proctors

Student proctors are referred to as "knowledgeable

peers" in the Johnston and Pennypacker (1970) article. In

proctor selection, it is the practice to accept only those

who have demonstrated their expertise with the subject mat-

ter by earning the grade of "A". Implicit in this is the

assumption that the currently enrolled student gains a salu-

tary benefit from the experienced proctor's tutorial skills.

Keller (1968a) states as much, and others appear to have ac-

cepted his reasoning. No one has tested the assumption em-

pirically, however, or evaluated alternatives to the present

proctoring system. There are several reasons for doing so:

1. The system wants simplification if it is to be

widely used. Coordinating the acquisition and assignment of

proctors is the kind of administrative task that would inhib-

it some potential users.
2. There is some question whether this kind of activity










makes the best use of the student proctor's time. It is

probably true, as Keller (1968a) asserts, that a proctoring

assignment cements the student's command of the material.

This may be desirable in a course on the principles of behav-

ior analysis. These are difficult principles to grasp, and

overlearning may be the best way to master them. But this

would not necessarily hold for other curricula. The curri-

culum used in the present study is a case in point. In the

introductory course for Teaching Exceptional Children, the

units are more discrete than sequential. Mastery of the

unit on the blind is not a prerequisite to success on units

for the deaf or mentally retarded. The presentation is more

horizontal than vertical, and there is little reason to be-

lieve that the proctor's understanding of it would improve

with repetition. He might be better served by pursuing an

interest in one of the exceptionalities in depth.

3. It is difficult to achieve uniformity of treatment

from one proctor to the next. Many test items, including

some of the most carefully worded, will evoke responses that

are neither clearly correct nor clearly incorrect. A re-

sponse judged incorrect by one proctor may be accepted by

another. In a system that has the student reading some 450

to 500 items per quarter, the disposition of the proctor can
be critically important. The problem can be circumvented to

a degree by conducting the performance sessions en masse, in

the manner Ferster (1968) used, with the instructor present










to arbitrate questionable items. With one person making the

decisions instead of five or six, the chances for uniformity

improve.



Restrictions on Unit Mastery and Self-Pacing

The basic distinctions between traditional evaluation

systems and those employed in the new educational technology

are well known. One seeks to reflect individual differences

in ability by producing a distribution of scores, the other

seeks to obviate those differences by individualizing the

curriculum. One makes judgments about the quality of the

student, the other makes judgments about the quality of the

instructional program.

Contingency managed college classrooms typically achieve

individualization through utilizing movement-based perform-
ance modes. In the normal classroom the performance mode is

essentially time-based. Students take lessons, quizzes,

exams as a group. The time for doing so is fixed by the in-

structor, and the measurement of interest is the quality of

the movement within the set time frame. Usually this is ex-

pressed as number or percentage correct. Movement-based sys-
tems, on the other hand, make performance (unit mastery) the

inflexible element. The time it takes to achieve mastery is

the dimension that instead absorbs the effect of individual

differences in ability level. Slower students simply keep










working on a unit until they have it mastered. Hence, time-

to-completion becomes distributed while quality of perform-

ance is held constant. The former case is said to be in-

structor-paced, the latter student- or self-paced.

Administratively, the movement-based system presents

some problems. A distribution of times-to-completion im-

plies multiple exit. Some students finish in less than the

usual time, others have to take the grade of "I" (Incomplete)

and finish the work in a succeeding term. Keller (1968a)

mentions a student who took nearly two terms to complete the

work of one and then became a proctor. Malott and Svinicki

(1969), unlike Keller, attempt to control the distribution

of times-to-completion by governing the rate at which quiz-

zes are given on the reading assignments. Performance is

not completely instructor-paced, however:
The rate a student covers a specific assign-
ment is determined by the student. In this sense
it is "student paced." In other words one student
may need to spend only 15 minutes on the assign-
ment whereas other students may need to spend 2
hours; the students may adjust their own daily
work schedules accordingly. In this way, indiv-
idual differences in the rate of mastering the
material may be accommodated within the instructor
paced assignment and quiz system. (Malott and Svin-
icki, 1969, p. 555)

This appears to have all the advantages of Keller's

system plus a reduction in the number of students who receive

the grade of "I". However, the gain may be illusory. Hold-

ing the entry and exit points constant means that the addi-

tional time required by the slow student must be taken from











other concurrent activities; i.e., social and recreational

activities, other courses. While it is true that most stu-

dents work at less than 100 percent of capacity, it is prob-

ably less true for the slow student than for others. The

student who has to work two hours on the 15 minute assign-

ment in Malott and Svinicki's course will probably have to

put extra time into other courses too. Robbing Peter to pay

Paul is a dubious improvement.

Johnston and Pennypacker (1970) use an approach that is

similar in its effects. They permit student-pacing but ar-

range the contingencies in such a way that it is to the stu-

dent's advantage to take at least two performances per week

and to achieve mastery on the first trial for each unit.

The student retains his freedom to respond as the spirit

moves him; however, most students react to the structure of

the environment in the way the instructor intends. Thus, an

element of instructor-pacing is superimposed on the student-

pacing feature.


A Self-Contained Classroom Model

In the present study, still another variation is tried.

Students who are currently enrolled in the course provide

proctoring services for each other. Each student in the ex-

perimental group has a classmate for a proctor and is himself

the proctor for another student. Unit mastery and student-

pacing are curtailed, primarily for purposes of giving the










experimental treatment a chance to take effect, but also be-

cause the omission of these features solves the multiple

exit problem. Actually, this is simply a reversion to the

traditional time-based system in which multiple exit was not

a problem. It ignores the plight of the individual student

who has difficulty with the material; i.e., it trades one

problem for another.

The proposed model also introduces problems of another

kind. Wolking (1969) noticed that students monitoring each

other's performance showed a distinctive practice effect.

Those who acted as proctors prior to performing typically

received higher scores than those who performed first. When

the order of proctoring was reversed, the results were also

usually reversed. A question of importance in the present

study is whether this effect can be satisfactorily controlled.

A second problem peculiar to the self-contained class-

room model is the possibility that students who control each

other's grades will not adhere to academic standards as

strictly as would those who are not subjectively involved.

Collaboration, whether intentional or unconscious, is a fac-

tor that must be taken into account. Again, the question is

whether the effect can be satisfactorily controlled.


Restatement of the Problem

The purpose of the study is to answer the following

que stions:










1. Will students who proctor each other in a self-

contained classroom perform as well as or better than stu-

dents who are proctored by students who mastered the course

material in a previous term?

2. Will it be possible to attain experimental control

of the "practice effect"; i.e., can the course be designed

such that there will be no particular advantage to monitor-

ing the performance of a protege prior to responding on the

performance items oneself? Assuming that there is a resid-

ual practice effect, how will it influence the results ob-

tained for question 1?

3. Will the performance picture of those who proctor

each other be inflated by mutual interest and collaboration;

i.e., will the performance levels attained in verbal response

sessions, which are student proctored, be validated by the

performance levels achieved on conventionally monitored

written exams?
















CHAPTER II


METHOD



Subjects

Thirty undergraduate students at the University of Flo-

rida participated in the experiment. The group was predom-

inantly female (two males) from the Colleges of Education,

Arts & Sciences, and Health Related Professions. The first

thirty registrants on the class roll were selected for the

experimental class. Two of these were unable to participate

because of schedule conflicts. Alternates from an overflow

section were obtained on the first day of class. All mem-

bers of the class were assigned to one of two main groups,

those being proctored by returnees from a previous class and

those proctored by other currently enrolled students. For

consistency of identification, these groups will be referred

to, respectively, as the Control group and the Classroom

group (or short form, Class group). The Classroom group was

further subdivided into two groups of seven for controlling

the order in which students received practice benefit prior

to performing. Reference to the First Perform group identi-

fies the group which, on any given unit, performed on the










adjunct materials prior to auditing the performance of the

other group; the Second Perform group refers to those who

received the benefit of auditing prior to performing. The

issue of practice benefit is covered in more detail in the

section of this chapter devoted to that topic.



Figure 1 presents the between groups design schematic-

ally. All assignments to groups were made in accordance

with the random number procedure specified by Wyatt and

Bridges (1967).



Instructor

Classroom activities were conducted by the principal

investigator, an advanced graduate student who had either

taught or participated in the teaching of the course on

three previous occasions. Classroom activities included the

organizational meeting on the first day of class, eighteen

lectures, and the administration of two written examinations.

Additionally, eighteen verbal performance sessions were timed

and supervised by the instructor for the Classroom group.



Proctors

Five undergraduate students served as proctors for the

Control group. All five had taken a pilot version of the

course in the preceding term and had met criterion for the

grade of A. They had been trained in the proctoring as-

signment by the course instructor and had assisted in devel-

oping the curricular materials taken from the textbook. The

lecture materials, however, were not presented during the

pilot course. Consequently, the proctors managed their stu-

dents under two conditions: (1) with some measure of exper-

tise for the textbook units, and (2) no prior experience for

the lecture units.


FIGURE 1
Between groups design of the study

Proctors received> one hour of academic credit for each

student under their charge. In general, their duties fol-

lowed the full range of activities described in detail by

Johnston and Pennypacker (1970). Supervisory assistance to

the proctors was available from the instructor before and af-

ter the timed performance samples. Grades were based on the

instructor's evaluation of the proctors' performance with

their students, and their contributions during individual

case discussions and at weekly meetings.

The currently enrolled proctors--that is, all 14 mem-

bers of the Classroom group--received very little formal

training in the proctoring assignment. An explanation of the

operation of the system was given on the first day of class,

and each student received a copy of the Student Handbook, in-

cluding the rules governing performance sessions (Rules of

the Game, Appendix A). In contrast to this, the previously

trained students had received fairly extensive training dur-

ing the pilot project. The instructor, while acting as proc-

tor for the five previously trained students, had demonstrat-

ed the use of verbal praise for correct responses, pre-session










review of important points in the study unit, post-session

tutoring, rephrasing of questions for discussion purposes,

etc. In other words, the previously trained proctors had

been trained in the proctoring assignment as well as the

textbook subject matter; for the currently enrolled proctors,

it was not feasible to do this.


Teaching Situation

The course carried four hours of credit and was sched-

uled to meet daily from 9:05 to 9:55 A.M., Monday through

Thursday. All students were invited to attend the sched-

uled class lectures (Appendix A); however, this was not com-

pulsory in view of a university ruling which makes class at-

tendance optional for upper classmen. Two class periods per

week were devoted to lectures. The other two were used for

obtaining samples of student performance on adjunctive auto-

instructional materials. Performance sessions were carried

out in the regular classroom for the Classroom group. The

Control group met individually with their proctors in the

Special Education Learning Center. A comparison of the con-

ditions under which the two groups performed on the adjunct-

ive materials is shown in Table I.


TABLE I
Comparison of the conditions under which the two groups
performed on the adjunctive materials


Curricular materials

Curricular materials were taken from two basic sources:

(1) the text, Introduction to Exceptional Children, by S. A.

Kirk (1962), which was covered in entirety; and (2) a lec-

ture series which emphasized the chronological development

of the main treatment strategies in special education. Book

units were scheduled to conform to the chronology of the lec-

ture series (see schedule, Appendix A). Supplemental mater-

ials of the adjunct auto-instructional kind made up the re-

mainder of the curricular materials.


Adjunct auto-instructional materials

Adjunct materials were developed during a pilot render-

ing of the experimental course in the Fall quarter, 1969.

Six volunteer students were tested twice weekly on items

constructed by the principal investigator. Items were modi-

fied or discarded for reasons of ambiguity or vagueness but

not simply because they were frequently missed. If the item

covered an important point, it was retained. The primary

objective of the adjunct materials was to instruct, not sim-

ply to test.


Minimizing the Practice Effect through Alternate Test Forms

The practice effect that was observed in Wolking's

class has been described above (p. 17). Practice in this

context means auditing another student's performance ses-

sion, then using the same test items on one's own perform-

ance session. To minimize this effect in the present study,

alternate test forms were constructed for each unit. Color











and number codes distinguished the alternate forms; thus, a

student who audited a performance on blue, even-numbered

cards would use the green, odd-numbered cards when his own

turn came to perform. Performance sessions were scheduled

so that any residual practice benefit was distributed evenly

between groups, each group receiving practice (audit first,

perform second) on the same number of book and lecture units

as the other.


Item Length and Adjustment Factors

One advantage of the system used in this study is that

it provides a means of analyzing some of the components of

performance. A student's rate of correct responding may be

influenced by his reading rate, the length of his response

latency, the rate at which he responds incorrectly, or any

of these in combination. These categories are exhaustive,

mutually exclusive, easily observed and measured. As such,

they provide a useful monitor on the system itself as well

as on individual performance.

During the pilot program, correct responding was found

to be closely related to reading rate. This in turn was

negatively correlated with the mean length of items from

unit to unit. It had been assumed that the laws of chance

would provide the pilot students with a task that would be

roughly equal across units, at least insofar as the reading

requirement was concerned. The listing in Table II indicates















TABLE II
Derivation of adjustment factors and adjustment of mean
correct rates in 17 pilot performance sessions

             A              B               C               D
         Mean Lines     Adjustment      Unadjusted     Adjusted Group
          Per Blank       Factor          Group         Performance
Unit*        (x)       (Col A ÷ 2.53)   Performance    (Col B x Col C)

L6          1.52           .60             6.23             3.74
L5          2.14           .85             4.44             3.77
L4          2.17           .86             5.07             4.36
L2          2.17           .86             5.01             4.31
B1          2.32           .92             2.97             2.73
B7          2.46           .97             4.29             4.16
B5          2.47           .98             4.47             4.38
B3          2.48           .98             3.83             3.75
L1          2.61          1.03             3.08             3.17
L3          2.67          1.05             3.75             3.94
B6          2.69          1.06             4.03             4.27
B9          2.69          1.06             3.71             3.93
B8          2.72          1.08             3.54             3.82
B4          2.80          1.11             3.97             4.41
B2          2.94          1.16             3.14             3.64
B10         3.02          1.19             3.80             4.52
L7          3.08          1.22             3.63             4.43

Σx = 42.95
x̄ = 2.53

* 10 Book units, 7 Lecture units, designated "B" and "L",
respectively










that the assumption was not supported by actual measurement

of the test items.

There were two junctures at which the items varied: The

length of the item (number of typewritten lines to read); and

the number of possible points (number of blank spaces per

item). Column A of Table II shows mean lines per blank

(LPB) to be quite variable in length, the longest unit being

roughly twice the length of the shortest. Column C shows

the mean rate correct for the six pilot students. The high-

ly reliable relationship ( r = -.75, df = 14, P < .01) be-

tween rate correct and LPB is illustrated in Figure 2. For

this set of items, approximately half of the variance in rate

correct may be attributed to item length.

To neutralize the effect of variant item length, ad-

justment factors were established by dividing LPB for each

unit by the overall LPB (Table II, Column B). The differ-

ence between the functions described by adjusted and unad-

justed rates is shown in Figure 3. With the pilot materials,

the differences were great enough to raise questions as to

the accuracy of the feedback that the students received from

unit to unit. On this basis, the materials were revised to

reduce item length variance. This was accomplished by using

only one blank to the item and rewriting items that approach-

ed extremes in length. The materials actually used in the

experiment are described in Table III. A methodological

question that the study attempts to answer is whether the















FIGURE 2
Relationship of response rates to length of items over
17 pilot performance samples, expressed in mean rate
correct (n = 6) versus mean lines per blank















FIGURE 3
Comparison of the rate correct functions described
by adjusted versus unadjusted scores in 17 pilot
performance sessions












TABLE III

Mean lines per blank (LPB) and adjustment factors for
adjunct auto-instructional materials used in this study

        Green Form      Adj              Blue Form      Adj
Unit     (odd nr)      Fctrs    Unit     (even nr)     Fctrs

L1         2.32         .84      L1        2.18         .79
L4         2.37         .86      B7        2.24         .81
B7         2.47         .89      B4        2.50         .90
L2         2.50         .90      L4        2.53         .91
L6         2.66         .96      B8        2.63         .95
B8         2.74         .99      B6        2.77        1.00
B3         2.76        1.00      L2        2.77        1.00
B2         2.80        1.01      L3        2.79        1.01
B5         2.81        1.01      B2        2.80        1.01
B4         2.93        1.06      B5        2.81        1.01
B6         2.93        1.06      L6        2.84        1.03
L3         3.02        1.09      B1        2.90        1.05
B10        3.05        1.10      B10       2.90        1.05
L5         3.10        1.12      L5        2.98        1.08
B1         3.13        1.13      B3        3.05        1.10
B9         3.16        1.14      B9        3.16        1.14

B = Book    L = Lecture










adjustments made in the materials weakened the relationship

between item length and performance measures. This is taken
up in Appendix C.
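The adjustment procedure is a simple normalization, and can be sketched in a few lines of Python using the figures quoted in Table II. The function and variable names below are ours, not part of the original procedure; the original computations were done by hand, and because the published factors were rounded to two places, small rounding differences are possible.

```python
# Sketch of the Table II adjustment procedure: each unit's mean
# lines-per-blank (LPB) is divided by the overall mean LPB to give
# an adjustment factor, and adjusted rate = factor x unadjusted rate.

def adjustment_factors(lpb_per_unit):
    """One factor per unit: unit LPB divided by the overall mean LPB."""
    overall = sum(lpb_per_unit) / len(lpb_per_unit)  # 42.95 / 17 = 2.53
    return [round(lpb / overall, 2) for lpb in lpb_per_unit]

# Columns A and C of Table II (17 pilot performance sessions)
lpb = [1.52, 2.14, 2.17, 2.17, 2.32, 2.46, 2.47, 2.48, 2.61,
       2.67, 2.69, 2.69, 2.72, 2.80, 2.94, 3.02, 3.08]
unadjusted = [6.23, 4.44, 5.07, 5.01, 2.97, 4.29, 4.47, 3.83, 3.08,
              3.75, 4.03, 3.71, 3.54, 3.97, 3.14, 3.80, 3.63]

factors = adjustment_factors(lpb)                        # Column B
adjusted = [f * r for f, r in zip(factors, unadjusted)]  # Column D
```

The shortest unit (L6, LPB 1.52) is scaled down from an unadjusted 6.23 to roughly 3.74, in line with Column D of Table II.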




The Behavioral Approach fashioned by Johnston and Pen-

nypacker (1970) utilizes two criterion lines in specifying

the performance objective. The sample record in Figure 4

illustrates their method of graphing performance results.

The abscissa represents time in days of the week. The or-

dinate represents rate of performance. The uppermost diagon-

al is the criterion line for rate correct; the lower is cri-

terion for rate incorrect. In order to meet criterion, the

student must perform at a rate which places the cumulative

performance lines (jagged lines) above the rate correct cri-

terion line and below the rate incorrect criterion line.

In the present study, the graphing of results is re-

tained for purposes of providing continuous and cumulative

feedback; however, the form of the graph is changed. Since

performances were instructor-paced rather than student-paced,

the individual variations in frequency of performance were

not of interest in the present study. Therefore, it was

appropriate to convert the abscissa to a line representing

units rather than days of the week. This done, there was

no further need to retain diagonal criterion lines. A diag-
onal display of performance is desirable in a free operant









FIGURE 4
Illustrative example of the performance graphing
procedure used by Johnston and Pennypacker










situation because of the meaningful Skinnerian cumulative

record it produces. In the present study, the students were

not free to respond because of the fixity of the schedule;

hence, any such record would have been meaningless. A graph-

ing procedure of the kind shown in Figure 5 was used instead.

Note that criterion lines were not entered on the graph un-

til after the tenth performance session. During the first

half of the course, students were given feedback only on

their position relative to the class mean. Absolute criter-

ion lines for letter grades were not established until some

measure of the class's performance was available to help de-

termine what those criteria should be. This was recognized

as a retreat from the preferred method of stating criterion

in terms of some externally specified behavioral objective,

yet was considered necessary because the system did not per-

mit students to repeat units to achieve mastery. An object-

ive of complete mastery is appropriate only if students are

provided the opportunity to reach it.

Figure 5 is a reproduction of the graphic record of Stu-

dent 24. The two performance lines represent adjusted dif-

ference rates for individual units (solid line) and mean cum-

ulative performance (broken line). At the second data point,

the broken line represents the mean of performances 1 and 2;

at the third, the mean of performances 1, 2 and 3; and so on.
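The broken line is thus a running mean. A minimal sketch of that bookkeeping, with illustrative rates of our own choosing rather than actual data from the study:

```python
# Running mean of adjusted difference rates, recomputed at each
# data point: point n is the mean of performances 1 through n.

def cumulative_means(rates):
    means, total = [], 0.0
    for n, rate in enumerate(rates, start=1):
        total += rate
        means.append(total / n)
    return means

cumulative_means([2.75, 3.25, 3.00])  # [2.75, 3.0, 3.0]
```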











FIGURE 5
Graphic record of Student 24










The Use of Adjusted Difference Scores

The use of twin performance criteria (rate correct and

rate incorrect) poses problems. First, the nature of the

feedback may be ambiguous. A student who finds himself well

above criterion for rate correct but not below criterion for

incorrect rate receives equivocal feedback. The reinforcing

properties of the feedback may be vitiated by the complexity

of the information that is provided. Second, the system en-

courages suspension of performance in some cases. The stu-

dent who is well above correct criterion but has not met in-

correct criterion is well advised to stop responding. The

easiest way to make no errors is to make no responses. Third,

the system can be punitive for high rates of responding.

Consider the case of the student who reads 20 cards in five

minutes. Answering 18 correctly and missing only two, he

will achieve rates of 3.6 correct and 0.4 incorrect. If

criteria for rates correct and incorrect are 3.5 and 0.4,

respectively, criterion will have been met in both cases by

this student. Compare this with the performance of the stu-

dent who reads and answers the items more briskly. He goes

through 30 items in the time it took the first student to go

through 20, missing three and getting 27 correct. His ratio

of correct to incorrect is the same, 9:1, but the rates he

achieves do not reach criterion for the unit. While his rate

correct is well above criterion, his rate incorrect, 0.6,

falls on the wrong side of the incorrect criterion line.










This is the student who will have to modify his behavior in

order to meet criterion. He will have to decrease his read-

ing speed, increase his response latency, or stop performing

altogether to give the criterion line a chance to catch up

to his incorrect rate. Experience has shown that the con-

tingency arrangements are potent enough to produce this mod-

ification; the question is whether the change itself is de-

sirable.

It is possible to overcome these objections with a sin-

gle criterion measure that encompasses all the information

provided by the reading rate, correct rate, and incorrect

rate. Difference rates meet this requirement. Reading rate

minus correct rate equals incorrect rate, and correct rate

minus incorrect rate equals difference rate. The difference

rate is a function of all three components interacting. It

summarizes the interaction in a single score that accurately

reflects the quality and quantity of performance. This com-

prehensiveness makes it especially valuable for research pur-

poses. The statistical analyses performed in this study are

based on adjusted difference rates, except where inappropri-

ate. The exceptions are clearly noted.
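The two hypothetical five-minute performances discussed above can be worked through directly from these definitions. The following Python sketch is ours; the original bookkeeping was done by hand on the forms shown in Figure 6.

```python
# Component rates for a timed session, in responses per minute.
# Card rate minus correct rate equals incorrect rate, and
# correct rate minus incorrect rate equals difference rate.

def session_rates(cards, correct, minutes):
    card_rate = cards / minutes
    correct_rate = correct / minutes
    incorrect_rate = card_rate - correct_rate
    difference_rate = correct_rate - incorrect_rate
    return card_rate, correct_rate, incorrect_rate, difference_rate

# The two students from the text (5-minute session):
slow = session_rates(cards=20, correct=18, minutes=5)   # 4.0, 3.6, 0.4, 3.2
brisk = session_rates(cards=30, correct=27, minutes=5)  # 6.0, 5.4, 0.6, 4.8
```

Under the twin criteria (3.5 correct, 0.4 incorrect) the brisker student fails on errors despite the same 9:1 ratio, while the single difference rate, 4.8 against 3.2, ranks him above the slower student.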

Before leaving this topic, it is necessary to clarify

one further point. The reference to reading rates has been

convenient in the preceding discussion but is actually a

misnomer. The forms exhibited in Figure 6 show the term

"Card Rate" in its place. This is more technically correct,











for it states the rate in number of cards per minute that

the student attempted. "Reading rate" implies reading only,

while card rate includes everything that the student does

from start to finish of the timed performance. While read-

ing time is a substantial part of this, it is not the whole.

Response latency is another major subdivision of card rate.

It would be interesting to observe these variables and re-

cord them precisely, but it has not been within the scope of

this study to do so.

Figure 6 illustrates the bookkeeping method that was

employed in the conversion of raw scores to adjusted differ-

ence scores. The complete forms from which these sample

lines were taken may be seen in Appendix B, along with a

more detailed discussion of the bookkeeping method.


Construction of Written Review Examinations

The two written review exams were each composed of 124

items randomly selected from book and lecture units. The

midterm exam used items from the first eight units; the

final exam, from the remaining eight. Each unit was repre-

sented in the exam proportionately. The random number pro-

cedure of Wyatt and Bridges (1967) was followed, except that

duplicates (items that covered the same point in roughly the

same language) were not allowed. One of the two would be

discarded and the next item number in the same unit se-

lected in its place. When all 124 items had been selected in











Step 1  Raw data collection

Step 2  Computation of adjusted rates (Performance Adjustment
        Sheet; adjustment factors: Green (odd) 1.06, Blue
        (even) .98). For student Jane Doe, Unit B1:
        1.06 x (card rate 3.8, rate correct 3.2, rate
        incorrect 0.61) = (adjusted card rate 4.03, adjusted
        rate correct 3.39, adjusted rate incorrect 0.64)

Step 3  Computation and accumulation of adjusted difference
        rates (Adjusted Performance Record):
        3.39 - 0.64 = 2.75

FIGURE 6

Three step computation of adjusted difference rates











this manner, the items were shuffled and then sorted to as-

sure that no two items from the same unit would be next to

each other. The purpose of this was to assure that no mat-

ter how many items a given student completed in the 30 min-

ute time period, he would have had approximately equal ex-

posure to all units.
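The assembly steps (random selection per unit, duplicate screening, then an ordering in which no two items from the same unit fall next to each other) can be sketched as follows. This is an illustration only: the original drew item numbers by hand from the Wyatt and Bridges tables, and the pool sizes, the `per_unit` parameter, the `is_duplicate` predicate, and the round-robin ordering are our simplifications.

```python
# Illustrative sketch of the exam-assembly procedure described above.
# A seeded RNG stands in for the hand-drawn random numbers; items are
# (unit, number) pairs.
import random

def assemble_exam(pools, per_unit, is_duplicate, rng):
    """Draw per_unit items from each unit, skipping duplicates, then
    interleave units round-robin so no two same-unit items are adjacent."""
    drawn = {}
    for unit, items in pools.items():
        chosen = []
        for item in rng.sample(items, len(items)):  # random draw order
            if len(chosen) == per_unit:
                break
            if not any(is_duplicate(item, c) for c in chosen):
                chosen.append(item)
        drawn[unit] = chosen
    # Round-robin: take position i of every unit in turn, so consecutive
    # exam items always come from different units.
    return [drawn[u][i] for i in range(per_unit) for u in sorted(drawn)]

pools = {u: [(u, n) for n in range(40)] for u in
         ["B1", "B2", "B3", "B4", "L1", "L2", "L3", "L4"]}
exam = assemble_exam(pools, per_unit=4,
                     is_duplicate=lambda a, b: False,
                     rng=random.Random(1))
```

With eight units and four items each, the resulting 32-item deck never places two items from the same unit side by side.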

At the time of exam administration, students were in-

structed to take each item in turn. Any items left blank,

prior to the last item for which an answer was given, were

counted as incorrect. Students completing the exam prior to

expiration of the allotted 30 minutes were instructed to

place their exams and answer sheets face down on their desks,

and raise their hands to have their actual times recorded.


Control of Hawthorne Effect

The departure from routine classroom procedure was

striking enough to make the experimental nature of the course

obvious to the students. A Hawthorne effect appeared to be

inevitable. Fortunately, the design did not include compar-

isons between Treatment and No Treatment groups. It was

possible to control for the Hawthorne effect by maximizing it

for all students irrespective of group assignment. Appen-

dix A contains a statement regarding research intentions

that was given to all students the first day of class ("Notes

to the Student").









Research Strategy

Schutz and Baker (1968) have discussed the merits of a

subtractive approach to experimentation in the behavioral

sciences. The strategy is distinguished from the tradition-

al additive method that has grown out of agricultural stat-

istics. In the additive approach, independent variables

are added to a presumably neutral situation. The effects

of the addition on the dependent variable are observed and

compared to a control group that does not receive the ex-

perimental treatment. In the subtractive approach, the em-

phasis is reversed. A multivariate effect is produced at

the starting point and the effect of subtracting components

of interest is observed. By process of elimination, it may

be possible to learn which elements of a given multivariate

effect are critical and which ones superfluous.

In the present study, the starting point is a multivar-

iate amalgamation of components from Keller's Personalized

Instruction method and the Behavioral Approach of Johnston

and Pennypacker. Student performance on small units of mat-

erial is recorded directly and continuously, feedback on per-

formance is given immediately and displayed in relation to a

terminal goal, and tutorial assistance is available at the

time of performance. Admittedly, the important elements of

unit mastery and self-pacing are withheld from the present

model. As previously noted, the effect of these components

is to produce a uniform level of performance in all students.










An experiment that purports to study performance differences

cannot accommodate movement-based procedures that obviate

those differences.

Through the subtractive approach, the previously

trained student proctor is replaced by an untrained, current-

ly enrolled student proctor. If the selection criteria for

proctors are of any appreciable effect in the system, their

omission should be reflected in diminished student perform-

ance. On the other hand, if the presumed tutorial benefit

provided by a previously trained proctor is in reality no

greater than that provided by an interested listener (cur-

rently enrolled student), then the performance of the group

using currently enrolled proctors should suffer no perform-

ance loss.

There are sound reasons for hypothesizing that the lat-

ter of these two outcomes will occur; that is, that the Class-

room group will perform at rates as good as or better than

the Control group rates. Foremost among these is the prac-

tice benefit that the Classroom group receives while proc-

toring the performance of classmates. Despite the efforts

to control it, a residual practice benefit is to be expected.

Its magnitude and relationship to specific test items com-

prise questions which must be answered empirically--again,

using the subtractive approach. The omission to be made in

this case is of the students who received practice prior to

performing; in other words, those who, on any given unit,










performed second. Those who performed first make up the group whose conditions of performance were equivalent to the Control group with respect to the practice variable. A comparison of the performance of these two groups should clarify the findings of the principal question. If the subtraction of practice results in a loss of performance vis-a-vis the Control group, the effect of practice is established. If not, it becomes necessary to look elsewhere for the sources of power within the system.
















CHAPTER III


RESULTS


The Effect of Proctoring by Previously Trained versus Currently Enrolled Proctors

The chief burden of the experiment was to answer the question whether students in a self-contained classroom, monitoring each other's performance under group performance conditions, could demonstrate a level of performance at least equivalent to that which has been repeatedly produced in the individualized instructional models that Keller, Johnston and Pennypacker, and others have used. The affirmative evidence is of several kinds:

1. Between groups comparison on the 16 first-time verbal performance sessions

2. Between groups comparison on the two repeat performances (five minute verbal responding on the alternate form of the unit on which the student had performed most poorly)

3. Between groups comparison on the two 30 minute written performance sessions

These are considered in turn.










Between groups comparison on the 16 first-time verbal performance sessions

Visual inspection of Figure 7 shows that Classroom group performance exceeded Control group performance at all but three of the 16 data points. The differences at these points are slight compared to the greater differences in favor of the Classroom group. Selection of a statistical test to analyze these results was governed by the need to test not only the differences at each point in the series of scores, but also the cumulative effect of these differences. A profile analysis procedure described by Morrison (1967) met these requirements. The first step was to test the two mean vectors for parallelism. In the absence of parallelism, a group interaction must be assumed, in which case the profile analysis would be inappropriate. A t2 of 2.19 was obtained (df = 15/14, n.s.), too low to reject the hypothesis of parallel mean vectors; i.e., the mean vectors were found to be statistically parallel.

A second test was conducted to determine whether the mean vectors were at the same level. A significant difference was found between the Classroom and Control groups (t2 = 3.69, df = 16/13, P < .025). The performance of the Classroom group was reliably superior to that of the Control group.
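The two-stage profile analysis can be sketched as follows. This is an illustrative reconstruction, not the author's computation; the function names (hotelling_t2, parallelism_t2, level_t2) and the use of numpy are assumptions. The parallelism test applies Hotelling's T2 to adjacent-session difference scores; given parallelism, the level test collapses each student's profile to its mean.

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling T^2; rows are subjects, columns are sessions."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n1, n2 = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled within-group covariance matrix of the session scores.
    pooled = ((n1 - 1) * np.atleast_2d(np.cov(x, rowvar=False))
              + (n2 - 1) * np.atleast_2d(np.cov(y, rowvar=False))) / (n1 + n2 - 2)
    return float(n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(pooled, diff))

def parallelism_t2(x, y):
    """Stage 1: T^2 on adjacent-session differences (are the profiles parallel?)."""
    return hotelling_t2(np.diff(x, axis=1), np.diff(y, axis=1))

def level_t2(x, y):
    """Stage 2: given parallelism, T^2 on each subject's profile mean (equal level?)."""
    return hotelling_t2(x.mean(axis=1, keepdims=True), y.mean(axis=1, keepdims=True))
```

With a single session (p = 1) the statistic reduces to the square of the ordinary two-sample t, which gives a quick check of the arithmetic.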






[Figure 7. Mean performance rates of the Classroom and Control groups on the 16 first-time oral performance sessions.]


Between groups comparison on the two repeat performances

Table IV shows the results of t tests on the two per-

formance sessions which students repeated. No reliable

difference was found in the first of the repeated units,

although the mean for the Classroom group was again higher.

In the second repeat performance, a highly reliable differ-

ence between means was found (t = 2.66, df = 28, P < .02),

in favor of the Classroom group.
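The t tests reported here and in the following section are ordinary two-sample tests with pooled variance (df = 28 for the two groups totaling 30 students). A minimal sketch; the helper name pooled_t is an assumption:

```python
import math

def pooled_t(x, y):
    """Two-sample t statistic with pooled variance; df = len(x) + len(y) - 2."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    # Pooled estimate of the common variance from the two within-group sums of squares.
    sp2 = (sum((v - m1) ** 2 for v in x) + sum((v - m2) ** 2 for v in y)) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
```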


Between groups comparison on the two written performances

A test of the difference between means for the two 30

minute written performances revealed no reliable difference

between Class and Control groups. Table V shows that while

the means for the Classroom group were higher at both mid-

term and final exams, the differences were not reliable

(midterm, t = 1.43, df = 28, n.s.; final, t = 1.46, df = 28,

n.s.).




For purposes of analysing the data, a distinction has been made between the 16 first-time oral response performance sessions and the four review tests. The 16 oral response performance tests were composed of adjunct materials which had not been previously seen by the students, while the four review tests were composed of items that were to some degree familiar. Within the four review tests, a further







[Table IV. Results of t tests comparing Classroom and Control group means on the two repeat performance sessions.]

[Table V. Results of t tests comparing Classroom and Control group means on the midterm and final 30 minute written performance sessions.]










distinction can be made between the two repeat performances, which were done in the oral mode, and the two 30 minute review tests, which were in the written response mode. Figure 8 illustrates the point that writing takes longer than saying. The data supporting this statement are presented in Table VI, indicating reliable differences in both pairs of test scores (Set I, t = 2.00, df = 58, P < .05; Set II, t = 2.06, df = 58, P < .05). Table VII shows that the rate differences are not the result of a lower quality of performance in the written mode. The percentages of correct responding are greater for the written mode in all but one case. The exception is Repeat Test II for the Classroom group. This is the same test that showed the highly reliable difference in Table IV, using rate as the unit of measurement. It appears that this test may have come under an atypical influence. This is taken up in the Discussion section, Chapter IV.


Tests Concerning the Effects of Practice

If the effect of practice were the principal determinant of the performance differences reflected in the 16 oral response sessions, it should be possible to demonstrate this in two ways:

1. By showing that the seven students of the Practice group produced performance rates which were consistently and reliably superior to the rates produced by the




















[Figure 8. Mean performance rates on the repeat tests (oral response mode) and the review exams (written response mode), Classroom and Control groups.]

[Table VI. Comparison of rates of responding on the oral repeat tests and the 30 minute written review tests.]

[Table VII. Percentages of correct responding on the oral and written review tests, by group.]











seven students in the No Practice group.

2. By showing that there was no reliable difference between the No Practice group and the 16 students of the Control group, who also performed under conditions of no practice.


Comparison of the Practice and No Practice Groups

The mean vectors of the Practice and No Practice groups are plotted in Figure 9. The small number of subjects in the two groups prevents the use of the profile analytic procedure used earlier. Even if profile analysis were available, it is doubtful that the mean vectors would pass the test of parallelism. Seven of the scores are in the unexpected direction.

A test of the consistency with which the practice effect produces superior scores can be made by hypothesizing no difference in the probability of a superior score between groups (Ho: P = .5). With nine superior for the Practice group and seven not superior, a z value of .27 is obtained, insufficient to reject the null hypothesis. There is thus no evidence for a consistency of superior scores associated with a practice effect.
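The z value comes from the normal approximation to the binomial sign test under the null hypothesis P = .5. A sketch; the function name and the continuity-correction choice are assumptions (with n = 16 and nine superior scores, the corrected approximation gives z = 0.25, close to the .27 reported):

```python
import math

def sign_test_z(successes, n, p=0.5, continuity=True):
    """Normal approximation to the binomial test of H0: P(success) = p."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    x = float(successes)
    if continuity:  # move the count half a unit toward the null mean
        if x > mean:
            x -= 0.5
        elif x < mean:
            x += 0.5
    return (x - mean) / sd
```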

It appears that the lecture units offer the best chance of finding reliable differences between the means of the Practice and No Practice groups. Lectures 3, 5 and 6, and Book 10 show fairly good separation. The results of t tests









































































[Figure 9. Mean performance rates of the Practice and No Practice groups on the 16 units.]










on the means for each unit are shown in Table VIII. Clearly, there is no support in this series of tests for the notion that practice produces reliably superior performance rates. Only one of the tests achieves reliability at the .05 level. This is hardly more than chance alone would produce in a series of 16 trials.


Comparison of the No Practice and Control Groups

The more powerful profile analysis test is suitable when the No Practice group is compared to the Control group (n = 16). Figure 10 shows the mean vectors to be roughly parallel. The test of parallelism confirms this conclusion (t2 = 1.58, df = 15/7, n.s.). The test of the level of mean vectors shows quite emphatically that the seven Classroom students who performed without practice were superior to the Control group at a high level of reliability (t2 = 10.49, df = 16/6, P < .01).

The evidence from both comparisons (Practice versus No Practice, No Practice versus Control) supports the conclusion that the reliably higher achievement of the Classroom group cannot be attributed to practice benefit. Practice benefit in this case is defined as the benefit a student derives from proctoring the performance of another student prior to taking his own performance test on the same unit. It will be recalled that the experiment was designed to reduce the effects of practice by having Second Perform students take


















[Table VIII. Results of t tests comparing Practice and No Practice group means on each of the 16 units.]

[Figure 10. Mean performance rates of the No Practice and Control groups on the 16 units.]










an alternate form of the test. The evidence leads to the

conclusion that this strategy was successful, and that an

explanation for the superior achievement of the Classroom

group must be sought elsewhere.


Validation of Student-Proctored Performance Sessions

Another factor which might reasonably be expected to account for the between-groups performance difference is the tendency of human beings to collaborate when it is mutually advantageous to do so. The experimental design included two written performance samples for purposes of testing this question. These were taken under standard examination conditions, the only proctor being the instructor.

In the analysis of these data, the use of adjusted difference scores is inappropriate. It has been shown that the written exams produced lower performance rates than the review tests taken in the oral response mode. It was also established that these differences in performance rates were not related to a higher frequency of errors. The mean percentage of correct responses was generally higher for the written mode than the oral mode. It appears that the differences may be the result of the longer time it takes to write an answer than to speak it. In view of this inequity, the present analysis utilizes percent correct as the unit of measurement.

If members of the Classroom group had collaborated to










report higher performance rates than they had actually earned, evidence of this should be seen in comparisons of oral response performance and written performance. Collaboration was possible in the former case, not possible in the latter.

Review of the section on written exam construction (p. 38) will affirm that the written examinations were administered in the same way as the oral performance sessions, except for the response mode and the duration of performing. Students were required to take each item in turn and could not attain a higher percentage correct by skipping around, looking for familiar items. It might be noted also that the nature of the task should have effectively inhibited any last minute attempts to master material which had not indeed been mastered during the preceding oral performance sessions. The 124 items for each validation exam were randomly selected from pools of more than 500 constructed response items. To cram effectively for a performance task of this magnitude would have been, first, exceedingly difficult, and, second, not very rewarding. The written exams counted for no more than any other performance sample (five percent of the grade).
Based on these considerations, the assumption was made

that written exam performance would provide a valid certifi-

cation of prior performance records if the written exam

score was approximately equal to the mean of the preceding

oral response units. In other words, if members of the











Classroom group had reported performance rates that were based on something other than their true capability, it should be seen in comparisons of oral response performance and written performance. For this purpose, it is necessary to make individual rather than group comparisons. Group validation of the kind provided above in Table VII (p. 54) might obscure the fact that one or two students enjoyed unintended advantages that the group as a whole did not.

Table IX shows the mean percent correct for the eight oral response performance sessions on which Exam I was based. The second column shows percent correct on the written exam; the difference between the two is shown in the third column. The same data are given for both the Control and Classroom groups, and the procedure is repeated in Table X for the second set of oral response rates and Exam II.

The question of what criterion to accept for test validation can be safely ignored. The differences in test performance are very slight, more often positive than negative, and of greater magnitude in the positive direction than the negative. Only one set of scores out of 60 shows a percentage loss in excess of 10 percent. This student is in the Control group, where the question of collaboration is not at issue. There is virtually no difference between groups in this comparison. The data point to the conclusion that the performance differences shown in previous analyses cannot be attributed to collaboration on the part of Classroom students.
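The gain columns of Tables IX and X are simple per-student differences between written-exam percent correct and the mean percent correct of the preceding oral sessions. A sketch (the helper name validation_gains is an assumption):

```python
def validation_gains(oral_means, exam_scores):
    """Per-student gain of written-exam percent correct over the mean of the
    eight preceding oral response sessions; gains near zero or positive
    certify the student-proctored oral records."""
    return [round(exam - oral, 4) for oral, exam in zip(oral_means, exam_scores)]
```

For example, the first two Control students in Table X (oral .7352 and .9193, final exam .8107 and .9200) give gains of +.0755 and +.0007.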










TABLE IX

Comparison of the percent correct on eight oral response
samples with percent correct on the midterm written
review exam, by Classroom and Control groups

[Per-student columns (mean oral percent correct, percent correct
on the midterm exam, and gain in midterm over oral, for the
Classroom and Control groups) are garbled in the source.
Summary figures:]

Nr Students showing:        Mean Percentage of:
            Loss    Gain        Loss      Gain
Class          1      13      -.0478    +.0877
Control        2      14      -.0815    +.0899










TABLE X

Comparison of the percent correct on eight oral response
samples with percent correct on the final written
review exam, by Classroom and Control groups

              CLASSROOM                            CONTROL
Mean Oral   Percent     Gain in      Mean Oral   Percent     Gain in
Percent     Correct     Final        Percent     Correct     Final
Correct     Final Exam  over Oral    Correct     Final Exam  over Oral

 .7828       .7754      -.0074        .7352       .8107      +.0755
 .9574       .9848      +.0274        .9193       .9200      +.0007
 .8333       .8546      +.0213        .8258       .7336      -.0922
 .9619       .9545      +.0229        .7920       .7748      -.0172
 .7320       .6900      -.0420        .7575       .8474      +.0899
 .7305       .8800      +.1495        .7253       .7506      +.0253
 .8727       .8463      -.0264        .6576       .7593      +.1017
 .8376       .8058      -.0318        .6495       .5583      -.0912
 .8382       .8958      +.0576        .8007       .9200      +.1193
 .8050       .7897      -.0153        .8789       .8140      -.0649
 .8035       .8945      +.0910        .7362       .8147      +.0785
 .9009       .9683      +.0674        .8426       .9356      +.0930
 .6994       .7467      +.0473        .8142       .7484      -.0658
 .7573       .7506      -.0067        .7422       .8107      +.0685
                                      .5138       .6666      +.1528
                                      .8539       .7748      -.0791

Nr Students Showing:        Mean Percentage of:
            Loss    Gain        Loss      Gain
Class          6       8      -.0216    +.0606
Control        6      10      -.0684    +.0805











Proctor Training and Student Performance

From the foregoing it might appear that the Control group proctors contributed nothing of consequence to student performance. Such a conclusion is not dictated by the data. What the study shows is that the performance of the Classroom group was superior to that of the Control group, and that neither the effects of practice nor collaboration account for this difference. No warrant is provided for assuming that previously trained proctors contribute nothing to student performance. It would be as reasonable to assume that currently enrolled proctors contribute as much to their students as their previously trained counterparts do.

A measure of this variable, albeit a weak one, is available within the experimental design. Proctoring for the Control group was carried out under two levels of proctor sophistication--a fairly high level of expertise for the book units, and no prior experience with material in the lecture units. Assuming that the Control group proctors gave substantial tutorial assistance to their students on the book units and almost none on the lecture units, it follows that the proportion of incorrect lecture items on the written review test should be higher than the proportion for book items. Unfortunately, a test of this kind does not take into account the possibility of unequal levels of difficulty between lecture and book units. If book units are generally the more difficult of the two, the hypothesized effect would tend to










be cancelled out; conversely, if lecture units were more difficult, the effect would be exaggerated. A test is still possible, however, if the Classroom group is used as a control. The tutorial benefit provided by Classroom proctors would be based on equal familiarity with lecture and book units alike; hence, free of the systematic variance which is presumed to be present in the Control group.

Table XI shows chi square contingency tables for the first and second review examinations. Expected frequencies for the null hypothesis are shown in the cell insets and observed frequencies in the cells proper. For the first test, a chi square of 4.632 is obtained, sufficient to reject the null hypothesis (P < .05). In the second test, the differences between expected and observed frequencies are negligible (X2 = .102, df = 1, n.s.).
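The chi square statistic for such a 2 x 2 table compares each observed cell count with its expected frequency (row total times column total over the grand total). A minimal sketch; the function name and the example counts below are illustrative, not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi square for a 2 x 2 contingency table given as [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n  # the null-hypothesis cell inset
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2
```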


Review of Results

1. Classroom group performance in the sixteen oral response sessions was superior to Control group performance at a moderately high level of reliability (P < .025).

2. On the review tests, the mean performance score for the Classroom group was higher in four out of four cases; however, only one of these differences was reliable using individual t tests. This was the second test of oral responding, in which the Classroom group was reliably higher (P < .02).
























[Table XI. Chi square contingency tables of observed and expected frequencies of incorrect lecture and book items on the midterm and final review examinations, Classroom and Control groups.]










3. In an effort to rule out undesirable influences that might account for the above results, two possibilities were evaluated, the effects of practice and the effects of collaboration:

a. Evidence that the experimental design successfully controlled the effects of practice was of two kinds. First, there was no appreciable difference between the performance of the Practice group and the No Practice group. Second, the performance of the No Practice group was superior to the Control group at a high level of reliability (P < .01).

b. Possible student collaboration was controlled through an inspection of differences between oral and written performance. Collaboration was possible for the Classroom group in one condition but not the other. A comparison of the Classroom and Control groups showed the two to be virtually identical.

4. Qualified support was found for the hypothesis that Control group students would not do as well with lecture items on the review tests as Classroom students. The underlying supposition was that the Control proctors, having had no experience with the lecture materials, would provide poor tutorial assistance for the lecture units, and that this would be reflected in the written exam results. A reliable difference (P < .05) was observed for data from the midterm exam but not for the final.
















CHAPTER IV


DISCUSSION


The purpose of this study was to evaluate one of the principal components of an instructional system that has shown exceptional merit in college classrooms (Keller, 1968a; McMichael and Corey, 1969; Johnston and Pennypacker, 1970). The component of interest is the proctoring arrangement. This has been generally accepted as one of the cornerstones of the personalized instruction method and has been the subject of an extraordinary volume of descriptive comment. The eulogizing of the proctor has focused on his tutorial skills, his competence in handling a variety of student problems, and his capacity to engage peers in the kind of interpersonal relationship that is seldom possible for the instructor of a large class. Diagnosis and remediation are within his range; individualized instruction, through timely curriculum modification, appears to be part of his routine (cf., Johnston and Pennypacker, 1970).

Yet the proctor differs from the currently enrolled student mainly by virtue of his having completed the course a term or two earlier. Any qualities he might have achieved










beyond that single distinction might as readily be found in the currently enrolled student. It has been the thesis of this study that if a previously trained peer can effectively participate in the training of a college student, a currently enrolled classmate can, too; that the amount of learning accomplished by the currently enrolled student is only weakly related to the proctor's command of the subject matter, if at all; and that what the currently enrolled student loses by not having a previously trained proctor, he recovers through his own experience of proctoring another. These points are discussed in the light of the experimental evidence.


The Effectiveness of a System that Uses Currently Enrolled Classmates as Proctors

Perhaps the strongest result in the study is the reliable difference between the group proctored by previously trained students and the group proctored by currently enrolled classmates. In 17 of 20 performance measures, the Classroom students achieved higher performance rates than the students who were proctored by "experts" (those who had previously demonstrated a high degree of competence with the material). With the possible effects of practice and collaboration ruled out as sources of this difference, the variance may be assumed to lie within the different treatments. However, there is no justification for concluding that the currently enrolled student is a more effective tutor than the previously trained student. The two proctoring systems include differences other than the tutorial assistance variable (see Table I, p. 23), and the experiment does not adequately test the components operating within the systems, only the global effects of the systems themselves.

The stated purpose of the study was to evaluate a pragmatic alternative to an established proctoring method. To meet its objective, it need only show that a proctoring system utilizing currently enrolled students as proctors can be as effective as a system that uses previously trained students. This objective has been met. The evidence is consistent across a relatively large number of performance samples and is reliable.

The Proctor Training Variable

As noted above, the experiment does not provide an adequate test of the question whether a proctor's prior training in the subject matter favorably influences the performance of his students. However, weak evidence was found for the obverse proposition: that a proctor's lack of training in the subject matter can have an adverse effect on the performance of his students. In the first of two written review tests, the Control group students showed a significantly greater proportion of incorrect responses among lecture items than the Classroom group did. It was on the lecture items that the Control group proctors had received no prior training; and, presumably, these were the items on which they would have been able to give their students no tutorial assistance.

Had the same result been observed on the final exam, the finding could be stated with greater conviction. A possible explanation of the discrepancy is that the novelty of the proctoring arrangement had diminished by the second half of the course, thereby decreasing its tutorial effectiveness. A decrease in tutorial effectiveness would have reduced the bias favoring book items. This in turn would result in the nearly equal proportions that were found on the final exam. Diminished tutorial effectiveness would also account for the slightly greater performance loss that the class experienced in the final exam (cf., Table IX and Table X).

In any event, the evidence on this point is inconclusive. A good test could be made by adding completely naive proctors to the model which uses previously trained and currently enrolled proctors. If the other conditions of the performance session were held constant, between-group differences could be more closely tied to the proctor training variable. It should not be surprising to find no difference at all. One is hard pressed to find a consistent relationship between student performance and the sophistication of the instructor; why should it be expected between proctor and student? The facilitation of performance appears to be a complex phenomenon, to say the least.
Ferster (1968) puts the question of proctor sophistication in perspective. In the course he has reported, it is the function of the responder to respond and the listener to listen. One might assume that the responder learns to respond by responding. The presence of an interested listener is enough to maintain the behavior for most students. The point is that one needs no special training to become an interested listener. A currently enrolled student should be able to do this as well as anyone else. Certainly he has the incentive.

Another point should be made before leaving this topic. The proctor training variable has been dichotomized between those who were previously trained and those who were currently enrolled. This does not mean that the currently enrolled students were untrained vis-a-vis the previously trained proctors. The Classroom group proctor was required to prepare for the performance sample in just the way that his protege was. What he lacked in depth of experience, compared to the Control group proctor, he probably made up in the immediacy of his contact with the material.

In sum, the net effect of the proctor training variable is thought to have been very slight in the present experiment. It is necessary to look elsewhere for the variables that made the difference.











The Effect of Proctoring on the Proctor's Own Performance

It has been understood from the beginning (Keller, 1968a) that a proctor's duties strengthened his own learning as well as that of his protege. However, the contingencies are not arranged to assure this result. What the previously trained proctor learns in performance sessions is almost incidental to his true purpose, more unavoidable than planned. This contrasts sharply with the currently enrolled proctor. He goes into the proctoring assignment under an altogether different set of contingencies. He knows that some of the material he listens to will appear on a review test he will take. Many of the items he hears will confirm or clarify his understanding of the material he has prepared for his own performance session. He has an incentive to listen thoughtfully to the performance of his protege, even though his own performance test will be composed of a different set of items.

These contingencies remain in effect not only during the timed portion of the performance session but throughout the entire 50 minute period. The discussion is maintained in strength because everyone has an interest in maintaining it. The coverage and exchange of information is further enhanced by the rotation of students for the second performance sample. If Student A is proctored by Student B in the first sample, he will be paired with Student C for the second. Meanwhile, Student B will be taking his performance

test with Student D. Both C and D will have brought to

their discussions the information and ideas that they ex-

changed with Students E and F during the first half of the

performance period.

It is in the redundancy of the arrangement, in the

overlapping and inseparability of roles, that a powerful

opportunity for strengthening performance appears to reside.

The contingencies are arranged to make the most of it. Each

student is reinforced both as speaker and listener. One may

speculate that the group pools its resources and cross-

fertilizes its members in a way that is not possible for

students who keep appointments for individualized instruc-

tion.

Unfortunately, the experimental design is silent on

this question. The foregoing description is not based on

experimental evidence but on observation of how the system

worked once it was set in motion. This was unknown prior

to the time it was tried; indeed, it was an empirical ques-

tion whether it would work at all. The experimental results

should be clarified in respect to one point, however. It

has been shown that the practice effect was successfully

controlled in this experiment, yet the suggestion is now

made that it was precisely the additional discussion, ex-

change of information between students--i.e., "practice"--

that accounts for the superior performance rates of the

Classroom group.











To resolve this apparent contradiction, it is necessary

to make a distinction between desirable and undesirable prac-

tice. In the former case, the practice is based on the stu-

dent's preparation of the assignment (reading the book,

studying the lecture notes, etc.). It is just this kind of

practice that the course is designed to maximize. It was

the practice on specific test items that had to be controlled

in the Classroom model. Keller comments on this distinction

in the following passage, part of which was quoted in the

introductory chapter:

The 'response' is not simply the completion of
a prepared statement through the insertion of a
word or phrase. Rather, it may be thought of as
the resultant of many such responses, better de-
scribed as the understanding of a principle, a
formula, or a concept, or the ability to use an
experimental technique. Advance within the pro-
gram depends on something more than the appear-
ance of a confirming word or the presentation of
a new frame . . . (Keller, 1968a, p. 84)

In other words, performance on the test items represents an

understanding of the material, not simply an ability to mem-

orize responses to stimulus items. If this were not so, it

would be satisfactory to distribute copies of the one thous-

and or more items and let the students commit them to mem-

ory.

The test of practice effect consisted of demonstrating,

first, that students who proctored first and then took their

performance test did not show significantly better perform-

ance; and, second, that the margin of superiority over Con-











trol group students was maintained even when the group that

could have benefitted from practice (the Second Perform

group) was removed from the comparison. Attacking this ques-

tion from different fronts not only makes the point emphat-

ically but also yields information with different shades of

meaning. In the first instance, it is shown that students

will not always do better simply by listening first to an-

other student's performance; in the second, it is shown that

the variables that produced a reliable difference between

Control and Classroom groups were at work during the entire

performance period, not just the last 25 minutes of it (when

the Second Perform group took the performance test).

This is where the data stops--close to an answer but

incapable of providing it. It might be possible to answer

it conclusively by setting up still another treatment group,

one that received proctoring under Classroom group condi-

tions but did not proctor in turn. Comparing a group such

as this to Control and Classroom groups would tell what part

of the variance, if any, can be attributed to proctoring as

a learning experience (strengthener of performance). If the

No Proctoring group came up with scores drawn from the same

population as the Control group, the hypothesized value of

proctoring as a learning experience would be confirmed; if

drawn from the same population as the Classroom group, re-

jected; and if drawn from a population somewhere between the

two, it would be confirmed but would not tell the whole story.











If this last alternative were the case, it might be

fruitful to look also at the role of respondents in depres-

sing performance. Some students find the unfamiliar per-

formance session anxiety-producing. A situation that evokes

respondents can seriously weaken performance (Ferster and

Perrott, 1968, p. 130). The question is whether sitting

alone in an experimental atmosphere with a previously trained

peer (one who has already achieved success with the material)

is more anxiety-producing than taking the performance ses-

sions in the more familiar classroom along with others who

are in the same boat. In this study, at least three people

were enough bothered by this factor to complain of it open-

ly. Two of these were in the Control group, one in the
Classroom group.


A Gratuitous Finding: The Effect of Time on Practice
Benefit

It has been shown that the experiment successfully con-

trolled the effects of practice on specific stimulus items.

Demonstrating that an effect did not take place is appropri-

ate in an experiment that specifically sets out to assure

this result; however, the conclusion would be strengthened

if information were available to show what happens when the

effect is not controlled. There is one test among the per-

formance samples that may provide information on this point.

The observation was made earlier (p. 50) that an atyp-










ical influence appeared to be at work for the Classroom

group in Repeat Test II. This influence could very well

have been the effect of practice on specific stimulus items.

It will be noted (Figure 1, p. 22) that Repeat Test II was

scheduled at the midpoint of the second half of the course.

This limited to four the number of performance units from

which repeat tests could be chosen. Consequently, the

Classroom group students would have listened, within the

preceding two weeks, to the same items on which they were

now taking the performance test. This would not be true for

the Control group because of the requirement that all stu-

dents use the alternate form of the unit on which they had

performed most poorly. The Classroom group students would

have had the same advantage in Repeat Test I; however, this

test came at the end of the first half of the course. This

meant that some of the students would be repeating units

that had not been seen in several weeks (by actual count,

the mean number of units separating the performances for Re-

peat Test I was 4.2; for Repeat Test II, 2.3).

The foregoing is offered as a gratuitous result. The

scheduling of Repeat Test II was not an experimental manipu-

lation but a matter of necessity. If the above interpreta-

tion is correct, the retention of repeat material across

time periods might be an interesting variable to investi-

gate. This would be especially true if a comparison of the

two types of practice benefit discussed above (specific item

versus general) were to be experimentally tested.










Departures from the Behavioral Model

Ironically, this evaluation of the behavioral approach

to college instruction has consisted of stripping it of its

uniquely behavioral elements. The objection might be raised

that the suspension of unit mastery and self-pacing so al-

ters the format of the personalized instruction model that

the test of the proctoring question cannot produce a valid

answer. Keller (1968b) warned of the consequences of chang-
ing components within the system, stating that "a change in

one area may conceivably have bad effects in others" (Keller,

1968b, p. 11). He called particular attention to the most

distinctive features of the system: Self-pacing, unit mas-

tery, and student proctors. The present study has changed

one of these and had to abandon the other two in order to

evaluate the change.

It is possible that the effectiveness of previously

trained proctors cannot be demonstrated unless the integrity

of the system is maintained. The repetition of unmastered

units may reflect the salutary effects of tutoring better

than the two review exams used in this experiment. However,

there is not much support for this view in data given by

Johnston and Pennypacker (1970). In their studies, it was

repeatedly shown that the mean gain realized on repeat per-

formances was modest, especially in terms of the reduction

of errors (mean error reductions of 5 percent or less on

three studies). Working within this kind of margin, it










would be exceedingly difficult to demonstrate the benefit of

tutorial assistance.

In any event, the critical thing to remember is that

there was not a shred of evidence anywhere in this study

that would demean the ability of the previously trained proc-

tor. The evidence does not refute the assumptions on which

Keller (1968a) and Johnston and Pennypacker (1970) have

based the selection of proctors; it does, however, suggest

that whatever it is that proctors do to facilitate student

performance, the currently enrolled student can do as well.

It appears that the self-contained class, using cur-

rently enrolled students for proctors, can be as effective

as systems which utilize experienced proctors.


Application of the Findings

Nothing in the above should be taken to mean that the
experimental method used in this study is the best way to

organize a college class. Decisions on how to use compon-

ents of the system must be made individually to suit local

circumstances and curriculum objectives. For example, a

distinction was made earlier between the behavior principles

subject matter and the non-sequential material of the ex-

ceptional child survey course. A criterion objective of 90

percent correct responding may be appropriate for behavior

principles, just as an accuracy criterion of 100 percent

might be required for a pharmacist mixing medicinal com-










pounds. Neither of these cases should govern the selection

of criterion for the exceptional child course. The nature

of the subject matter, the objectives of the instructor, the

needs of the individual students--these are the proper deter-

minants of such questions. If it is appropriate to bring 90

percent of the class to criterion, unit mastery and self-

pacing should be used. However, it should be kept in mind

that unit mastery is expensive in terms of the number of

test items required to use it properly. If the student sees

the same items the second time round that he saw the first,

he may be learning little more than discrete responses to

stimulus items.

It is in view of considerations such as these that the

method used in this study represents a pragmatic alternative

to established proctoring patterns. The contention here is

that the behavioral or personalized approach to college in-

struction may be more flexible than its application to date

has shown it to be. It would seem that the key to its

broadest and most successful use will not be found in a dog-

matic adherence to established methods but in a willingness

to explore the many possible combinations of components, and

how those combinations will produce a variety of effects in

student performance.
















CHAPTER V


SUMMARY AND CONCLUSIONS


This study was concerned with Keller's (1968a) behavior-

al approach to college instruction. Using Johnston and Pen-

nypacker's (1970) verbal response unit as the measure of

performance, the experiment tested the question whether cur-

rently enrolled students could, as a pragmatic alternative

to the established model, fulfill the proctoring role in a

self-contained class. To allow a proper test of the ques-

tion to be made, it was necessary to withhold use of the

unit mastery and self-pacing features. The results were:

1. The group using currently enrolled proctors achiev-

ed reliably better performance results than the group using

previously trained proctors.

2. The effects of practice on specific stimulus items

were successfully controlled, ruling out this undesirable

source of variance as a possible cause of the difference
between groups.

3. The effects of possible collaboration between stu-

dents who were proctoring each other were ruled out as a
source of the difference between groups.










4. The rate of oral responding on five minute repeat

performance samples was found to be reliably higher than

written responding on 30 minute review tests.

5. The majority of students achieved higher percent-

ages of correct responses on the written review tests than

on the eight oral response performances that preceded them.

The mean gain in correct percentage was of greater magnitude

than the mean loss.

From the results, the following conclusions were drawn:

1. The applicability of the behavioral approach (Kell-

er, 1968a; Johnston and Pennypacker, 1970) to college in-

struction has been successfully extended to another subject

matter field.

2. Proctoring the perfoi'rnance of other students appears

to provide a powerful opportunity for strengthening perform-

ance, particularly for students who are reinforced for using

it as such; i.e., currently enrolled students.

3. The combination of learning as student proctor and

as student responder leads to higher rates of accurate re-

sponding than does learning as student responder alone.

4. The proctor-responder combination is a feasible

classroom arrangement, and can be used without lessening the

quality of performance through illegitimate collaboration or

practice on specific stimulus items.

5. The method described in this study is a suitable
pragmatic alternative to the Personalized Instruction method








of Keller (1968a) and the Behavioral Approach of Johnston
and Pennypacker (1970), and may be used with or without the
unit mastery and self-pacing features.

































APPENDICES




























APPENDIX A

Student Handbook
(a) Notes to the Student
(b) Rules of the Game
(c) Schedule










NOTES TO THE STUDENT

The procedure we will follow this quarter has already

been explained in class. These additional comments will tell

you more about the system. We hope you will see your part in

the course more clearly and understand some of the reasons we

have set it up this way. Bear in mind that there is much to

be learned about precision teaching in the classroom. Your

comments, criticisms and suggestions are earnestly solicited.

Feel free to contact the instructor at any time. His name is

John Gaynor. He can be found at the Learning Center and Spec-

ial Education Department, Room 43, Norman Hall.


Frequent Performance under Appropriate Reinforcement

Precision teaching is a product of behavior analysis. It

seeks to apply experimentally derived learning principles to

the classroom. Although it is similar in some respects to

programmed instruction, it comes closer to being a middle step

between conventional methods and the teaching machine. The

science of teaching is exploited but so is the human element.

You will probably have more interpersonal contact in this

course than you do in others.

Teachers have known for centuries that children learn by

doing. So do adults, college students, pigeons and rats. Yet

much of our education system fails to provide sufficient

opportunity for the student to perform--to "do"--under ap-

propriate reinforcement conditions; that is to say, under









conditions which attach real consequences to the performance.

A real consequence in this context is the grade that is earned

in the course. The typical college course provides two such

performance opportunities, a midterm and final exam. In the

course you are about to take, "doing" occurs more frequently,

twice a week rather than twice a quarter. Instead of two mas-

sive doses of performance, there are twenty bite-size doses.

One advantage of having many small performance sessions

is that you get a lot of information (sometimes called feed-

back) on where you stand with respect to the grade you wish to

earn. If the preparation you make doesn't get you the grade

you want, you don't have to wait until after you've bombed the

midterm to find it out.


Stating Objectives in Behavioral Terms

There is growing recognition of the importance of stating

course objectives in behavioral terms. If you plan to teach

in Florida, you will almost certainly become better acquainted

with this facet of the behavioral approach to instruction.

Florida school teachers are now engaged in revising public

school curricula along these lines. Initially, there was a

mandate to complete the job by December, 1969. Now the time

period has been extended to the end of the current school term.

The point is that we are talking about something that is very

much here and now. Your exposure to a behavior analytic teach-

ing system will put you in a good position to judge its worth,









to understand its limitations and to work out its logical ex-

tensions when ycu have a class of your own.


Verbal Responding: The Operational Unit

Ideally, the teacher should be able to say exactly what

the student will be able to "do" after completing the course.

In this course the expectation is that you will be able to talk

about exceptional children with the facility of people who reg-

ularly work in the field of Special Education. The tricky part

is to translate this kind of statement into operational terms.

We have chosen verbal responding as the most suitable operation-

al unit.

Obviously, responding to stimulus items isn't the same as

spontaneous shop talk in the natural environment. But it comes

closer than, say, a 40 item multiple choice test, which is what

we used last year at this time. And it has this singular ad-

vantage: It is something we can do in the classroom setting.

Evaluating the facility of your on-the-job performance in the

real world is what we hope to do ultimately, but we simply

haven't reached that degree of sophistication yet.

Hence, we will infer your ability from performance samples

taken continuously over the next ten weeks. Your rate of cor-
rect responding will tell something about your grasp of the

material (the better you know it, the faster you go); and the

incorrect rate will tell something about the accuracy and com-

pleteness of it. Taken together, they should render a fairly









good account of what you are learning from week to week.


Research Function of the Precision Class

Surely you recognize that we wouldn't go to all this

trouble just to figure out what grade to give you. College

instructors have been giving grades since the beginning of

time. The science of grade giving is not the issue here--

the application of learning principles to the classroom is.

We are actively engaged in research that may help to improve

the quality of classroom teaching. Consequently, what you do

this quarter may be of greater than usual importance. With

your cooperation and a little bit of luck, we stand a chance

of making a contribution to the science of teaching.

Your participation in the project requires nothing ex-

traordinary. The time requirement for this course is no

greater than for others. In fact, it may be less, for there

are no papers to be written, no abstracts, no outside reading

assignments, no cramming for midterm or final exams. If any

special demand is made, it is that you try to meet your per-

formance sessions on schedule. There are several reasons for

this, but none more important than the involvement of others

--your manager, the student you manage, or both.

The only other thing we would ask is that you try to ap-

proach this course in the spirit of scientific inquiry. The

cornerstone of the entire project is the collection of accu-

rate data. One of the questions we will try to answer is