Citation
Enhancing Learning Trials

Material Information

Title:
Enhancing Learning Trials Examining the Effects of Increasing Opportunities to Respond on Active Student Responding and Student Behavior
Creator:
Haydon, Todd
Place of Publication:
[Gainesville, Fla.]
Publisher:
University of Florida
Publication Date:
2008
Language:
English
Physical Description:
1 online resource (147 p.)

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Special Education
Committee Chair:
Scott, Terry M.
Committee Members:
Conroy, Maureen A.
McLeskey, James L.
Sindelar, Paul T.
Conwill, William Louis
Graduation Date:
8/9/2008

Subjects

Subjects / Keywords:
Classrooms (jstor)
Educational research (jstor)
Experimentation (jstor)
Learning (jstor)
Learning rate (jstor)
Mathematical dependent variables (jstor)
Special education (jstor)
Special needs students (jstor)
Students (jstor)
Teachers (jstor)
Special Education -- Dissertations, Academic -- UF
active, behavior, choral, disorders, opportunities, respond, responding, student, to
Genre:
Electronic Thesis or Dissertation
born-digital (sobekcm)
Special Education thesis, Ph.D.

Notes

Abstract:
ENHANCING LEARNING TRIALS: EXAMINING THE EFFECTS OF INCREASING OPPORTUNITIES TO RESPOND ON ACTIVE STUDENT RESPONDING AND STUDENT BEHAVIOR. A key characteristic of students with or at-risk for emotional or behavioral disorders (EBD) is displaying off-task, disruptive, and aggressive classroom behaviors. Furthermore, many students with or at risk for EBD are behind academically, and over a period of time the discrepancy between their skill level and the level of their normally achieving peers widens. In addition, students with EBD may be part of numerous confrontations in the classroom, interrupt the flow of instruction, and may affect the behaviors of other students, creating a chaotic environment for their teachers and all students in the classroom. To address the academic and behavioral needs of students with or at-risk for EBD, this study utilized an alternating treatments design to investigate the effects of three types of opportunities to respond (OTR) procedures on the disruptive behavior, off-task behavior, and active student responding (ASR) of high-risk students during group instruction in a 2nd grade general education classroom. Results of this study suggest that choral responding is a more effective instructional strategy than individual responding in terms of decreasing disruptive and off-task behavior. In terms of disruptive behavior specifically, mixed responding appears to be a more effective instructional strategy than either choral or individual responding alone. Results for off-task behavior and ASR are less clear. Results from this study replicate and extend earlier research in which authors found similar results for disruptive and off-task behavior. Future research should compare the three types of OTR with students of different ages and across various subject areas such as math and science, and with children identified with various learning disabilities or with autism. (en)
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2008.
Local:
Adviser: Scott, Terry M.
Statement of Responsibility:
by Todd Haydon.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Haydon, Todd. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Classification:
LD1780 2008 (lcc)

Downloads

This item has the following downloads:


Full Text

PAGE 1

ENHANCING LEARNING TRIALS: EXAMINING THE EFFECTS OF INCREASING OPPORTUNITIES TO RESPOND ON ACTIVE STUDENT RESPONDING AND STUDENT BEHAVIOR

By

TODD HAYDON

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2008

PAGE 2

2008 Todd Haydon

PAGE 3

To my wife Kathleen and son Christopher

PAGE 4

ACKNOWLEDGMENTS

I would like to thank the following people for guiding and helping me through the process of obtaining a doctorate in Special Education. First, gratitude goes to Kathleen (wife) and Christopher (son) for allowing me to pursue my passion for learning and thinking. I appreciate the opportunity and resources Terry Scott provided me in attending the University of Florida. I valued the commitment, dedication, expertise, and support from Maureen Conroy; she is a great mentor, has allowed me the opportunities to be a researcher and writer, and has turned out to be a good friend. Thanks go to James McLeskey, who provided insightful advice on writing, publishing, and job searches. Special thanks go to Paul Sindelar, who agreed to help with my dissertation study and showed patience while I developed my writing style and asked questions about research methodology. I valued the input from and conversations with William Conwill on families with antisocial youth. I thank Rich Mancil for providing guidance throughout the doctoral program and for our conversations on single subject design and applied behavior analysis. I appreciated all the help the office staff, Shaira Rivas-Otero, Michell York, and Vicki Tucker, have provided for me. I will fondly remember my experience as a doctoral student at the University of Florida.

PAGE 5

TABLE OF CONTENTS

ACKNOWLEDGMENTS  4
LIST OF TABLES  8
LIST OF FIGURES  9
ABSTRACT  10

CHAPTER

1 INTRODUCTION  12
    Overview of a Learning Trial  12
    Statement of the Problem  15
    Significance of the Study  16
    Purpose of the Study  17

2 LITERATURE REVIEW  18
    Conceptual Model of Opportunity to Respond  18
    Teacher Questioning  20
    Importance of Increasing Students' Opportunities to Respond  22
    Type of Students Who Benefit from Receiving Increased Opportunities to Respond  23
    Opportunities to Respond: A Critical Synthesis of the Literature  25
    Method Used to Select Reviewed Studies  25
    Literature Search  26
    Results  28
    Dependent Variables  30
    Independent Variable: Increased Rates of OTR  30
    Description of Implementers  31
    Research Designs  32
    Discussion  34
    Faster Presentation Rate  34
    Opportunities to Respond and Choral Responding  38
    Error Correction  40
    Errorless Learning  44
    Social Validity  44
    Treatment Integrity  48
    Threats to Internal Validity  50
    Generality and Threats to External Validity  52
    Future Research Directions  54
    Summary  56
    Statement of the Problem  58
    Purpose of the Study  60

PAGE 6

3 METHODS  72
    Method  72
    Participants  72
    Setting and Materials  73
    Measurement Procedures  74
    Experimental Procedures  76
    Design  80
    Interobserver Agreement  80
    Treatment Integrity  81
    Social Validity  82

4 RESULTS  87
    Intervention Results  87
    Rate of Disruptive Behavior  88
    Percentage of Off-Task Behavior  92
    Percentage of Active Student Responding  95
    Treatment Integrity  99
    Social Validity  101
    Summary  102

5 DISCUSSION  112
    Overview Findings  113
    Disruptive Behavior  114
    Off-Task Behavior  115
    Active Student Responding (ASR)  116
    Other Considerations  117
    Social Validity  118
    Teachers' Perceived Effectiveness of the Three Types of OTR  118
    Teachers' Likelihood of Using the Intervention in the Future  119
    Interpretation of Findings  119
    Implications for Practice  122
    Limitations  123
    Implications for Future Research  126
    Summary  128

APPENDIX

A SAMPLE LESSON TRIAL  130
B CODING MANUAL  132
C CODING SHEET  136
D TREATMENT INTEGRITY SHEET  137

PAGE 7

E SOCIAL VALIDITY FORM  139

REFERENCES  141
BIOGRAPHICAL SKETCH  147

PAGE 8

LIST OF TABLES

2-1 Description of studies examining effects of increased opportunities to respond  65
3-1 Descriptions of participants  84
3-2 Interobserver agreement data  85
4-1 Means of rate of disruptive behavior, percentages of intervals off-task and active student responding (ASR)  104
4-2 Social validity results  106
4-3 Treatment integrity results  107

PAGE 9

LIST OF FIGURES

2-1 Learning trial  62
2-2 Classifications of studies in the literature review  63
2-3 Choral responding literature  64
4-1 Rate of disruptive behavior per minute. Open circles = individual responding, closed squares = choral responding, and open triangles = mixed responding.  109
4-2 Percentage of intervals off-task. Open circles = individual responding, closed squares = choral responding, and open triangles = mixed responding.  110
4-3 Percentage of active student responding. Open circles = individual responding, closed squares = choral responding, and open triangles = mixed responding.  111

PAGE 10

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

ENHANCING LEARNING TRIALS: EXAMINING THE EFFECTS OF INCREASING OPPORTUNITIES TO RESPOND ON ACTIVE STUDENT RESPONDING AND STUDENT BEHAVIOR

By Todd Haydon

August 2008

Chair: Terrance Scott
Major: Special Education

A key characteristic of students with or at-risk for emotional or behavioral disorders (EBD) is displaying off-task, disruptive, and aggressive classroom behaviors. Furthermore, many students with or at risk for EBD are behind academically, and over a period of time the discrepancy between their skill level and the level of their normally achieving peers widens. In addition, students with EBD may be part of numerous confrontations in the classroom, interrupt the flow of instruction, and may affect the behaviors of other students, creating a chaotic environment for their teachers and all students in the classroom.

To address the academic and behavioral needs of students with or at-risk for EBD, this study utilized an alternating treatments design to investigate the effects of three types of opportunities to respond (OTR) procedures on the disruptive behavior, off-task behavior, and active student responding (ASR) of high-risk students during group instruction in a 2nd grade general education classroom.

Results of this study suggest that choral responding is a more effective instructional strategy than individual responding in terms of decreasing disruptive and off-task behavior. In terms of disruptive behavior specifically, mixed responding appears to be a more effective instructional strategy than either choral or individual responding alone. Results for off-task

PAGE 11

behavior and ASR are less clear. Results from this study replicate and extend earlier research in which authors found similar results for disruptive and off-task behavior. Future research should compare the three types of OTR with students of different ages and across various subject areas such as math and science, and with children identified with various learning disabilities or with autism.

PAGE 12

CHAPTER 1
INTRODUCTION

The purpose of this chapter is to provide a brief overview of the literature and rationale for enhancing learning trials with students identified with or at-risk for emotional or behavioral disorders (EBD). Specifically, this literature overview highlights the importance of increasing the number of opportunities to respond (OTR) by using choral responding procedures (CR), error correction procedures, and decreasing the amount of time in between learning trials (an intertrial interval; ITI). Results from studies show a functional relation between increased rates of OTR and an increase in correct responses and a decrease in disruptive behavior and off-task behavior for students with or at risk for EBD. The introduction concludes with a discussion of the contributions this study makes to existing research and is followed by the study's research questions.

Overview of a Learning Trial

One way to conceptualize instructional practices and learning behaviors is to use a learning trial. A learning trial consists of a three-term, stimulus-response-consequent contingency sequence (Skinner, Fletcher, & Hennington, 1996). An example of a learning trial is when a teacher presents a science word on a flash card (i.e., stimulus), the student recites the word aloud (i.e., response), and the teacher then says, "Good answer" (i.e., consequent) (Skinner et al., 1996). Researchers have shown that improving the quality and increasing the quantity of learning trials results in higher learning rates (Barbetta & Heward, 1993; Carnine, 1976; Miller, Hall, & Heward, 1995). A qualitatively stronger learning trial would require fewer repetitions to meet a criterion level, whereas increasing the quantity of learning trials would result in the completion of more learning trials in a fixed amount of time, resulting in an increase in student learning rates without increasing allocated time (Skinner et al.).

PAGE 13

Skinner et al. (1996) claimed that previous research showed that increasing the number of learning trials could increase learning levels during the acquisition, fluency building, and maintenance stages of learning. However, the types of questions teachers ask their students also influence student outcomes. For example, students who have skill deficits are more likely to respond correctly to fact questions (questions that allow students to recall and practice previously learned material and typically require one- to three-word answers). Higher achieving students may respond to higher order questions that require time to process and assimilate new information (Sitko & Slemon, 1982).

Skinner et al. (1996) described four procedures that have been shown to increase both learning trial rates and learning rates during teacher-led instruction. Two of the four methods pertain to this study. The first strategy is to reduce the intertrial interval (ITI); that is, the time between the end of one trial (i.e., consequent or feedback delivered) and the beginning of the next trial (i.e., antecedent presented). Carnine (1976) demonstrated that reducing ITIs could lead to increased learning trial rates, correct answering rates, and increased on-task behavior levels. Furthermore, Carnine suggested that during longer ITIs, some students might misbehave and as a result not attend to later instruction. A second approach is to utilize choral responding. Choral responding occurs when all students are asked to actively respond following the presentation of an instructional stimulus. When teachers use choral responding, the number of students responding per learning trial increases (Miller et al., 1995; Sindelar, Bursuck, & Halle, 1986). In choral responding (CR), all students in the classroom respond in unison to the teacher's question (Heward, 1994). An example of CR is when students respond after the teacher asks the entire class, "What is 4 times 4?" Another example is when the students say, "Tallahassee" in response to the teacher's question, "What is the capital of Florida?"
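The arithmetic behind these two procedures can be made concrete with a short sketch. Assuming, purely for illustration, a fixed trial duration and class size (these numbers are not values reported in this dissertation), shortening the intertrial interval raises the number of learning trials per minute, and choral responding multiplies the number of active student responses per trial by the number of students:

```python
def trials_per_minute(trial_seconds: float, iti_seconds: float) -> float:
    """Complete learning trials (antecedent-response-consequence) per minute,
    given the trial duration and the intertrial interval (ITI) in seconds."""
    return 60.0 / (trial_seconds + iti_seconds)

def responses_per_minute(trial_seconds: float, iti_seconds: float, responders: int) -> float:
    """Active student responses (ASR) per minute: one responder per trial under
    individual responding, or the whole class under choral responding."""
    return trials_per_minute(trial_seconds, iti_seconds) * responders

# Illustrative (assumed) values: 10-second trials in a class of 20 students.
print(responses_per_minute(10, 5, 1))    # individual responding, 5 s ITI  -> 4.0 ASR/min
print(responses_per_minute(10, 1, 1))    # individual responding, 1 s ITI  -> ~5.5 ASR/min
print(responses_per_minute(10, 1, 20))   # choral responding, 1 s ITI      -> ~109 ASR/min
```

Either change, or both together, increases active student responding without adding allocated instructional time, which is the point Skinner et al. (1996) make.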

PAGE 14

The purpose of using CR is to increase the number of active student responses (ASR) and, as a result, increase the number of correct responses and the amount of time students are engaged during instruction.

Results from several studies indicate a positive relationship between an increased rate of choral responding and students' correct responses, on-task behavior, and disruptive behaviors (Carnine, 1976; McKenzie & Henry, 1979; Miller et al., 1995; Sainato, Strain, & Lyon, 1987; Sutherland, Alder, & Gunter, 2003). McKenzie and Henry claimed that "testlike events" (their term for choral responding) might be an effective practice in maintaining and sustaining attention of all students in a large setting while allowing the teacher to monitor each student's understanding of each question. The results of the literature indicate that engaging students during instruction by using choral responding increases their academic achievement and may help teachers reduce the rate of student disruptive behavior in their classrooms while keeping students on-task. Furthermore, researchers have demonstrated clear effects of using choral responding across students with varying characteristics (male/female, age groups, students with learning disabilities, differing IQ levels), settings (resource rooms, self-contained classrooms, and small groups in regular education classrooms), and subject areas, such as math, geography, and health science (Barbetta & Heward, 1993; Carnine, 1976; Skinner, Smith, & McLean, 1994).

One of the strengths of choral responding is that teachers can assess whether students understand the content of the lesson, because they receive immediate feedback from student responses. Often students are hesitant to admit that they do not understand the content of the lesson in front of an entire class; therefore, a benefit of using CR is that a teacher can observe a particular student in the context of a group response and determine if a student has verbalized an incorrect

PAGE 15

response. Then, the teacher can cue the entire class several times by saying, "Again, class," while providing additional practice and observing the responses of the particular student (Heward, 1994).

In summary, students are more likely to demonstrate correct responses and increases in academic achievement in a classroom environment where a teacher incorporates CR in instructional activities (Heward, Courson, & Narayan, 1989). In addition, teachers can use CR during academic instruction to assess student understanding and provide immediate feedback to students' responses. When students are engaged and actively responding to questions, the teacher can focus on academic content rather than being concerned with inappropriate student behaviors. The result is that lessons are more engaging, delivered at a brisk pace, and reinforcing for students. Another positive outcome is that students are less likely to engage in disruptive behavior.

Statement of the Problem

A key characteristic of students with or at-risk for EBD is displaying off-task, disruptive, and aggressive classroom behaviors (Kauffman, 2005). Furthermore, many students with or at risk for EBD are behind academically, and over a period of time the discrepancy between their skill level and the level of their normally achieving peers widens (Lambert, Cartledge, Heward, & Lo, 2006). In addition, students with EBD may be part of numerous confrontations in the classroom, interrupt the flow of instruction, and affect the behaviors of other students, creating a chaotic environment for their teachers and students in the classroom (Sutherland, Wehby, & Yoder, 2002).

A few researchers (Gunter et al., 1993; Sutherland et al., 2002) hypothesize that some students with EBD may display disruptive behavior as a result of poor academic instruction, ineffective teacher feedback, and minimal positive feedback. Other students with EBD may

PAGE 16

exhibit disruptive behavior before the development of academic difficulties (Gunter et al., 1994; Gunter & Coutinho, 1997; Sutherland et al., 2003). In addition, these researchers have suggested that there is a strong inverse relationship between high rates of problem behavior and low rates of instruction. Therefore, effective instruction is one instructional method available to reduce negative behavior in the classroom (Engelmann & Colvin, 1983). For example, teachers who are able to engage students and make them successful during instructional time may reduce these students' frustration, while increasing their participation and success in classroom instructional activities.

Significance of the Study

Eighteen studies were included in the review for the current study; they can be divided into two broad categories based on the type of responses of the students (verbal or written) and further divided by arrangement of students (i.e., studies involving the entire class or a small group). The results of the literature review indicated that four studies were implemented with an entire class. In these four studies, researchers employed one experiment in a general education classroom (McKenzie & Henry, 1979), two in a special education classroom (Sainato et al., 1987; Sutherland et al., 2003), and one in a combination of both settings (Miller et al., 1995).

The proposed study extends the learning trial literature in several ways. First, the effectiveness of decreasing students' disruptive and off-task behavior as well as increasing students' ASR was examined by comparing three types of OTR (individual, choral, and a mixture of 70% choral responding and 30% individual responding) in a 2nd grade general education classroom setting. Second, the three types of OTR represented the use of an antecedent instructional strategy at the beginning of a learning trial as opposed to an error correction strategy at the end of a learning trial. Third, the three types of OTR were used with students identified as at-risk for EBD.

PAGE 17

Purpose of the Study

The purpose of this study was to investigate the following research question: How does a choral responding procedure compare to an individual responding procedure and a mixture of choral and individual responding procedures during group instruction in a general education classroom on the disruptive behavior, off-task behavior, and active student responding of high-risk students?

PAGE 18

CHAPTER 2
LITERATURE REVIEW

The purpose of this chapter is to examine the literature and provide a critical review of the effects of opportunities to respond on academic and behavioral outcomes of students identified with disabilities or at-risk for emotional and behavioral disorders (EBD) in classroom settings. First, a conceptual model of OTR is presented and defined. Next, a discussion of two types (drill and higher cognitive) of teacher questions (i.e., opportunities to respond) is presented. Then, results from studies are analyzed and presented. Finally, the literature review concludes with a summary that synthesizes major trends and patterns on the topical strategy of increasing academic opportunities to respond (OTR).

Conceptual Model of Opportunity to Respond

Over the years, a number of researchers have defined opportunities to respond (OTR) in various ways. Greenwood, Delquadri, and Hall (1984) first defined opportunities to respond as the interaction between: (a) teacher formulated instructional antecedent (the materials presented, prompts, questions asked, signals to respond, etc.), and (b) their success in establishing the academic responding desired or implied by the materials (p. 64). Sutherland et al. (2003) defined OTR as when the teacher asked a question (of an individual or the group) that required a specific response or was open-ended, with the purpose of having a student explain his or her thought process (p. 241). Heward et al. (1989) stated that teachers give OTR by using choral responding (CR) and defined CR as all students in the groups orally responding in unison to a teacher-posed question (p. 72). Stanley and Greenwood (1983) defined OTR as the rate or frequency at which students engage in specific academic responses (p. 370). Hall, Delquadri, Greenwood, and

PAGE 19

Thurston (1982) described OTR as students' rate of responding during academic instruction.

Clearly, the above definitions vary in their descriptions, depending on whether OTR was defined as a teacher or student behavior. When OTR is defined as a teacher behavior, it is described as a type of questioning procedure, prompt, or cueing technique. When OTR is defined from the point of view of student behavior, it is defined as a type of response to a teacher question. For the purposes of this review, OTR is defined as teacher questioning behavior and an instructional antecedent stimulus that begins a learning trial or ends a learning trial as an error correction technique (Ferkis, Belfiore, & Skinner, 1997; Sutherland et al., 2003). A schematic overview of this definition is provided (Figure 2-1).

Giving students OTR is an instructional strategy used in the direct-instruction model (Carnine, 1976). This model uses teacher explanations and modeling combined with student practice and feedback to teach concepts and procedural skills. According to Rosenshine (1986), direct instruction can be divided into six teacher functions: review of previous material, presentation of new material, guided practice, feedback and corrections, independent practice, and weekly and monthly reviews. Review of previous material checks prerequisite skills and knowledge. Presentation of new material gives students additional explanations and several examples, and checks for student understanding. Guided practice (via giving students OTR) permits teachers to supervise initial student practice. Providing feedback confirms student understanding. Independent practice provides the additional practice students need to acquire a skill. Weekly and monthly review offers additional successful practice and monitors student progress.

PAGE 20

Ferkis et al. (1997) refer to the components of a learning trial as a three-term contingency: antecedent, response, and consequence (A-R-C). In this case, the OTR is the antecedent stimulus in the learning trial model. After an OTR, a verbal or written student response occurs (Figure 2-1). The learning trial concludes with corrective feedback given by the teacher, which becomes the consequence. Learning trials may be repeated, and the latency between the end of one learning trial (i.e., the consequent stimulus) and the beginning of the next learning trial (i.e., antecedent stimulus) is called an intertrial interval (ITI) (Skinner et al., 1994). Because an OTR is delivered in a question format, it is important to understand whether qualitative differences in teacher questions have been discussed in the literature. The following sections provide an overview of the teacher questioning literature.

Teacher Questioning

The topic of OTR has its origin in the teacher questioning literature. In 1912, Stevens (one of the early researchers to study teacher questions) (as cited in Brualdi, 1998) stated that approximately 80% of instruction consisted of teacher questions. Researchers have since defined and developed many systems for classifying teacher questioning (e.g., Gall, 1970; Samson, Sirykowski, Weinstein, & Walberg, 1987).

Sitko and Slemon's taxonomy of teacher questions (developed in the 1970s) first consisted of seven categories: (a) affective judgment, (b) discrimination, (c) recall, (d) sequencing/paraphrasing, (e) conceptual relating, (f) inference, and (g) problem solving. However, Sitko and Slemon's taxonomy was later changed to four categories (i.e., discrimination, recall, relating concepts, and problem solving) when teachers had difficulty coding several cognitive categories (Sitko & Slemon, 1982). Since the 1980s,

PAGE 21

researchers have simplified these existing coding systems and have classified teacher questions into two major categories: factual and higher cognitive questions (Gall, 1984). Factual questions are types of questions that allow students to recall information that was previously presented. Examples of OTR that are factual questions are: "What is 4 times 4?" "What is kinetic energy?" Higher order questions are cognitive questions that require students to analyze, evaluate, or manipulate information and use independent thinking skills (Gall, 1984). An example of an OTR that is a higher cognitive question is: "What do you think can be done to slow down global warming? How did you arrive at that answer?"

The literature on the effectiveness of factual questions in comparison to higher cognitive questions has mixed results. In a review of three large correlational studies, Rosenshine (1976) (as cited in Gall, 1984) concluded that students learn best when they are provided "narrow" (his term for factual) questions. Winne (1979) reviewed the same studies as Rosenshine, plus two additional experiments, and concluded that the type of question makes little difference on student achievement. However, Redfield and Rousseau (1981) concluded that lesson plans that consist of predominantly higher cognitive questions have a positive effect on student achievement. In response to these contradictory results of earlier studies, Gall concluded that the contradiction could be resolved by examining the type of students who participated in these studies. For example, in the Rosenshine study, participants consisted of low-income, primary grade level students, while participants in the other two studies represented a greater range of economic status, ability level, and grade level. Therefore, Gall (1984) concluded that: (a) elementary aged novice learners respond primarily to fact questions that promote basic

PAGE 22

skill building; and (b) students with average or high cognitive ability respond to higher cognitive questions that foster the independent thinking required for students to be successful at the secondary level.

Brualdi (1998) provided a descriptive analysis of teacher questions and divided teacher-questioning techniques into good and bad categories. According to Brualdi, teachers who use good questioning techniques: (1) elicit a high percentage of correct responses, (2) allow sufficient wait time, and (3) give feedback to student responses. The result is higher student achievement and a greater number of positive student-teacher interactions. Bad questioning techniques consist of: (1) asking vague questions (e.g., "What do you think of the author of the story?"), and (2) asking questions that are too abstract for students' level of understanding (e.g., asking a kindergarten class the following question: "Why do we use daylight savings time?"). It is important that teachers are aware of the type of questions to ask students depending on whether new or previous information is being practiced (Gunter, Reffel, Barnett, Lee, & Patrick, 2004).

In summary, factual questions are questions that allow students to recall and practice previously learned material, while higher order questions allow students to process and assimilate new information and require more elaborate and extensive answers. In addition, factual questions typically require one- to three-word answers. Students who have skill deficits are more likely to respond correctly to factual questions than higher order questions (Sitko & Slemon, 1982).

Importance of Increasing Students' Opportunities to Respond

Giving students sufficient opportunities to respond is important because researchers suggest that OTR is linked to on-task behavior and engagement during instruction (Carnine, 1976; Sainato et al., 1987; Sutherland et al., 2003). When presented with OTR,

PAGE 23

particularly in the form of factual questions, elementary aged students who are slow learners are more likely to answer a question and have a correct response in comparison to being asked higher cognitive type questions. As a result, they are able to stay on-task and remain engaged during instruction (Gall, 1984; Rosenshine, 1980; Gunter, Shores, Jack, Denny, & DePaepe, 1994). In addition, when OTR occurs, teachers can give students practice and feedback by using factual questions and thereby quickly assess student understanding. During reading instruction, the use of OTR in the form of factual questions can cue students and help focus their attention on particular passages in textbooks. Finally, the call and response format used when teachers ask factual questions closely resembles the format of short answer and multiple choice questions of conventional tests that are used to determine the amount of learning at the end of a curriculum unit (Gall; McKenzie & Henry, 1979).

In summary, if teachers want to increase the active engagement and number of correct responses and decrease disruptive behaviors for students with skill deficits, providing OTR in the form of factual questions, which cover information that has recently been reviewed in textbooks, or providing students enough information within the question itself, is an effective practice (Gunter et al., 1994).

Type of Students Who Benefit from Receiving Increased Opportunities to Respond

Although all students may benefit from receiving OTR, researchers suggest that students who have skill deficits benefit the most, because they receive increased chances to learn and demonstrate their understanding of instructional material (Gall, 1984). Good (1970) found that students, particularly students who are low achievers, are not provided equal opportunities to respond. Specifically, Good suggested that teachers may fear that low achieving students could experience criticism from their peers when they have an

PAGE 24

incorrect response, or teachers are concerned that low achieving students will not have the correct answer and in turn interrupt the flow of instruction. When the class is asked to volunteer for an opportunity to respond (e.g., "Class, can anyone give me the definition of photosynthesis?"), low achieving students are passed over by the teacher, and higher achieving students are typically called on by the teacher. As a result, low achieving students may often fail to receive the practice and feedback that is necessary for achievement gains. Good makes an analogy of the above situation to a baseball team, where regular players (i.e., high achievers) get more playing time and reserve players (i.e., low achievers) get very little playing time. The former analogy is relevant to students with emotional or behavioral disorders (EBD) because they exhibit both academic and behavioral deficits, and these dual deficits can make it difficult for teachers to provide effective instruction (Kauffman, 2005).

Good and Brophy (2003) encouraged teachers to call on students who do not volunteer (raise their hands) when given an OTR, in order to help maintain their focus. Some support for the authors' suggestion may be found in one study. Jones and Gerig (1994) used qualitative and quantitative methods to examine classroom interaction patterns among middle school students who were identified as silent (i.e., non-hand raisers) and non-silent (i.e., hand raisers). Results of their study showed that the four teachers who participated in the study directed their questions proportionally to both types of students to increase learning.

However, researchers have shown in other studies that students at risk for developing more challenging behaviors did not receive equal chances to respond to teacher questions. Two studies using lag sequential analysis (a technique to study

PAGE 25

interactions and behavior between individuals by calculating the probability of one event preceding or following another event) showed that students at-risk for EBD received fewer OTR and made fewer academic responses (Gunter, Jack, Shores, Carrell, & Flowers, 1993; Van Acker, Grant, & Henry, 1996). A study by Carr, Taylor, and Robinson (1991) had similar results. These authors found that teachers provided less instruction to those children who engaged in problem behavior during instruction time than those children who typically did not demonstrate problem behaviors during instruction time.

Giving students OTR is an engaging teaching strategy that teachers can use to keep students on-task. In addition, when teachers give low achieving students OTR after an incorrect response, there is a greater probability that these students will emit a correct response. When students who are at-risk for school failure receive corrective feedback, the implications are that they may experience more success in school, receive additional instruction time, have fewer disruptive behaviors, and experience additional positive interactions with their teachers. In the following section, a review of the literature examining OTR is presented, and the implications of using this teaching strategy with students who have learning difficulties and challenging behavior are examined.

Opportunities to Respond: A Critical Synthesis of the Literature

Method Used to Select Reviewed Studies

Inclusion criteria. The purpose of this review is to provide a critical analysis of the effects of opportunities to respond to academic requests on the academic and behavioral outcomes of students identified with disabilities or at-risk for EBD in classroom or analogue settings. The studies included in the review were selected based on a priori determined criteria of relevancy and methodological sufficiency.
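As a concrete gloss on the parenthetical definition of lag sequential analysis given above: the technique estimates how likely one coded classroom event is to be followed by another. The sketch below is a minimal, hypothetical illustration of a lag-1 transition probability; the event codes and the observation stream are invented for the example and are not data from Gunter et al. (1993) or Van Acker et al. (1996).

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def lag1_probability(events: list[str], antecedent: str, target: str) -> float:
    """Estimate P(target at time t+1 | antecedent at time t) from a coded
    event stream, i.e., a lag-1 sequential (transition) probability."""
    pairs = Counter(pairwise(events))
    antecedent_count = sum(n for (first, _), n in pairs.items() if first == antecedent)
    if antecedent_count == 0:
        return 0.0
    return pairs[(antecedent, target)] / antecedent_count

# Hypothetical coded stream: OTR = teacher question to the target student,
# ASR = academic response, DIS = disruptive behavior, OTH = other behavior.
stream = ["OTR", "ASR", "OTH", "DIS", "OTR", "ASR", "OTH", "OTR", "DIS"]
print(lag1_probability(stream, "OTR", "ASR"))  # probability an OTR is followed by an ASR
```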

PAGE 26

The criteria selected included studies that examined increasing rates of OTR through choral responding or hand raising, or that compared choral responding to individual responding.

The literature review consisted of peer reviewed, published studies that examined the effectiveness of increasing rates of teachers' use of OTR (independent variable) on students' academic and behavioral outcomes (dependent variables) (i.e., sight word mastery, written math problems, on-task behavior, correct responses, disruptive behavior). Participants in the studies included students, grades PK-12, with EBD, or who demonstrated problem behaviors that may place them at-risk for EBD (e.g., off-task, disruptive, or aggressive behavior), or with Learning Disabilities (LD), or children identified with autism. Studies across a variety of settings were included (i.e., regular education classrooms, special education classrooms, or analog, clinical settings). Case studies and experiments using single subject methodology were included. Studies that examined teachers utilizing response cards (a different type of response behavior than verbal responding) to increase rates of OTR were excluded from the review. Thus, studies comparing response cards with hand raising or choral responding were also excluded.

Literature Search

To identify articles for inclusion in the review, several search strategies were used. First, the author searched the following computerized databases: ERIC, Academic Search Premier, PsycInfo, Psychology and Behavioral Sciences Collection, and Wilson Web. Keywords used to search the databases included: opportunities to respond, active teaching, teacher questioning, choral responding, unison responding, ordered responding, individual responding, active student responding, active learning, and academic responding. Next, the author conducted ancestral searches of reference lists to find other relevant studies that met inclusion criteria. In addition, an ancestral search of a reference list of an earlier

PAGE 27

review on the topic of opportunities to respond (Sutherland & Wehby, 2001) was conducted. Third, a prominent book was examined to obtain references: Behavior Analysis in Education: Focus on Measurably Superior Instruction (Gardner, Sainato, Cooper, Heron, Heward, Eshleman, & Grossi, 1994). Fourth, a manual and online search of five journals was conducted. These journals were selected based on an earlier limited literature review examining the effects of OTR by Sutherland and Wehby (2001), and because these journals contain articles on instructional strategies specifically for students with or at-risk for EBD. Because of their availability and access in the university library, the following journals were hand searched from January 1985 to November 2006: (a) Education and Treatment of Children and (b) Behavioral Disorders. The Journal of Behavioral Education was hand searched from 1991 to 2006, The Journal of Applied Behavior Analysis was hand searched from Spring 1968 to November 2006, and Preventing School Failure was searched online from Winter 1990 to Fall 2006.

Eighteen studies met inclusion criteria and were included in this review. These studies can be divided into two categories based on the type of responses of the students (verbal and written) and further divided by format (i.e., studies involving the entire class or a small group) (Figure 2-2). Six of these studies were most relevant to the proposed investigation, because they were implemented with an entire class or compared choral and individual responding. Of these six, researchers implemented one in a general education classroom (McKenzie & Henry, 1979), three in a special education classroom (Sainato et al., 1987; Sutherland et al., 2003; Wolery, Ault, Gast, & Griffen, 1992), one in a combination of both types of settings (Miller et al., 1995), and one in a clinical setting (Sindelar et al., 1986).

PAGE 28

Five studies (Barbetta, Heron, & Heward, 1993; Ferkis et al., 1997; Sindelar et al., 1986; Skinner & Shapiro, 1989; Skinner et al., 1994) examined the effects of increased rates of OTR on sight word mastery. Four studies (Carnine, 1976; Sainato et al., 1987; Sutherland et al., 2003; West & Sloane, 1986) examined the relationship of increased rates of OTR to student disruptive behavior, on-task behavior, and correct response performance. Three studies (Miller et al., 1995; Skinner, Belfiore, Mace, Williams-Wilson, & Johns, 1997; Skinner, Ford, & Yunker, 1991) examined the effects of increased rates of OTR on written multiplication performance. One study examined the effects of increased rates of OTR on acquisition and maintenance of health facts (Sterling, Barbetta, Heward, & Heron, 1997). One study examined the effects of increased rates of OTR by using an error correction procedure during a geography lesson (Barbetta & Heward, 1993). Another study examined the effects of improving the quality of OTR through a talk/mand procedure on student disruptive behavior (Gunter et al., 1994). Koegel, Dunlap, and Dyer (1980) examined the effects of increased rates of OTR on the performance of various tasks (verbal imitation, object discrimination, etc.). One study (Wolery et al., 1992) compared choral and individual responding on community-sign words. McKenzie and Henry (1979) examined the effects of teacher questions (testlike events) on attention and correct responses during a science lesson. Findings and key characteristics of these 18 studies are presented (Table 2-1).

Results

Participants. A total of 127 students (55 in one experimental study and 72 across 17 single subject studies) served as participants across the 18 studies. Twenty-five subjects were female, 47 were male, and 55 participants' gender was not reported. The number of participants in each study ranged from 1 to 55. Two studies reported the race of each participant (i.e., 5

PAGE 29

Non-white and 47 Caucasian; 8 African American and one Caucasian student) (McKenzie & Henry, 1979; Sutherland et al., 2003). Full-scale intelligence quotient (IQ) scores were reported in ten studies (i.e., Barbetta & Heward, 1993; Barbetta et al., 1993; Gunter et al., 1994; Koegel et al., 1980; Skinner et al., 1989; Skinner et al., 1991; Skinner et al., 1994; Sterling et al., 1997; West & Sloan, 1986; Wolery et al., 1992). IQ scores were measured by the Kaufman Assessment Battery for Children, the Stanford-Binet, or the Wechsler Intelligence Scale for Children-Revised (WISC-R), and the range of IQ scores among the participants was 33 to 115 (M = 78.3). Twenty-six students were identified as having an EBD, 9 as Learning Disabled (LD), 10 as Educable Mentally Retarded (EMR), 4 as having moderate mental retardation (MMR), 3 were diagnosed with autism, 7 as Developmentally Handicapped or Developmentally Delayed, one as Intellectually Handicapped, another as Attention Deficit Hyperactive Disordered (ADHD), and one student as having Oppositional Defiant Disorder (ODD). Three students in the participant pool received special education services in reading in a self-contained classroom according to their Individual Education Programs (IEP), and 6 students received part-time reading resource services in a resource room. The grade level of the participants ranged from PK to 12th grade (M = 4th grade).

Setting of Studies. OTR research has been conducted across settings including self-contained classrooms, clinical settings, and general education classrooms. Seven studies were conducted in self-contained classrooms for students with EBD, MMR, or students with reading disabilities (Barbetta & Heward, 1993; Barbetta et al., 1993; Gunter et al., 1994; Sainato et al., 1987; Sterling et al., 1997; Sutherland et al., 2003; Wolery et al., 1992). Six studies (Koegel et al., 1980; Sindelar et al., 1986; Skinner et al., 1997; Skinner et al., 1991; Skinner et al., 1994; West & Sloane, 1986) were carried out in separate

PAGE 30

rooms, such as a coatroom, testing room, observation room, or a classroom adjacent to the students' main classroom. In one study (Skinner & Shapiro, 1989), the setting was unspecified but was a location at a University Affiliated School for behaviorally disordered students. Only two studies (Carnine, 1976; Ferkis et al., 1997) were conducted in a general education classroom during small group or individual instruction, but not during large group instruction. One study (McKenzie & Henry, 1979) was conducted in a general education classroom setting. In another study (Miller et al., 1995), one experiment took place in a self-contained classroom and another in a general education classroom.

Dependent Variables

Researchers in two studies examined the rate of disruptive behavior (Gunter et al., 1994; West & Sloane, 1986). In addition to disruptive behavior, researchers measured on- or off-task behavior in 6 of the 18 studies (Carnine, 1976; McKenzie & Henry, 1979; Miller et al., 1995; Sainato et al., 1987; Sindelar et al., 1986; Sutherland et al., 2003). Six of the 17 studies (Barbetta & Heward, 1993; Gunter et al., 1994; McKenzie & Henry, 1979; Sainato et al., 1987; Skinner et al., 1991; Sterling et al., 1997) measured active student responding. In 17 of the 18 studies, researchers measured the number of correct responses (i.e., the number of sight words mastered in various subjects such as math, geography, and health science, or community-sign words). Finally, researchers in one study (Koegel et al., 1980) measured various skills (e.g., verbal imitation, object discrimination, number discrimination, etc.).

Independent Variable: Increased Rates of OTR

An opportunity to respond was the independent variable across all studies. However, researchers utilized several methods to increase teachers' rates of OTR during academic

PAGE 31

lessons. For example, Sutherland et al. (2003) used an observation feedback procedure, consisting of several components, to increase the classroom teacher's rate of OTR. Three studies (Ferkis et al., 1997; Sainato et al., 1987; West & Sloane, 1986) used a set (pre-determined) criterion rate and utilized a stopwatch and prompts as methods to increase the rate of OTR per minute during intervention phases in comparison to baseline phases. In four studies (Barbetta & Heward, 1993; Barbetta et al., 1993; Gunter et al., 1994; Sterling et al., 1997), researchers implemented an error correction technique after a student's incorrect response as a strategy to increase the teacher's use of OTR and student correct responses. For example, researchers repeated questions (OTR) until the student responded with the correct response. In 8 studies (Carnine, 1976; Koegel et al., 1980; McKenzie & Henry, 1979; Sindelar et al., 1986; Skinner & Shapiro, 1989; Skinner et al., 1991; Skinner et al., 1994; Skinner et al., 1997), researchers manipulated the presentation rate of questions during lessons to increase the rate of OTR. In one study (Miller et al., 1995), the authors utilized a combination of a faster presentation rate and an error correction procedure (choral responding). A faster presentation rate of OTR was achieved through (a) less delay or no delay between teacher questions, or (b) less delay or no delay between student responses and introduction of the next teacher question. In addition, students may have more OTR when teachers used choral responding (i.e., every student simultaneously receives an OTR) compared to individual responding.

Description of Implementers

Classroom teachers were used to increase the rates of OTR (independent variable) in 7 studies (Gunter et al., 1994; McKenzie & Henry, 1979; Miller et al., 1995; Sainato et al., 1987; Sterling et al., 1997; Sutherland et al., 2003; Wolery et al., 1992), while in 11 studies (Barbetta & Heward, 1993; Barbetta et al., 1993; Carnine, 1976; Ferkis et al.,

PAGE 32

Research Designs

In one study (McKenzie & Henry, 1979), the researchers utilized a group design (i.e., random assignment procedures for two third-grade classrooms) to create two theoretically comparable treatment groups. In the other 17 studies, researchers used various single subject designs to demonstrate a functional relation between the independent variable and dependent variables. Adapted alternating treatments designs (Sindelar, Rosenberg, & Wilson, 1986) were used in 6 of these 17 studies (Sindelar et al., 1986; Skinner & Shapiro, 1989; Skinner et al., 1991; Skinner et al., 1994; Skinner et al., 1997; Wolery et al., 1992). Withdrawal designs (Kazdin, 1982) were used in 4 studies (Carnine, 1976; Gunter et al., 1994; Miller et al., 1995; Sutherland et al., 2003). An alternating treatments design (Kennedy, 2005) was employed in 4 studies (Barbetta & Heward, 1993; Barbetta et al., 1993; Ferkis et al., 1997; Sterling et al., 1997). One study (Sainato et al., 1987) made use of a changing criterion design, another study (West & Sloane, 1986) used a multielement single subject design, and Koegel et al. (1980) utilized two multiple treatment reversal designs: an ABABCBC design in the first experiment and an ABABC design in the second.

To illustrate the use of different designs, consider the following examples. An alternating treatments design (ATD) was used to compare the effects of OTR and no OTR on student behavior. For example, Barbetta et al. (1993) compared the difference between active student response (ASR) error correction and no-response (NR) error correction on sight word acquisition. Adapted alternating treatments designs (AATD), a modified version of the alternating treatments design, were utilized to demonstrate the effects of two instructional methodologies on two different but equivalent instructional sets containing items of the same level of difficulty.

Sindelar et al. (1986) used an AATD to determine the effects of a choral versus an individual mode of questioning on two distinct instructional sets. The two instructional sets were two sets of 5 sight words of equal difficulty, and rates of acquisition on the two sets of words were compared across the two modes of responding. Wolery et al. (1992) used an adapted alternating treatments design in three experiments to evaluate the use of choral and individual responding in teaching word reading to students with moderate mental retardation. Eight community-sign words were targeted for instruction, four taught with choral responding and four with individual responding.

An ABAB withdrawal design was used to demonstrate two separate replications of the intervention: the first when the baseline was reintroduced (A-B-A) and the second when the intervention was reintroduced (A-B-A-B) (Sutherland et al., 2003). A functional relation between the independent variable and dependent variable was demonstrated because there were two replications within the same study. For example, Sutherland and colleagues demonstrated that when the teacher increased the rate of OTR to a class of students with EBD, there was an increase in correct responses and on-task behavior, and a decrease in disruptive behavior. When the intervention was withdrawn, there was a decrease in correct responses and on-task behavior, and an increase in disruptive behavior. When the intervention was presented a second time, the initial results were replicated.

A multielement design was used to demonstrate a functional relation among independent and dependent variables by alternating between at least two different intervention conditions. For example, West and Sloane (1986) compared four treatment conditions (high point rate/fast presentation, high point rate/slow presentation, low point rate/fast presentation, low point rate/slow presentation) on students' rates of academic responding. Changing criterion designs were used to demonstrate a change in behavior as it improved incrementally to match a specified performance level. For example, Sainato et al. (1987) assessed the effectiveness of choral responding on preschool students' correct responses by changing the level of group responding from 3 OTR per minute to 5 OTR per minute.

Discussion

The results of all the studies indicated that an increased rate of OTR was associated with more ASR, on-task behavior, and correct responses, and with fewer disruptive behaviors. A synthesis of the 18 articles yielded four categories of OTR that codify the results of each study: (1) faster presentation rate, (2) choral responding, (3) error correction, and (4) errorless learning. The studies are discussed below in relation to these four categories.

Faster Presentation Rate

Carnine (1976) showed that a faster presentation rate of OTR (presenting a new question immediately following a student correct response) resulted in higher percentages of correct responses (from 41% to 85%) across students than a slower presentation rate (waiting 5 seconds after a student correct response before presenting the next question).

The mean number of seconds per task for the three fast-rate phases was 5.0, and the mean number of seconds per task for the slow-rate phases was 14.2; therefore, the rate of OTR was approximately 12 per minute during the fast-pace condition and approximately 4.26 per minute during the slow-pace condition (the conversion from seconds per task to OTR per minute is illustrated below).

Koegel et al. (1980) had results similar to the Carnine study and demonstrated that short intertrial intervals (1 second) produced higher levels of correct responding than longer intervals (a minimum of 4 seconds) across all students identified with autism. In the first shorter intertrial interval condition, correct responding increased by an average of 20% (from 40% during the longer intertrial condition) across three students, and in the second shorter phase correct responding increased by 40% (from 20% in the second longer intertrial condition). In addition, there were improving trends in student performance and rapid acquisition of tasks and words with the short intertrial intervals, in contrast to minimal or no change with the long intervals.

However, a study by Skinner et al. (1994) had results that conflicted with the findings of Carnine (1976) and Koegel et al. (1980); Skinner's findings did not support the conclusion that a faster pace of instruction was more effective than a slower pace. In Skinner's study, the authors compared a rapid pacing intervention using an immediate intertrial interval (ITI) to a 5-second ITI. In the immediate ITI condition, the experimenter gave the next word to the student immediately after a learning trial; in the 5-second ITI condition, the experimenter waited 5 seconds before presenting the next word. Both conditions produced the same results for the number of reading words mastered; thus, there were no differences in student correct responses. However, the slower presentation rate took an average of 103 seconds longer per session than the faster presentation rate, indicating a slightly less efficient use of instructional time. Therefore, teachers could cover or review more material by utilizing a faster presentation rate.
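The conversion from mean seconds per task to OTR per minute, referenced above for the Carnine (1976) rates, is a simple arithmetic illustration added here for clarity and is not a calculation reported by the original author; the small gap between 60/14.2 (approximately 4.2) and the reported 4.26 per minute presumably reflects rounding in the original report.

\[
\text{OTR per minute} = \frac{60 \ \text{s/min}}{\text{mean seconds per task}}, \qquad
\frac{60}{5.0} = 12 \ \text{(fast pace)}, \qquad
\frac{60}{14.2} \approx 4.2 \ \text{(slow pace)}
\]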

Skinner et al. (1991) demonstrated that a faster presentation rate resulted in greater math fluency (digits correct per minute) and accuracy (percentage of problems correct). The authors compared a verbal cover, copy, and compare (VCCC) condition with a written cover, copy, and compare (WCCC) condition with two students with EBD. Although the length of time for each session was held constant across conditions (4 minutes), the VCCC condition yielded twice the number of correct responses as the WCCC condition. The findings showed that the increase in math accuracy and fluency was due to the faster presentation rate (increased rate of OTR) under the VCCC condition. The results of the Skinner study support Carnine's (1976) finding that academic interventions with low levels of active responding (AR) are not as effective as instruction with higher levels of AR at a faster presentation rate.

Skinner et al. (1997) showed that two students had improved performance in mathematics during a VCCC condition compared to a WCCC condition. During a time-held-constant phase (204 seconds), the first student completed 86 learning trials using the VCCC intervention compared to an average of 26 learning trials during the WCCC procedure. The second student completed a mean of 83 learning trials during the VCCC intervention compared to 33 during the WCCC intervention. During the trials-held-constant phase, the first student took an average of 143 seconds longer to complete the learning trials using the WCCC intervention than the VCCC intervention. The second student had similar results and took 102 seconds longer to complete the learning trials using the WCCC intervention than the VCCC intervention. The authors concluded that verbal responding was an efficient method of instruction because the number of learning trials was increased and students demonstrated an improvement in learning levels in less time.

Skinner and Shapiro (1989) set a faster presentation rate of OTR by using taped-words and drill interventions (two OTR; the students read the list of words twice) and continuous and intermittent assessment (one OTR; the list of words was read once). For all 5 students, correct oral reading rates were higher during the taped-words and drill conditions than during the continuous and intermittent assessment conditions. During the taped-words condition, the mean number of words read correctly was 78 words per minute, and for the drill condition the mean was also 78. For the continuous assessment condition, the mean number of words read correctly was 59, and for the intermittent assessment condition the mean was 50. Because reading rates were similar under the taped-words and drill conditions, and higher in both conditions than in the continuous and intermittent conditions, the authors stated that the improved performance was likely due to the increased rate of OTR rather than to the mode of intervention.

West and Sloane (1986) supported and extended the Carnine (1976) study by demonstrating that a faster presentation rate set at a criterion level of 3 OTR per minute was related to lower rates of disruptive behavior and more correct responses in comparison to a slower presentation rate (one OTR per minute). During the slow presentation rate (one OTR per minute), the mean percentage of intervals scored for all combined categories of disruptive behavior across 5 participants was 33%, compared to 18% during the faster presentation rate. Students demonstrated 2.4 correct responses per minute (rpm) during the faster presentation rate condition and 0.9 rpm during the slower presentation rate condition. The authors noted, however, that there was no clear difference between the two conditions in the percentage of correct responses (i.e., students in the fast presentation rate condition made more correct responses but also made more errors).

Opportunities to Respond and Choral Responding

Sutherland et al. (2003) utilized an observation feedback procedure to increase the teacher's rate of choral responding (OTR). The mean rate of OTR per minute during the baseline phase was 1.68, and this rate increased to 3.52 during the first intervention phase. During the withdrawal phase, the teacher's mean rate of OTR per minute decreased to 2.25, and this rate increased to 3.49 during the reintroduction of the intervention. The results of the study implied that there was a functional relation between an increased rate of OTR and more correct responses, fewer disruptions, and increases in on-task behavior. Using an ABAB withdrawal single subject design, the authors demonstrated that during the first and second intervention phases (B) students gave, on average, about 1.32 more correct responses per minute than during the baseline and withdrawal phases (A). The percentage of correct responses increased by 3.7% from the baseline phase to the first intervention phase and by 18.3% from the withdrawal phase to the reintroduction of the intervention. The mean rate of disruptive behaviors decreased by 0.63 per minute from the baseline phase to the first intervention phase and by 1.14 per minute from the withdrawal phase to the reintroduction of the intervention. Finally, the percentage of on-task intervals increased by 23.5% from baseline to the first intervention phase and by 17.2% from the withdrawal phase to the reintroduction of the intervention. The results of the study support instructional theory hypothesizing that when teachers use fast-paced instructional practices and give students high rates of OTR, the results are improved behavioral and academic outcomes for students with EBD.

Sainato et al. (1987) investigated the use of two rates of choral responding with preschool children with significant behavioral and developmental delays.

The results of the study were similar to those of Carnine (1976) and showed that a faster presentation rate of 5 OTR per minute produced more student correct responding for three students than the 3 OTR per minute condition. Average rates across the three students were 0.8 rpm for the baseline condition, 2.47 rpm for the 3 OTR per minute condition, and 4.58 rpm for the 5 OTR per minute condition. However, the 3 OTR per minute condition had slightly better results for on-task behavior than the 5 OTR per minute condition (90.3% compared to 81%).

McKenzie and Henry (1979), using a chi-square test, χ²(1) = 4.99, p < .05, showed that more pupils were off-task in the individually addressed question group than in the test-events (unison hand raising) group.

Sindelar et al. (1986) compared two modes of responding: ordered and choral. The authors found a slight but significant difference in sight words mastered across all three groups of students, favoring the choral responding condition over the ordered responding condition. On a post-instruction test, the students in the choral responding condition had a higher percentage of words read correctly than the students in the ordered responding condition (group 1 had 14% more, group 2 demonstrated 6% more, and group 3 displayed 15% more). There was not a substantial difference in the percentage of on-task behavior between conditions (83% for the choral responding condition and 79% for the ordered responding condition). These findings support the finding by McKenzie and Henry (1979).

In three experiments, Wolery et al. (1992) compared choral versus individual responding in small group arrangements. In Experiment 1, the effects of the two conditions were compared where the number of exposures was equal across conditions but the number of OTR was greater in the choral responding mode.

In Experiment 2, the number of exposures was greater in the individual responding mode but the amount of OTR was equal across conditions. In Experiment 3, the more effective conditions from Experiments 1 and 2 were compared. In Experiment 1, the results indicated that choral responding was the more effective condition for 3 of the 4 students. In Experiment 2, individual responding was more effective for all students. In Experiment 3, when the more effective conditions were compared (the ratio of exposures in the individual to choral conditions was 2:1 and the ratio of OTR in the individual to choral conditions was 1:2), the two types of responding produced relatively equal learning, and only slight differences in effectiveness and efficiency were found. Based on the results of this study, the authors had some support for the following recommendation to teachers: if all children in the group need to learn the same skills, then choral responding may be appropriate; however, if students are at different learning levels and learning different skills, then individual responding is more appropriate.

Error Correction

Ferkis et al. (1997) examined the efficiency of instruction on sight word mastery in two studies. In Study 1, the authors compared a single response condition and a repeated response condition. The single response condition (ASR) consisted of one response opportunity (one OTR) per learning trial, while the repeated response condition was identical to the single response condition except for when the student made an error. When an error occurred, the investigator provided feedback until the student gave the correct response, and then the investigator prompted the student to cite the correct word 4 more times. Therefore, the repeated response condition took more time to implement.

Results from Study 1 indicated that an equivalent number of words was mastered by the three participants in the two conditions.

However, the single response condition was more efficient to implement because it took fewer training sessions to master an equivalent number of sight words and the training time spent on each word was considerably less.

In Study 2, the authors again compared two conditions involving variations of repeated sets of learning trials on sight word acquisition. Single learning trials repeated three times (i.e., 3 x A-R-C) were compared to three repeated sets of learning trials with repeated response opportunities at the end of the learning trial (i.e., 3 x A-R-C-R-R-R-R). The experimental procedures in this study were consistent with Study 1 in that the ratio of response opportunities was the same; each condition was simply multiplied three times. The results of this study were also similar to Study 1; that is, an equivalent number of sight words was mastered in the two conditions. The authors concluded from the results of the two studies that increasing the number of response opportunities (OTR), as in the repeated response condition, does not increase the effectiveness of the instructional procedure, because the repeated procedure takes more time to complete than the single response procedure.

Barbetta et al. (1993) used an alternating treatments design to compare the effects of active student response (ASR) error correction with a no-response (NR) error correction condition. In the ASR error-correction condition, each trial ended with the student giving a correct response after a teacher prompt, while in the no-response condition the teacher provided the correct response and the student passively attended. Results of the study showed that students demonstrated more correct responses during ASR error correction than during NR error correction. For all 6 students, the mean same-day test scores, mean next-day test scores, and number of correct responses were higher during ASR error correction than during NR error correction.

In all students but one, maintenance of learned words was higher in the ASR condition than in the NR condition; for one student, the rate of maintenance was the same in both conditions (i.e., 78%). The results of this study supported research showing a functional relation between ASR and academic achievement.

Similar to the Barbetta et al. (1993) study, Sterling et al. (1997) compared the effects of ASR and on-task (OT) instruction on the acquisition and maintenance of health facts by students identified as having learning disabilities. During ASR instruction, the student ended each learning trial by repeating the correct answer three times, while during OT instruction the student passively watched the teacher make a correct response. The results of the study indicated that the students learned and maintained more health facts taught under the ASR condition than under the OT condition. The authors concluded that having students actively engaged during instruction through active responding was more effective than having students passively watch and hear the teacher give instruction. The results of the study support findings from earlier studies (Barbetta et al., 1993; Sindelar et al., 1986) and indicated that there is a functional relation between high rates of academic responding and achievement of learners with disabilities, and that choral responding is an effective method to increase ASR.

Barbetta and Heward (1993) used an alternating treatments design to compare the effects of ASR error correction and NR error correction during a geography lesson with three students with learning disabilities. The procedure for the ASR and NR conditions was the same as in the Barbetta et al. (1993) study. The mean number of ASR across the three students under ASR error correction was 21, while the mean number under the NR condition was 7.3.

On same-day tests, students had higher scores (66% of the time) under ASR error correction instruction than under NR error correction instruction. On next-day assessments, students had higher scores (77% of the time) under ASR error correction instruction than under NR error correction instruction. Maintenance tests were given to the three students one week after instruction and showed that overall maintenance of the number of capitals learned was 83% for the ASR condition compared to 69% for the NR condition. The results of this study were similar to those of Barbetta et al. (1993); that is, students learned and maintained more capitals taught with ASR error correction than with NR error correction. The study extended Barbetta's earlier work by including a different population of students: in this study the students were 10-11 years old with learning disabilities, while the Barbetta et al. study included students 8-9 years old with developmental disabilities.

Miller et al. (1995) used multiple treatment reversal designs (ABABCBC and ABABC) and demonstrated that the first grade and special education students wrote answers to the math facts at the highest correct rates and highest levels of accuracy during time trials with error correction rather than during time trials without error correction. For the first grade students, this rate was 13.3 correct answers per minute, compared to 4.8 per minute in the baseline condition and 7.3 per minute during the time trial without error correction. Students in the special education classroom also had their highest rates of correct responses during time trials with error correction (17.3 per minute), compared to 8.4 per minute in the baseline condition and 13.2 per minute during the time trial without error correction condition.

Errorless Learning

Gunter et al. (1994) used an ABAB withdrawal design to evaluate the effects of a teacher using a talk/mand procedure on one student's disruptive behavior. The essential component of this procedure was that the teacher embedded the correct answer within the question so that the student had a greater probability of responding correctly. The results of the study indicated that the student had fewer disruptive behaviors during the intervention condition. For example, the mean rate of disruptive behavior was 0.28 per minute during the baseline and withdrawal conditions, and this rate decreased to 0.09 per minute during the two intervention conditions. Gunter and colleagues hypothesized that the decrease in disruptive behavior was related to the teacher implementing the talk/mand procedure.

In sum, the results of the studies indicate that an increased rate of OTR is associated with more on-task behavior and correct responses, and with fewer disruptive behaviors. The following section discusses the literature base in terms of social validity and treatment integrity.

Social Validity

Kennedy (2005) defines social validity as "the estimation of the importance, effectiveness, appropriateness, and/or satisfaction various people experience in relation to a particular intervention" (p. 219). According to Kennedy, social validity describes the procedures or results of an experiment within a social context (e.g., instruction in a classroom, passengers' anxiety while flying in an airplane, or players' performance on a basketball court). Kennedy suggests that in the early stages of developing an intervention, some researchers conduct experiments with the primary focus of analyzing the effectiveness of the intervention and may not always include social validity assessments in their studies.

After positive results have been demonstrated in a few studies, researchers will then assess or estimate whether an experiment has social importance and determine whether the participants' quality of life has improved.

In classroom-based research, it is important to determine the most efficient method of instruction in order to increase the likelihood that teachers will use that strategy in the future. Comparing types of instructional strategies is one way to determine which instructional strategies produce the best results. All but one study (Gunter et al., 1994) compared two types of interventions (i.e., individual and choral responding; fast and slow presentation rates; higher and lower criterion rates; WCCC and VCCC). Gunter et al. investigated a talk/mand procedure to verify a recently formulated hypothesis (i.e., when a teacher presents a challenging academic task, the task becomes an aversive stimulus to the student). The student may then become disruptive in order to avoid the teacher's task demands because the responses needed to answer the questions correctly are above the skill level of that student.

In this literature review, 9 of the 18 studies (50.0%) (Barbetta & Heward, 1993; Carnine, 1976; Ferkis et al., 1997; Gunter et al., 1994; Skinner & Shapiro, 1989; Skinner et al., 1994; Skinner et al., 1997; Sterling et al., 1997; West & Sloane, 1986) did not assess social validity.

In one study, Sainato et al. (1987) included a social validity assessment and asked 10 regular education kindergarten teachers (outside judges) to observe the three students during the baseline condition, a 3 OTR per minute condition, and a 5 OTR per minute condition. All 10 teachers rated the students' behavior as most appropriate during the 5 OTR per minute condition. In addition, the teachers stated that if the students performed in their classrooms as they did in the 5 OTR per minute condition, they could be mainstreamed into general education settings.

Sindelar et al. (1986) compared the effects of choral and ordered responding in a classroom environment and, after reviewing the results, discussed the feasibility and usefulness of incorporating choral responding into small group instruction using teacher-prepared lesson plans. In addition, Sindelar and colleagues surveyed 24 special educators and asked them whether the results of the study were significant enough to incorporate choral responding into their lesson plans. The mean response indicated that the teachers believed choral responding produced enough positive effects that they would incorporate this strategy into their instructional practices.

Sutherland et al. (2003) noted that the increased rate of OTR may not be socially valid in special education classrooms, because the teacher in the study did not maintain an increased rate of OTR from the intervention phase into the withdrawal phase of the experiment. Therefore, the authors concluded that most special education teachers would not easily adopt this strategy in their classroom environments. The authors also speculated that decreases in disruptive behavior and increases in correct responses and on-task behavior were not enough of a reinforcer for the teacher to incorporate higher rates of OTR into his teaching strategies.

In the study by McKenzie and Henry (1979), prior to the post-test, students were asked to indicate on a 5-point Likert scale (with anchors indicated by faces ranging from a broad smile to a deep frown) (a) how well they thought they would do on the test, (b) how hard they thought the test would be, and (c) how worried they were about taking the test (a measure of anxiety). However, the researchers did not report results from this scale.

Miller et al. (1995) reported results from student opinion surveys given to the students in the general education classroom and the special education classroom. Results from the surveys showed that in the general education classroom, the majority of students enjoyed grading their own papers, liked the 1-minute time-trial procedure with self-correction more than the other two procedures, felt that that procedure helped them the most, and said that, of the three procedures, they would prefer to do the 1-minute time trial with self-correction. Results from the survey given to the students in the special education classroom were similar.

In the study by Skinner et al. (1991), the VCCC procedure was found to be more effective and efficient than the WCCC procedure. However, the authors claimed that the two procedures should be socially validated in a classroom environment, because students work quietly under the WCCC procedure and do not emit loud responses as they do under the VCCC procedure. In another study, West and Sloane (1986) did not use social validity assessments; however, they speculated that teachers would consider a faster presentation rate a superior instructional strategy to using aversive consequences for students with disruptive and off-task behaviors during reading instruction. Barbetta et al. (1993) estimated that the results of their study were socially valid and concluded that, because the use of ASR error correction had positive results and was shown to be an efficient strategy, teachers could easily adopt ASR during various instructional activities in their classrooms.

It is important that researchers incorporate social validity assessments in future research studies because teachers can provide feedback on the feasibility and acceptability of implementing OTR. For example, a teacher whose classroom has a history of high rates of disruptive behavior may prefer a WCCC procedure to a VCCC procedure (even though VCCC is more efficient) because the WCCC procedure does not require a verbal response, and the teacher may fear that the class will become boisterous when the whole class responds. Therefore, if researchers use social validity assessments, they may incorporate new information into their research questions and perhaps find more efficient and effective methods of implementing OTR.

Treatment Integrity

Treatment integrity is the extent to which the independent variable is implemented according to the intention of the researchers (Gresham, Gansle, & Noell, 1993). A synthesis of the 18 studies showed that 11 of the 18 (61.1%) (Barbetta et al., 1993; Barbetta & Heward, 1993; Ferkis et al., 1997; Koegel et al., 1980; Sainato et al., 1987; Skinner et al., 1994; Skinner & Shapiro, 1989; Skinner et al., 1997; Sindelar et al., 1986; Sterling et al., 1997; Sutherland et al., 2003; Wolery et al., 1992) provided a discussion of treatment integrity. This percentage (61.1%) is high in comparison to the findings of a more general review conducted by Gresham et al. (1993): in their review of applied behavior analysis studies with children as subjects published in the Journal of Applied Behavior Analysis between 1980 and 1990, treatment integrity had been measured in only 16% of the studies.

In this review, the authors of the reviewed studies used various terms (i.e., procedural reliability, procedural integrity, procedural fidelity, or teacher training) to indicate treatment integrity. Skinner et al. (1994) and Skinner and Shapiro (1989) used treatment integrity checklists to ensure that the primary researcher followed the proper order of treatment procedures.

In addition, trained independent observers recorded the accuracy of time elapsed across each intervention session, the accuracy of words during assessments and across intervention sessions, and whether materials were used and instructions read as planned by the researchers. The primary experimenters in the Skinner et al. (1997) study used event recording and verbal prompts to ensure students followed correct procedures, and a second experimenter provided interobserver agreement (IOA) data on the time required for students to complete assessments and treatments. In the Sindelar et al. (1986) study, two observers ensured that the teachers for the three groups accurately presented instructions, feedback, and the assigned mode of responding (choral or ordered). Ferkis et al. (1997) merely stated that procedural integrity was 100% but did not provide information about how treatment integrity was actually evaluated and measured. Barbetta et al. (1993) and Sterling et al. (1997) reported that a second observer used a frequency count to record the occurrence or nonoccurrence of the essential components of the instructional procedures implemented by the experimenter. Sainato et al. (1987) trained the teacher to implement an increased rate of OTR (3 OTR/min and 5 OTR/min) and provided feedback until a criterion level of 90% accuracy was achieved. In the Barbetta and Heward (1993) and Koegel et al. (1980) studies, trained independent scorers recorded the experimenters' implementation of the independent variable from videotaped sessions. Wolery et al. (1992) measured teacher behaviors (cueing students, providing wait time, asking the questions, and waiting during the intertrial interval) for procedural fidelity during probe sessions and instructional sessions.

Unclear operational definitions of the independent variable limit researchers' ability to conclude that changes in the dependent variable were related to the manipulation of the independent variable (Gresham et al., 1993).

Gunter et al. (1994) reported that the operational definition of the talk code needed to be clarified and differentiated between talk related to instructional information and social talk; therefore, this limitation was a threat to the internal validity of their study. In 11 studies, researchers reported high IOA scores during treatment integrity checks, with scores ranging from 90% to 100%. Six studies (Koegel et al., 1980; Skinner et al., 1997; Skinner & Shapiro, 1989; Sterling et al., 1997; Sutherland et al., 2003; Wolery et al., 1992) reported that treatment integrity was measured in 42%, 18%, 17%, 20%, 22%, and 37% of the sessions across conditions, respectively, using a treatment integrity checklist.

Threats to Internal Validity

In a carefully designed experiment, researchers are able to demonstrate with a high degree of certainty that an independent variable was the primary influence in changing the dependent variable(s). The extent to which researchers are able to rule out alternative explanations for the change in the dependent variable is called internal validity (Kazdin, 1982). An alternative explanation, other than the independent variable, that could account for a change in the dependent variable is a threat to the internal validity of the study. Kazdin describes 8 threats to internal validity, and an interpretation of the synthesis of the 18 studies yielded 3 types of threats to internal validity (history, selection bias, and testing). These threats are discussed next.

History. Four studies (Carnine, 1976; Skinner et al., 1994; Skinner et al., 1997; Sutherland et al., 2003) describe history effects. History effects arise from events that occur at the same time as the intervention and have the potential to alter the results of the experiment. Sutherland et al. (2003) reported several events other than the increased rate of OTR alone (i.e., a combination of OTR, teacher use of praise, and an increased rate of correct responses) that may have contributed to the decrease in student disruptive behavior.

Skinner et al. (1994) and Skinner et al. (1997) reported that the increase in reading accuracy of three students might have been influenced by learning that occurred outside the experimental conditions, although Skinner does not provide enough detail to interpret this statement. Carnine (1976) suggested that students verbally copying one another during oral responding could pose a potential history threat to any study. He stated that in his study verbal copying was a behavior incompatible with on-task behavior (the primary dependent variable); therefore, it was unlikely to have occurred. However, researchers may want to consider investigating the effects of copying during student responding in future studies. Finally, Wolery et al. (1992) stated that in their study history was controlled for by implementing the individual and choral conditions in an alternating manner across days.

Selection bias. Two studies (Skinner et al., 1997; West & Sloane, 1986) discussed selection bias as a possible threat to internal validity. Selection bias occurs when subjects differ from one another and the results on the dependent variables vary because of these initial differences. Skinner indicated that the increased learning rates during the verbal responding condition, as opposed to the written responding condition, may have been due to strengths in the students' ability to verbally process information. West and Sloane indicated that the point delivery system (a schedule of reinforcement and a component of the intervention) had little effect on student performance accuracy and response rate because of selection bias: the students selected for the study had high rates of disruptive behavior, and the point delivery system was not a strong enough reinforcer to make an impact on on-task behavior.

Testing. Researchers in three studies (Skinner et al., 1991; Skinner et al., 1997; Skinner & Shapiro, 1989) discussed the possibility of testing as a threat to the internal validity of their studies. During an experiment, testing effects occur whenever a change in the dependent variable may be due to repeated assessment (i.e., a participant's behavior can change simply as a result of testing) (Kennedy, 2005). For example, Skinner and Shapiro (1989), Skinner et al. (1991), and Skinner et al. (1997) cautioned that in their studies continuous assessment, rather than the intervention, may have influenced the participants' reading performance and increased their accuracy on multiplication problems. Finally, Sterling et al. (1997) indicated that, for unknown reasons, intersubject variability existed in the data; for example, a student with a learning disability who was frequently absent during the study performed better than three other students with developmental disabilities. Miller et al. (1995) reported that the unknown role of practice effects, the extent to which a student's performance improves as a function of practice resulting from repeated measurement, may have been a threat to internal validity.

In summary, threats to the internal validity of a study limit the extent to which researchers can demonstrate that the intervention accounted for a change in the dependent variable. If researchers cannot demonstrate that the intervention accounts for change in one study, then it will be difficult to extend the results to other persons or settings.

Generality and Threats to External Validity

The purpose of external validity is to assess whether the results of an intervention in a sample are representative of results that would be found in a larger population. However, a study's external validity is based on the strength of its internal validity, systematic replication, and the power of demonstrating a functional relation between the independent and dependent variables (Kennedy, 2005).

Kazdin (1982) summarizes 9 threats to external validity; five of these threats (generality across subjects, across settings, across times, across response measures, and across behavior change agents) are discussed below. Six studies (Gunter et al., 1994; Koegel et al., 1980; Miller et al., 1995; Skinner et al., 1991; Skinner et al., 1997; Sterling et al., 1997) reported threats to generality across subjects. Gunter et al. (1994) reported that the findings of their study are difficult to generalize to a larger population because there was only one participant; in addition, direct and systematic replication was needed to increase the generality of the findings, because the student left the study early during the second intervention phase. Koegel et al. (1980) noted that the implications of their data pertained only to those participants (children identified with autism) and task combinations. Miller et al. (1995) described the results of their study as confined to the participants' ages (range 6-12 years). Skinner et al. (1991) reported that the results favoring VCCC over WCCC as an instructional strategy could not be generalized beyond the two elementary school aged children with EBD in the study and reported a need to conduct research with more and varied students. Skinner et al. (1997) hesitated to recommend VCCC over WCCC as an instructional strategy because only two students participated in the study. Finally, Sterling et al. (1997) stated that additional studies are needed to ascertain whether the effects of ASR would generalize beyond the participants in their study (i.e., students with developmental disabilities).

Two studies, McKenzie and Henry (1979) and Sutherland et al. (2003), reported threats to generality across settings. McKenzie and Henry recognized that their findings might not generalize to other settings and lessons, while Sutherland et al. claimed that generalizing the findings of their study (located in an inner-city community) to classrooms with students with EBD in suburban or rural communities could be problematic.

Sainato et al. (1987) noted threats to generality across times and discussed the limitations of generalizing their findings beyond a 15-minute preschool circle time activity. Finally, two studies (Ferkis et al., 1997; Barbetta & Heward, 1993) described threats to generality across response measures and commented on the limitations of extending error correction procedures beyond academic tasks that require one-word responses (i.e., sight words or names of capitals). The authors stated that the effects of error correction could not be generalized to more complex tasks, such as rules for mathematical computation, definitions of science concepts, or sight word mastery within the context of a reading excerpt.

In the study by Carnine (1976), two different types of teachers, a certified special education teacher and a non-certified university student, were able to instruct using the fast-rate presentation. Therefore, Carnine suggested that the results of the study could extend beyond the conditions of the experiment and that various types of teachers could obtain the same results (generalizability across behavior change agents).

In conclusion, when researchers control for threats to internal and external validity, have clear and precise operational definitions, and collect data on the implementation of the independent variable, the experiment has a high standard of rigor and the results of the study may have strong implications for practitioners (i.e., social validity).

Future Research Directions

A synthesis of the future directions of research recommended by the authors of the 18 articles produced two noteworthy areas of future research: (1) systematic replications investigating characteristics related to instructional strategies used to increase rates of ASR and correct responses, and (2) systematic replications related to subject areas and populations.

Based on the literature, there is a need to examine alternative strategies for increasing rates of ASR and correct responding. In particular, teachers' use of choral responding could be adapted from a small group setting to a large group setting, and its effects on ASR and rates of correct responses could be investigated (Barbetta et al., 1993; Sindelar et al., 1986). Conducting choral responding or mixed responding in a large group setting is important because teachers would have the opportunity to assess learning for all students in the classroom. Moreover, examining optimal ratios of responding modes is another area for further research. At present, an optimal ratio of choral and individual responding modes for teachers in applied settings has not been substantiated, although a ratio of 70:30 choral to individual responding has been suggested by Stevens and Rosenshine (1986). In their article, the authors hypothesized that target students could benefit from frequent practice through choral responding, while teachers could gain information on individual performance by using individual responding.

Testing the generalization of the effects of OTR across subjects is another area of research that may be warranted. For example, researchers could examine the effects of ASR and NR error correction on correct responses that require more than one word (i.e., rules for math computation and science definitions and concepts) (Barbetta & Heward, 1993). Specifically, research is needed to determine how many ASR after incorrect responses are needed within each learning trial (i.e., one, two, or more) in order to achieve an optimal correct response rate (Sterling et al., 1997). Examining the implementation of ASR and error correction on a larger scale across various settings and populations is another needed area of research (Ferkis et al., 1997) and is discussed in the section below.

Subject areas and populations of participants. Systematic replications should also be conducted examining the effects of OTR on different subject areas as well as with different populations. Currently, there is a gap in the research on reading interventions and OTR, particularly with students with learning disabilities, and on the effects of error correction on sight word acquisition and reading comprehension (Ferkis et al., 1997; Skinner & Shapiro, 1989; Skinner et al., 1991). Researchers may want to consider conducting studies investigating OTR across various content areas (e.g., math word problems), settings (e.g., general education classrooms), and populations (e.g., at-risk youth) (Carnine, 1976; Skinner et al., 1991). In summary, a future research direction could extend the learning trial literature by comparing three types of OTR (individual responding, choral responding, and a mixture of 70% choral responding and 30% individual responding) on decreasing students' disruptive and off-task behavior, as well as increasing ASR, with students identified as at risk for EBD.

Summary

In this chapter, the literature on the effects of increased opportunities to respond to academic requests was reviewed. Researchers investigated these effects on the academic and behavioral outcomes of students identified with various disabilities in several different classroom settings. The majority (66.1%) of students in this literature review were male, and 43.3% of the students were identified as having EBD. All the researchers were interested in measuring student academic outcomes, and in 17 of the 18 studies researchers measured the frequency of correct responses. Other dependent variables of interest were student on-task behavior and frequency of disruptive behaviors. Researchers manipulated the independent variable using various methods to increase rates of OTR (setting a criterion level, repeating learning trials, utilizing error correction, and faster presentation rates of OTR). In 7 of the 18 studies (38.9%), the teacher implemented the higher rate of OTR.

Various single subject designs (i.e., adapted alternating treatments, alternating treatments, withdrawal, changing criterion, and multielement designs) were used across the studies. Social validity assessments were reported in 9 of the 18 studies (50.0%), and there was a discussion of treatment integrity in 11 of the 18 studies (61.1%). History, selection bias, and testing were the reported threats to internal validity, while generality across subjects, across settings (i.e., different types of instructional settings and classroom settings), across times, across response measures, and across behavior change agents were the reported threats to external validity.

Carnine (1976) and Skinner et al. (1994) suggested extending their studies by examining whether think time (giving students 2-3 seconds after a teacher question) is an effective strategy for maintaining high rates of correct responses and minimizing off-task behavior. Researchers in two other studies (Barbetta et al., 1993; Sindelar et al., 1986) suggested investigating the use of choral responding in a large group format in future studies. Finally, researchers could examine increased rates of OTR in various subject areas such as reading fluency and science.

Although researchers examining the effects of OTR on student academic and behavioral outcomes have shown positive effects, no clear trends in this line of research have been established because of the limitations of the studies. These limitations include: (a) the settings of the studies have been mostly analogue or small group; (b) the Wolery et al. (1992) and Sindelar et al. (1986) studies would be difficult to replicate in a natural setting because of the length of time and number of OTR involved; (c) systematic replications of earlier studies need to be conducted so that effective teaching practices can be established; (d) teacher use of choral versus individual or mixed responding still needs to be validated in natural settings; and (e) there is currently no clear evidence of whether (or how often) teachers spontaneously give students high rates of OTR in school settings.

Statement of the Problem

Giving students high rates of OTR is an engaging practice that allows teachers to teach more in less time (Barbetta & Heward, 1993). Therefore, students at risk for EBD (who are also likely to have academic delays) may increase their skill levels if this practice is used. When students at risk for EBD do not make sufficient academic progress, they are more likely to disrupt environments, threaten others, fail to complete assignments, fight with peers, and argue with teachers (Nelson & Roberts, 2002). Therefore, from a negative reinforcement perspective (Gunter & Coutinho, 1997), when a teacher presents a challenging academic task, the task is likely to become an aversive stimulus to the student, increasing the probability that the student will engage in problem behavior to avoid the task. If the student engages in problem behavior and continues to avoid the task, the result is a loss of valuable instruction and, in the long term, a higher probability of school failure.

Choral responding (CR) is one type of OTR that has been demonstrated to increase student engagement and correct responses and decrease problem behaviors. To date, researchers primarily have investigated the use of CR in small group settings; however, instruction often occurs within a large group setting, particularly in general education classrooms. Currently there is only a small body of literature investigating the use of CR in large group settings, and no researchers have compared choral with mixed responding. The results of this literature review indicated that only one of the 18 studies examined CR in a large group, general education setting (Miller et al., 1995).

Moreover, only two studies (Carnine, 1976; Ferkis et al., 1997) were conducted in general education classroom settings, and one of these studies examined CR during small group or individual instruction (Carnine, 1976); Ferkis et al. utilized an error correction procedure during a small group setting. The McKenzie and Henry (1979) study examined unison hand raising (a nonverbal type of choral responding) and was the only study conducted in a general education classroom. The other studies employed CR in special education classrooms (Sainato et al., 1987; Sutherland et al., 2003), and one (Miller et al., 1995) used a combination of both settings.

Miller et al. (1995) utilized CR in a general education classroom in one experiment as an error correction procedure rather than as an antecedent strategy; in that study, students chorally responded after incorrect responses while grading worksheets. At present, there is some evidence to support the positive effects of CR as an antecedent procedure used at the beginning of a learning trial before errors are made. It is important to investigate CR as an antecedent strategy because manipulating antecedent events within learning trials involves maximizing the likelihood that the student will respond correctly when presented with a stimulus (in this study, a sight word). Furthermore, based on the results of this literature review, researchers have yet to examine the effects of CR in a general education classroom during large group instruction with students at risk for EBD using single subject methodology.

This literature review did not include any studies that compared an optimal ratio of a combination of individual and choral responding. Sindelar et al. (1986) compared the effects of individual and choral responding on the number of sight words mastered with a small group of students. These authors cited an earlier article by Stevens and Rosenshine (1981), who suggested investigating whether 70% choral to 30% individual responding is an optimal ratio for teachers to utilize during instruction.

The purpose of the individual turns is to allow the teacher to test specific children and gain information on individual performance. It is the purpose of this study to investigate this area of research.

Finally, in a study by Anderson, Evertson, and Brophy (1979), the authors found that teachers achieving the highest scores on the choral responding (CR) variable utilized CR once every four minutes. Still, there is a lack of evidence about teachers' natural rates of giving OTR, specifically CR, during large group instruction. Determining this rate is important so that future researchers will have some idea of how much to contrast baseline rates of OTR with rates during intervention phases. At the moment, one study (Sainato et al., 1987) has given some indication that 3 or 5 OTR per minute is sufficient to increase correct response rates and decrease disruptive behavior; however, this study was conducted in a preschool setting with children identified with developmental delays (DD), and future research is needed to determine whether this rate is adequate for other settings and participants.

This study extended the OTR literature by comparing three types of OTR (individual, choral, and mixed responding) in a 2nd grade classroom with 6 students identified as at risk for EBD. Researchers compared the three responding procedures during the experimental phase of the study: individual responding, choral responding, and mixed responding (70% choral and 30% individual). The rate of OTR during the three procedures was 5 per minute and was based on the rate used during the treatment phase of a previous study (Sainato et al., 1987); a brief illustration of these parameters is given below.
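As a rough illustration of what these parameters imply for a single lesson, the calculation below assumes a hypothetical 10-minute instructional session; the session length is an assumption made only for this example and is not taken from the study's procedures.

\[
5 \ \tfrac{\text{OTR}}{\text{min}} \times 10 \ \text{min} = 50 \ \text{OTR}, \qquad
0.70 \times 50 = 35 \ \text{choral OTR}, \qquad
0.30 \times 50 = 15 \ \text{individual OTR}
\]

Under the individual and choral conditions, all 50 OTR in such a session would be delivered in a single mode.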

Purpose of the Study

The purpose of this study is to investigate the following research question: How does a choral responding procedure compare to an individual responding procedure and to a mixture of choral and individual responding procedures during group instruction in a general education classroom on the disruptive behavior, off-task behavior, and active student responding of high-risk students?

Figure 2-1. Learning trial

Figure 2-2. Classifications of studies in the literature review

64 Choral Responding Choral Responding Un ison Hand raising Choral Responding Antecedent Strategy Antecedent Strategy Antecedent Strategy Error Correction Self-contained Self-contained General Educa tion General Education/self-contained 2/3 OTR per min 3/5 OTR per min Unison vs. Individual ITT 1 min vs. 1 min w/error correction Math Mo rning circle Geography facts Math Individual, Choral and Mixed Responding Antecedent Strategy General Education Individual, Choral vs. 70:30 ratio Sight words Figure 2-3. Choral responding literature Sutherland et al., 2003 Sainato et al., 1987 McKenzie & Henry, 1979 Miller et al., 1993 Haydon et al.

Table 2-1. Description of studies examining effects of increased opportunities to respond

Carnine (1976)
  Sample: Two students (boy and girl) identified by the teacher as having high rates of off-task behavior (ages N/A), 1st grade
  Design: A-B-A-B-A-B
  Independent variable(s): Presentation rate (baseline, 5 sec; intervention, 1 sec)
  Dependent variable(s): Percentage of participation, off-task behavior, and correct responses per session
  Results: Increased OTR resulted in increased percentages of correct responses and participation and decreased percentages of off-task behavior

West & Sloane (1986)
  Sample: Five students with EBD (2 boys, 3 girls), ages 7-9
  Design: Multielement
  Independent variable(s): Presentation rate (fast, 20 sec; slow, 60 sec); point delivery rate, fixed interval (high, FI 60 sec; low, FI 240 sec)
  Dependent variable(s): Mean percentage of intervals with disruptive behaviors, academic accuracy, and rate of correct responses per minute
  Results: No difference among dependent variables between high and low point delivery; increased OTR resulted in lower disruptive behaviors and increased correct response rate; accuracy slightly higher during the slow presentation rate

Skinner, Smith, & McLean (1994)
  Sample: Three students with EBD (2 boys, 1 girl), ages 9-11
  Design: Adapted alternating treatments
  Independent variable(s): 5-sec intertrial interval (ITI); 1-sec ITI; no treatment
  Dependent variable(s): Number of words mastered per session/per condition
  Results: 5-sec and 1-sec ITI resulted in more mastered words than the no-treatment condition

Table 2-1. Continued.

Skinner & Shapiro (1989)
  Sample: Five students with EBD (gender N/A), ages 14-18
  Design: Adapted alternating treatments
  Independent variable(s): Continuous and intermittent assessment (one OTR per stimulus); taped words and drill (two OTR per stimulus)
  Dependent variable(s): Words read correctly and incorrectly per minute
  Results: Having 2 OTR resulted in more words read correctly and fewer read incorrectly; having 1 OTR resulted in fewer words read correctly and more read incorrectly

Skinner, Ford, & Yunker (1991)
  Sample: Two students with EBD (boys), ages 9-11
  Design: Adapted alternating treatments
  Independent variable(s): Verbal cover, copy, and compare (VCCC; increased OTR); written cover, copy, and compare (WCCC); no treatment
  Dependent variable(s): Digits correct per minute (DCM); percentage of multiplication problems correct per session
  Results: Increased OTR (VCCC) resulted in an increase in correct problems and DCM; WCCC and no treatment resulted in fewer correct problems and lower DCM

Skinner, Belfiore, Mace, Williams-Wilson, & Johns (1997)
  Sample: Two students with EBD (boys), ages 10-11
  Design: Multiphase alternating treatments and multiphase adapted alternating treatments
  Independent variable(s): VCCC (increased OTR) and WCCC
  Dependent variable(s): Number of multiplication problems correct and DCM per session
  Results: Accuracy and fluency higher for both students during VCCC than WCCC

Table 2-1. Continued.

Sutherland, Alder, & Gunter (2003)
  Sample: Nine students with EBD (1 girl and 8 boys), ages 8-12
  Design: A-B-A-B withdrawal
  Independent variable(s): Criterion level of 3 OTR/min
  Dependent variable(s): Mean rate of teacher praise per minute, rate of student correct responses per minute, mean percentage of correct responses per minute, rate of disruptive behaviors per minute, and percentage of on-task intervals per session
  Results: Increased OTR resulted in fewer disruptions, more correct responses, and increased task engagement

Table 2-1. Continued.

Miller, Hall, & Heward (1995)
  Sample: Fourteen students (8 males, 6 females), eleven identified as developmentally handicapped; three boys in a 1st grade regular education classroom and eleven students ages 9-12 in a special education classroom
  Design: A-B-A-B-C-B-C design and an A-B-A-B-C design
  Independent variable(s): 1-min time trials with next-day feedback and 1-min time trials followed immediately by teacher-directed feedback
  Dependent variable(s): Rate of correct answers, percentage of answers correct, and on-task behavior
  Results: For the majority of students, the 1-min time trials with next-day feedback produced the highest increase in rate of problems solved per minute without a decrease in accuracy; on-task behavior was also highest during this condition

Gunter, Shores, Jack, Denny, & DePaepe (1994)
  Sample: One student identified with severe behavior disorders (male), age 12
  Design: A-B-A-B withdrawal
  Independent variable(s): Talk/mand procedure
  Dependent variable(s): Student disruptive behavior per minute
  Results: The talk/mand procedure resulted in a decreased amount of disruptive behavior

Table 2-1. Continued.

Koegel, Dunlap, & Dyer (1980)
  Sample: Three students identified with autism (2 males, 1 female), ages 7-11
  Design: A multiple baseline design was used in one study and a reversal design in the other
  Independent variable(s): Long ITI (at least 4 seconds) and short ITI (one second)
  Dependent variable(s): Various tasks (sequencing; verbal imitation; object, verbal, and number discrimination; prepositions; and color labeling)
  Results: Shorter ITI resulted in increases in the average percentage of correct responding across the various tasks

McKenzie & Henry (1979)
  Sample: Fifty-two 3rd grade students
  Design: Two comparable treatment groups, one experimental and the other control
  Independent variable(s): Individual questioning in the control group; unison hand raising in the experimental group
  Dependent variable(s): Number of students on-task, mean expressed test anxiety level, and mean achievement scores for treatment
  Results: A chi-square test, χ²(1) = 4.99, p < .05, showed that more pupils were off-task in the individually addressed question group than in the test-events (unison hand raising) group

Sindelar, Bursuck, & Halle (1986)
  Sample: Eleven students with mild disabilities (5 boys, 6 girls), ages 6.9-11.0; eight were identified as LD and three as EMR
  Design: Adapted alternating treatments
  Independent variable(s): Two levels of questioning: ordered and unison
  Dependent variable(s): Number of sight words mastered daily under each condition
  Results: Students learned words at a faster rate with unison responding than with ordered responding

Table 2-1. Continued.

Ferkis, Belfiore, & Skinner (1997)
  Sample: Three students receiving specialized reading services according to their IEP (2 boys, 1 girl), ages 11-12
  Design: Alternating treatments
  Independent variable(s): Study 1: 1 OTR per word vs. 5 OTR per word; Study 2: 3 x A-R-C vs. 3 x A-R-C-R-R-RR
  Dependent variable(s): Primary variable: number of sight words mastered in daily sessions
  Results: The single response condition led to word mastery in less time than the repeated response condition

Sterling, Barbetta, Heward, & Heron (1997)
  Sample: Five students, one with LD and four with DD (3 boys, 2 girls), ages 9-11
  Design: Alternating treatments
  Independent variable(s): ASR (3 OTR per health fact); OT (no OTR per health fact)
  Dependent variable(s): Mean number of health facts correctly identified on end-of-day tests
  Results: Students learned and maintained more health facts under ASR instruction than OT instruction

Barbetta, Heron, & Heward (1993)
  Sample: Six students with DD (4 boys, 2 girls), ages 8-9
  Design: Alternating treatments
  Independent variable(s): ASR (error correction after incorrect response); NR (no error correction)
  Dependent variable(s): Primary variable: number of correct responses per session
  Results: ASR error correction resulted in more student responses (M = 30 per session) than NR error correction (M = 12.6)

Sainato, Strain, & Lyon (1987)
  Sample: Three students with DD (two boys, one girl), ages 2 to 4 years
  Design: Changing criterion
  Independent variable(s): Presentation rate (3 OTR per min; 5 OTR per min)
  Dependent variable(s): Percentage of on-task behavior and rate of correct responding per minute
  Results: Increased OTR resulted in increased correct responding and on-task behavior

Table 2-1. Continued.

Barbetta & Heward (1993)
  Sample: Three students with LD (two boys, 1 girl), ages 10-11
  Design: Alternating treatments
  Independent variable(s): ASR (error correction after incorrect response); NR (no error correction)
  Dependent variable(s): Number of correct responses during instruction, same-day tests, and next-day tests
  Results: ASR error correction resulted in more capitals learned and maintained than NR

Wolery, Ault, Doyle, Gast, & Griffin (1992)
  Sample: Four students with moderate mental retardation (2 boys, 2 girls), ages 10-13
  Design: Adapted alternating treatments
  Independent variable(s): Three experiments comparing individual and choral responding and controlling for interactional effects of number of exposures and OTR within each condition
  Dependent variable(s): Percentage of correct responses
  Results: The use of choral or individual responding interacts with the ratio of exposures and the ratio of OTR

Note. Adapted from Sutherland and Wehby, 2001. FI = fixed interval; DCM = digits correct per minute; VCCC = verbal cover, copy, compare; WCCC = written cover, copy, compare.

CHAPTER 3
METHODS

The purpose of this chapter is to describe the procedures followed to conduct the current study. Specifically, this chapter describes: (a) criteria for selecting the participants; (b) settings, teachers, and materials used to carry out the study; (c) study procedures and research design; (d) dependent measures and behavioral coding definitions; and (e) data analysis methods, including procedures to collect interobserver agreement, treatment integrity, and social validity data. The intent of this study is to compare the effects of three opportunities to respond (OTR) strategies (i.e., individual responding, choral responding, and a mixture of choral [70%] and individual [30%] responding) delivered during group instruction in a general education classroom on high-risk students' disruptive behavior, off-task behavior, and active student responding. The study was conducted to answer the following research question: How does a choral responding procedure compare to an individual responding procedure and a combination of choral and individual responding procedure during group instruction in a general education classroom on the disruptive, off-task behavior, and active student responding of high-risk students? The results from this study compare the effects of three types of OTR (individual responding, choral responding, and a mixture of choral and individual responding at a ratio of 70% choral to 30% individual responding) on target students' disruptive behavior, off-task behavior, and active student responding.

Method

Participants

In accordance with the policies set forth by the University of Florida Institutional Review Board, the experimenter obtained informed consent from the teachers participating in the study and the parents of participating targeted students.

Students. Six students who were identified as having chronic disruptive behaviors that placed them at risk for emotional or behavioral disorders (EBD) participated in this study. The following eligibility criteria were used to identify participants: (a) rated by the teachers as having high rates of disruptive behavior for more than one month according to the critical events index and combined frequency index on the Systematic Screening for Behavioral Disorders (SSBD), (b) enrolled in a 2nd grade general education class, (c) between 7 and 8 years of age, and (d) parental consent to participate in the study.

Five students were male (four African American and one Caucasian) and one student was female (African American). At the time of the study, their ages ranged from 7 years 5 months to 8 years 1 month. Nominated students were screened using the SSBD (Walker & Severson, 1993). In stage one, the six students fell within the top three students in their classrooms ranked by their teachers for externalizing behavior and were then selected to participate in this study. The total maladaptive behavior score in stage two did not exceed 35; therefore, the six students did not need additional observations in other settings and were not considered at an elevated risk for EBD.

Teachers. Six teachers also served as participants in this study. Teacher participants: (a) had a minimum of 2 years of teaching experience, (b) used less than two OTR per minute during a pre-assessment condition, and (c) consented to participate in the study. All six teachers were Caucasian, and five of the six were female. The average teaching experience was 3.0 years (range: 2-6 years), and all six teachers had taken a behavior management class as undergraduate students.

Setting and Materials

Setting. The setting for the study was six 2nd grade general education classrooms in Alachua County, Florida. Two schools, one urban and the other suburban, were selected.

Class size ranged from 18 to 22 students. The racial/ethnic makeup of the classrooms in the urban school was approximately 70% African American and 30% Caucasian, while the percentages in the suburban school were roughly 50% African American and 50% Caucasian. This study took place during a large group, teacher-directed academic activity that had the potential to have high rates of OTR (Skinner et al., 1996).

Materials. During the targeted activity, materials that are commonly used for language arts instruction (e.g., flash cards) were used in the study. In order to control for potential sources of variability resulting from differences in material, all teachers in the study used similar instructional materials. The primary experimenter developed, along with the teachers, consistent lesson plans and instructional materials to teach content vocabulary and syllable practice; thus, all six teachers used sight words that were at an equivalent level of difficulty. All teachers utilized similar grade-level content lesson plans, and the materials (flash cards) were the same for teachers one and three and for teachers four, five, and six. Teacher two opted to use her own sight word cards, but the content of the cards covered the same stories and review of previous spelling tests (see Appendix A).

Measurement Procedures

Dependent measures. The dependent measures for this study included the following student behaviors: (1) disruption, (2) off-task behavior, and (3) active student responding (see Appendix B for definitions and coding guidelines). Global operational definitions were used for each of the dependent measures. To ensure that all observers applied consistent judgments on the target behaviors, the same definitions applied to every target student.

Disruptive behavior was defined as any behavior demonstrated by the target child that interrupted the flow of instruction or was disruptive to the on-task behavior of other students.

The following behaviors are examples of disruptive behaviors: getting up from seat, touching others, speaking out loud without raising a hand, taking things from others, throwing objects, making noise (tapping, banging), moving the head up and down or from side to side, talking to others, rocking in a chair, and so forth (Armendariz & Umbreit, 1999).

Off-task behavior was defined as occurring when the target student was not actively directed (looking) toward the teacher (e.g., looking around the room, looking at another student, talking to another student, looking at or drawing on the desk, playing with materials, hair, or clothes) (Miller et al., 1995).

Active student response was defined as engaging in the behavior that was expected during that condition: (a) independent hand raising for individual responding, (b) responding in unison with the group for choral responding, or (c) a mixture of both in the combination responding condition (Godfrey, Grisham-Brown, Schuster, & Hemmeter, 2003).

Recording procedures. To accurately capture the occurrence of both discrete and continuous behaviors, different types of measurement strategies were used. Student disruptive behaviors were measured using a frequency count and translated into rate per minute using the following formula: frequency of disruption/total number of minutes (i.e., 8 minutes). Active student responses were measured using a percentage formula derived by counting the number of ASR responses following a teacher's use of a specific OTR strategy (i.e., individual, choral, or mixed responding) and dividing each of those numbers by the total number of questions the student was exposed to.

Student off-task behavior was measured using momentary time sampling. Momentary time sampling is a common measurement strategy used to accurately measure continuous variables with long durations, such as off-task behavior.

An additional advantage of using momentary time sampling is that, because off-task behavior is observed for only a moment, the possibility of observing and collecting data on several behaviors increases (Skinner, Rhymer, & McDaniel, 2000). Off-task behavior was reported using a percentage formula: the total number of intervals of off-task behavior was divided by the total number of intervals observed.

All observations lasted a total of 8 minutes. During this time, the primary researcher served as the primary observer and collected real-time data using direct sequential recording of the teacher's use of OTR followed by student active responding during the activity period. Student disruptive and off-task behaviors were also recorded during the activity period using direct recording. Data were collected using a paper/pencil data collection system (see Appendix C for a sample data sheet). On the data collection sheet, a plus (+) was used to indicate on-task behavior and a minus (-) was used to indicate off-task behavior. The sequence of teacher-student OTR and response behaviors was coded by circling the occurrence of the teacher's specific type of OTR; a subsequent active student response was written in as ASR. Disruption was coded as a check mark on the coding sheet. All behaviors were mutually exclusive.

During the 8-minute observation period, the observer(s) continuously observed the teacher and target student. The observers were cued every 20 seconds (by a taped tone) to look at the targeted student and code whether the student was off-task at that moment (Gunter et al., 2003). Since the length of each session was 8 minutes, there were a total of 24 observations for off-task behavior. Student disruptive behavior and active student responding were measured using continuous, sequential recording.
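
For readers who wish to verify the arithmetic, the following is a minimal sketch, in Python, of how one 8-minute session's raw tallies map onto the three dependent measures defined above. The function names and the example counts are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: converting one 8-minute session's raw tallies into the
# three dependent measures described above. Example values are hypothetical.

SESSION_MINUTES = 8          # every observation lasted 8 minutes
SAMPLES_PER_SESSION = 24     # momentary time samples: one every 20 seconds

def disruption_rate(disruption_count, minutes=SESSION_MINUTES):
    """Rate per minute = frequency of disruption / total number of minutes."""
    return disruption_count / minutes

def asr_percentage(asr_count, questions_presented):
    """ASR % = responses following the teacher's OTR / questions the student was exposed to."""
    return 100.0 * asr_count / questions_presented

def off_task_percentage(off_task_intervals, total_intervals=SAMPLES_PER_SESSION):
    """Off-task % = intervals scored off-task / total momentary time samples."""
    return 100.0 * off_task_intervals / total_intervals

if __name__ == "__main__":
    # Hypothetical session: 6 disruptions, 30 ASRs to 40 questions, 5 of 24 samples off-task.
    print(disruption_rate(6))         # 0.75 disruptions per minute
    print(asr_percentage(30, 40))     # 75.0 percent ASR
    print(off_task_percentage(5))     # roughly 20.8 percent of intervals off-task
```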

Experimental Procedures

This study included two phases: teacher training and a comparison of the three interventions [i.e., individual responding vs. choral responding vs. mixed responding (70% choral responding and 30% individual responding)].

Teacher training. The teacher-training phase consisted of two stages: (a) information sharing and (b) practice until mastery occurred. Training was implemented during two 45-minute practice sessions on two separate days based on procedures employed by Sutherland et al. (2003). First, the primary researcher reviewed the operational definition of OTR (including choral and individual responding) with each teacher individually and then discussed its rationale and purpose for decreasing disruptive and off-task behavior and increasing active student responses. Following this step, several video clips of teachers using high rates of OTR (both choral and individual responding) were shown.

In the second step, the teacher practiced and demonstrated choral responding, individual responding, and a mixture of choral and individual responding using appropriate materials (i.e., flash cards of sight words) in front of the primary experimenter and two adults serving in the role of students. For the choral responding condition, teachers used the following sequence of instruction: (a) explain the expectations, procedures, and rules for the choral responding condition (specifically cueing procedures); (b) show a sight word card to the class; (c) cue the students verbally (5-4-3-2-1) to allow adequate wait time for all students to respond and say "everyone"; (d) provide feedback on whether the answer was correct or incorrect (e.g., "That is correct" or "That is not correct. The correct answer is ______"); and (e) select another sight word card and begin the next learning trial (Heward et al., 1989).

In the individual mode of responding, the following sequence of instruction was used by the teacher: (a) review the procedures, expectations, and rules for the mode of responding; (b) show a sight word card to the class and read the definition; (c) cue the students verbally (5-4-3-2-1) to allow adequate wait time for all students to respond and select one student to respond; (d) provide feedback on whether the answer was correct or incorrect (one error correction was made by the teacher); and (e) present another sight word card and begin the next learning trial (Randolph, 2007).

Next, the teacher practiced and demonstrated using a mixture of both modes of responding at a ratio of 70% choral to 30% individual. Teachers used the following sequence of instruction: (a) explain the expectations, procedures, and rules for the condition (specifically cueing procedures); (b) for choral responding, say "group" and show a sight word card to the class, and for a question that required an individual response, say "individual"; (c) cue the students verbally (5-4-3-2-1) to allow adequate wait time for all students to respond, and say "everyone" (for a choral response) or call on one student (for an individual response); (d) provide feedback on whether the answer was correct or incorrect (e.g., "That is correct" or "That is not correct. The correct answer is ______"); and (e) select another sight word card and begin the next learning trial (Heward et al., 1989). An illustration of individual responding, choral responding, and the combination of choral and individual responding can be seen in the sample lesson plans (see Appendix A).

The choral, individual, and combination responding mode training demonstration sessions lasted 8 minutes each, as measured with a stopwatch. The primary experimenter played the role of a student along with two other adults, responded to the flash cards, and cued the teacher every 60 seconds to indicate that 5 OTR should have been given. Following each session, the researcher showed the teacher his or her rate of OTR (individual, choral, or mixed mode) for the 8-minute session. Once the teacher had demonstrated the ability to use the three types of OTR (individual, choral, and a mixture of individual and choral) according to the sequence outlined and at a rate of 5 per minute for 8-minute training sessions, mastery had occurred and training was considered complete.

The training took approximately two 45-minute practice sessions on two separate days for each teacher.

Comparison of three interventions. After teacher training, an experimental comparison of the three intervention conditions began. Based on a randomized schedule, the teacher was instructed to implement either (a) choral responding, (b) mixed mode responding, or (c) individual responding, all at a rate of 5 OTR per minute. This rate was selected based on findings from Sainato et al. (1987), which suggested only slight differences between rates of 3 and 5 OTR per minute; therefore, the faster rate of 5 OTR per minute was used. Using an alternating treatments design, the three OTR conditions were compared. When implementing each mode of responding, the teachers were instructed to follow the guidelines and procedures described above. For example, during choral responding the teacher followed the above procedures and cued the entire class to respond, and during individual responding the teacher cued one student at a time.

In the combined mode condition, the teacher read from a list indicating the type of OTR, either a choral or an individual OTR. Using a ratio of 70% choral to 30% individual at a rate of 5 OTR per minute yielded 28 choral responses and 12 individual responses per lesson. For each individual response, the teacher said "This is individual," showed a sight word card, read the definition, counted down from five, called on a student, and asked, "What word?" The number of exposures to questions (40) was approximately equal across the three treatments; however, the number of opportunities to respond differed across the three conditions. In the choral responding condition the number of OTR was 40, during individual responding the number of OTR was 3 (the teacher was prompted to give the targeted student 3 OTR), and during mixed responding the number of OTR was 31 (i.e., 28 choral plus 3 individual). Each session lasted 8 minutes.

During this phase, the teacher's use of OTR and all student behaviors were observed and measured. Because of the rapidly alternating conditions that exist in this design and the possibility that the effects on a behavior in one condition may influence the behavior in another condition, the three conditions were randomly assigned to control for interaction effects (Kennedy, 2005). However, an a priori decision was made not to choose one condition three times in a row, and in a few instances, conditions were purposely selected to achieve stability at the end of the study phase.

Design

An alternating treatments design (Barlow & Hayes, 1979) was used for this study. In an alternating treatments design, at least two different treatments are implemented within a short time span (Barlow & Hayes, 1979). At least two treatments are randomly alternated with each other, and the effects on one or several behaviors are observed (Kennedy, 2005). An advantage of an alternating treatments design is that random assignment of conditions or counterbalancing can neutralize confounding factors, such as time of administration or setting, which may cause variability in the data.

Interobserver Agreement

Interobserver agreement (IOA). To provide evidence that the measures of the dependent variables were accurate, secondary observer(s) collected interobserver agreement data on at least 25% of the sessions within each treatment of the study (Kennedy, 2005). IOA checks for the dependent variable of disruption were measured using an exact event occurrence formula. To calculate exact agreement, the interval agreement formula was used (i.e., A/(A + D) × 100%). An agreement was scored when the two observers scored the same number of behavioral events during an interval of observation. Off-task behavior was also calculated using an interval agreement formula.

An agreement was scored when both observers recorded the same behavior (off-task or on-task) during a momentary time sample, and then the number of agreements was divided by the number of agreements plus disagreements and multiplied by 100 (i.e., A/(A + D) × 100%). ASR interobserver agreement was calculated using a total agreement method (S/L × 100%), where S is the smaller total and L is the larger total. Prior to beginning data collection, the primary and secondary observer(s) were trained to a reliability of at least 85% for three consecutive sessions on each dependent measure.

To control for observer drift (i.e., the change in interpretation among observers of the occurrence of the target behavior), the primary observer met with the secondary observer(s) on a weekly basis and/or repeated the training exercises once every 5 sessions. This ensured all observers remained in agreement about the definitions of targeted behaviors (Cooper, Heron, & Heward, 1987). Mean percentages of IOA across the three types of responding and the number of reliability checks are reported in Table 3-2. Interobserver agreement was calculated on average during 33.8% of observations. Average interobserver agreement for disruption was 93.02% (range 75-100%), for off-task behavior 91.5% (range 80.0-100%), and for ASR 98.63% (range 90.47-100%).

Treatment Integrity

Data were collected on the teachers' use of OTR and student behaviors and visually displayed through graphs after each session. The teachers performed at a rate of at least 5 OTR per minute during each session with a high degree of fidelity. The primary experimenter utilized a behavioral consultation model described by Noell, Witt, Gilbertson, Rainer, and Freeland (1997) and Noell et al. (2005) after sessions to maintain the teachers' rate of 5 OTR per minute and control for treatment drift. This model consists of giving teachers verbal feedback on their performance (rate of OTR per minute).

Sessions continued until a stable three- to five-data-point trend in disruption was obtained.

Direct measurement of the independent variable (i.e., the teacher's implementation of the OTR procedure [individual, choral, or mixed mode at a rate of 5 per minute]) was conducted as a measure of treatment integrity on approximately 15% of the sessions by two secondary observers. A checklist sheet was used to record the occurrence or non-occurrence of each step of the OTR instructional sequence in the individual, choral, and mixed modes as described above (see Appendix E).

An OTR was recorded when the teacher asked a question of an individual student or of the entire group (i.e., choral responding). Teachers' rate of OTR was measured using a frequency count and recorded on the data collection sheet (see Appendix E). IOA for the fidelity of the teachers' use of OTR was measured during each treatment condition across the six teachers on at least 15% of the sessions (Kennedy, 2005). The accuracy of the teachers' implementation of the individual, choral, and mixed procedures (the four components: cueing students, allowing adequate wait time [counting down from 5], asking questions, and providing feedback on student responses), as well as the number of OTR per 8-minute session, was calculated using the total agreement approach. In addition, the accuracy of the teachers' start of syllable practice after 4 minutes (within 10 s) was also calculated. During mixed responding, two observers followed the teacher's verbal prompt (i.e., "This is individual." "This is group.") and recorded on the treatment integrity checklist the accuracy with which the teacher implemented the 70:30 ratio (as well as the number of questions asked of the targeted student).

Social Validity

After the completion of the study, the teachers were asked to complete three social validity surveys to obtain information about their perceptions of the acceptability and usefulness of each intervention (see Appendix F).

The questions on the survey are presented in Table 3-3. Teachers rated the questions using a 4-point Likert scale, where 1 represents "not at all" and 4 "very much."

Table 3-1. Descriptions of participants

Name     Gender   Ethnicity          Age               SSBD score
Frank    Male     African American   7 years 6 months  25/35
D'Andy   Male     African American   8 years 2 months  31/35
Monty    Male     African American   7 years 5 months  29/35
Teo      Male     African American   8 years 2 months  32/35
Amber    Female   African American   8 years 2 months  30/35
Mats     Male     Caucasian          7 years 6 months  27/35

Table 3-2. Interobserver agreement data

Percentage agreement (by dependent measure)
  Disruptive: M = 93.0% (range 75-100%)
  Off-task:   M = 91.5% (range 80.0-100%)
  ASR:        M = 98.6% (range 90.5-100%)

Percentage of reliability checks (by condition)
  Individual: M = 36.1% (range 28.6-50.0%)
  Choral:     M = 33.8% (range 28.6-40.0%)
  Mixed:      M = 31.5% (range 25.0-40.0%)
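
The agreement figures in Table 3-2 follow from the two formulas described in the Interobserver Agreement section (exact interval agreement, A/(A + D) × 100, and total agreement, S/L × 100). Below is a minimal sketch of both calculations; the observer records, totals, and function names are hypothetical illustrations rather than study data.

```python
# Minimal sketch of the two IOA formulas described above; data are hypothetical.

def interval_agreement(obs1, obs2):
    """Exact interval agreement: A / (A + D) x 100, where an agreement is scored
    when both observers record the same thing for an interval."""
    agreements = sum(1 for a, b in zip(obs1, obs2) if a == b)
    disagreements = len(obs1) - agreements
    return 100.0 * agreements / (agreements + disagreements)

def total_agreement(total1, total2):
    """Total agreement: smaller total / larger total x 100."""
    smaller, larger = min(total1, total2), max(total1, total2)
    return 100.0 * smaller / larger

if __name__ == "__main__":
    # Momentary time samples coded by two observers ('+' = on-task, '-' = off-task).
    primary   = ['+', '+', '-', '+', '-', '+']
    secondary = ['+', '+', '-', '-', '-', '+']
    print(interval_agreement(primary, secondary))   # about 83.3% agreement

    # ASR totals counted by the two observers for the same session.
    print(total_agreement(29, 31))                   # about 93.5% agreement
```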

Table 3-3. Survey questions

1. Which intervention was the most difficult to implement? (Individual, Choral, or Mixed)
2. How difficult was it to implement the intervention?
3. How time-consuming was it to implement the intervention?
4. How helpful was the training session?
5. How helpful to your teaching instruction was the intervention?
6. After implementing the intervention, did you see an increase in the students' on-task behavior?
7. After implementing the intervention, did you see a decrease in the students' disruptive behavior compared to what you normally observe?
8. After implementing the intervention, did you see an increase in the students' active responses?
9. How likely is it that you will use the intervention in the future?
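
As a supplement to the Design and Experimental Procedures sections, the sketch below shows one way a randomized condition schedule could be drawn while honoring the a priori rule that no condition be chosen three times in a row. The function name, session count, and seed are illustrative assumptions; this is not the schedule generator used in the study.

```python
import random

# Minimal sketch: draw a random sequence of the three OTR conditions for an
# alternating treatments design, never allowing the same condition three
# sessions in a row (the a priori rule described in Experimental Procedures).
CONDITIONS = ["individual", "choral", "mixed"]

def random_schedule(n_sessions, seed=None):
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_sessions):
        options = list(CONDITIONS)
        # If the last two sessions used the same condition, exclude it this time.
        if len(schedule) >= 2 and schedule[-1] == schedule[-2]:
            options.remove(schedule[-1])
        schedule.append(rng.choice(options))
    return schedule

if __name__ == "__main__":
    # Hypothetical 21-session schedule (roughly seven sessions per condition).
    print(random_schedule(21, seed=1))
```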

CHAPTER 4
RESULTS

The purpose of this study was to examine the effects of three types of OTR (individual responding, choral responding, and mixed individual and choral responding) on the disruptive behavior, off-task behavior, and active student responding of six children identified as at-risk for EBD. These effects were determined by collecting behavioral observation data on the students' academically related behaviors in a general education classroom using an alternating treatments design. Data were recorded and then visually displayed on graphs for analysis (Figures 4-1 through 4-3). Treatment integrity data were collected to validate implementation of the study's procedures, and social validity data were collected to assess the teachers' perceptions of three components of the study: social significance, social acceptability, and social importance of the interventions. This chapter reports the results of all of these efforts, beginning with the outcomes for each participant during the three conditions of the study.

88 variability across data points in the primary dependent variable (disruption) in each condition (type of OTR). Finally, the degree (slight, moderate, large) of magn itude of difference, trend, and variability of data paths were reviewed and determined. Results for the six participants are summarized (Figures 4-1 4-3). These data allow for an examination of the overall performance betw een subjects as well as condition-by-condition comparisons within subjects. The results from the study indicate that mixed responding was associated with the lowest levels of disruptive behavior for five of the six students, and that none of the three types of responding c onsistently produced low levels of off-task behavior or high levels of active student responding (ASR) across a ll six participants. For one participant (Teo), the level of responding varied widely, and cons iderable variability was observed for the dependent variables of disruptive behavior a nd off-task behavior. In spite of the above differences, the data provide information on th e effectiveness of the three types of OTR. Rate of Disruptive Behavior Disruptive student behaviors were m easured using a frequenc y count and translated into rate per minute using the following formula: fre quency of disruption/tota l number of minutes (i.e., 8-minutes). Disruptive behavior was defined as when the target student performed a behavior that interrupts, or has the potential to interrupt, the inst ruction in the cl assroom or the learning of another student. Five out of six students demonstrat ed a lower rate of disruptive behavior in the mixed responding condition in comparison to the individual and choral responding condition. Based on mean values, one st udent (Teo) demonstrated a slightly lower rate of disruptive behavior in the individua l responding condition than the choral or mixed responding condition. Results for st udent disruptive behavior am ong the six participants are presented (Figure 4-1).

89 Participant 1: Frank. The mean rate of disruptive behavior for Frank during individual responding was 1.54/min (range = 1.25 1.88/min), th e mean rate during choral responding was 0.71/min (range = 0.25 1.00/min), and during mixe d responding the mean rate was 0.16/min (range = 0.00 0.38/min). The magnitude of differe nce in level of rate of disruptive behavior between individual and choral re sponding and choral and mixed responding was small, while the magnitude of difference in level of rate of di sruptive behavior between individual and mixed responding was moderate. During individual res ponding, there was a small downward trend in disruptive behavior, with little variability, and no overlapping data poi nts with choral and individual responding. During chor al responding there was a slight upward trend in disruptive behavior, with little variability, and one data po int overlapped with two data points in the mixed responding condition. During mixed responding there was a slight downward trend in disruptive behavior, with little variability. Participant 2: DAndy. The mean rate of disrupti ve behavior for DAndy during individual responding was 0.89/min (range = 0.5 1.5/min), the mean rate during choral responding was 0.43/min (range = 0.250.75/min), and during mixed responding the mean rate was 0.9/min (range = 0.00 0.38/min). The magnitude of difference in level of rate of disruptive behavior between individual and choral respondi ng and choral and mixed responding was small, while the magnitude of difference in level of ra te of disruptive behavior between individual and mixed responding was moderate. During individual responding there was a slight upward trend in disruptive behavior, with little variability, and one data point overlappe d with three data points in choral responding. During chor al responding there was a slight upward trend in disruptive behavior, with little variabilit y, and three data points overlapped with four data points in mixed

During mixed responding there was a slight downward trend in disruptive behavior, with little variability.

Participant 3: Monty. The mean rate of disruptive behavior for Monty during individual responding was 1.12/min (range = 1-1.5/min), the mean rate during choral responding was 0.81/min (range = 0.5-1.38/min), and during mixed responding the mean rate was 0.49/min (range = 0.25-0.75/min). The magnitude of difference in level of rate of disruptive behavior in the last three data points between individual and choral responding and between choral and mixed responding was small, while the magnitude of difference in level of rate of disruptive behavior in the last three data points between individual and mixed responding was moderate. During individual responding there was a very slight upward trend in disruptive behavior, with little variability, and three data points overlapped with three data points in choral responding. During choral responding there was a very slight upward trend in disruptive behavior, with little variability, and two data points overlapped with six data points in mixed responding. During mixed responding there was a very slight upward trend in disruptive behavior, with little variability.

Participant 4: Teo. The mean rate of disruptive behavior for Teo during individual responding was 1.52/min (Mdn = 1.5; range = 0.625-2.75/min), the mean rate during choral responding was 1.65/min (Mdn = 1.5; range = 0.38-4.13/min), and during mixed responding the mean rate was 1.61/min (Mdn = 1.375; range = 0.88-2.75/min). Due to the variability in the data, no clear differences emerged in level of rate of disruptive behavior between individual, choral, and mixed responding. During individual responding there was a moderate upward trend in disruptive behavior, with moderate variability. During choral responding there was a moderate upward trend in disruptive behavior, with large variability.

During mixed responding there was a moderate downward trend in disruptive behavior, with moderate variability.

Participant 5: Amber. The mean rate of disruptive behavior for Amber during individual responding was 1.36/min (range = 1.13-1.50/min), the mean rate during choral responding was 0.9/min (range = 0.75-1.13/min), and during mixed responding the mean rate was 0.44/min (range = 0.16-0.75/min). The magnitude of difference in level of rate of disruptive behavior in the last three data points between individual and choral responding and between choral and mixed responding was small, while the magnitude of difference in level of rate of disruptive behavior in the last three data points between individual and mixed responding was moderate. During individual responding there was a flat trend in disruptive behavior, with very little variability, and one data point overlapped with one data point in choral responding. During choral responding there was a very slight downward trend in disruptive behavior, with very little variability, and one data point overlapped with one data point in the mixed responding condition. During mixed responding there was a very small downward trend in disruptive behavior, with little variability.

Participant 6: Mats. The mean rate of disruptive behavior for Mats during individual responding was 1.25/min (range = 1.00-1.36/min), the mean rate during choral responding was 0.35/min (range = 0.25-0.50/min), and during mixed responding the mean rate was 0.08/min (range = 0.00-0.13/min). The magnitude of difference in level of rate of disruptive behavior in the last three data points between individual and choral responding was moderate, the magnitude of difference in level between choral and mixed responding was small, and the magnitude of difference in level of rate of disruptive behavior in the last three data points between individual and mixed responding was moderate. During individual responding there was a flat trend in disruptive behavior, with very little variability.

During choral responding there was a very slight upward trend in disruptive behavior, with very little variability. During mixed responding there was a very small downward trend in disruptive behavior, with little variability. There was no overlap in data points between the three types of responding.

Percentage of Off-Task Behavior

The percentage of target student off-task behavior was based on momentary time sampling with 20-second intervals, collected during eight-minute instructional sessions. Off-task behavior was defined as occurring when the target student was not sitting in his or her seat and was not actively directed toward the teacher. Five out of six students demonstrated a lower mean percentage of off-task behavior in the mixed responding condition in comparison to the individual and choral responding conditions. One student (Amber) demonstrated a slightly lower mean percentage of off-task behavior in the choral responding condition than in the mixed responding condition. Results for student off-task behavior among the six participants are presented in Figure 4-2.

Participant 1: Frank. The mean percentage of intervals Frank was off-task during individual responding was 56.25% (range = 45.83-62.5%); similarly, the mean percentage of off-task behavior for choral responding was 32.73% (range = 25.00-41.66%), and the mean percentage for mixed responding was 16.55% (range = 8.32-25.00%). The magnitude of difference in level of off-task behavior in the last three data points between individual and choral responding and between choral and mixed responding was moderate, while the magnitude of difference in level of off-task behavior between individual and mixed responding was large. During individual responding there was a slight downward trend in off-task behavior, with little variability, and no overlapping data points with choral or mixed responding. During choral responding there was a slight downward trend in off-task behavior, with little variability, and one data point overlapped with two data points in the mixed responding condition.

During mixed responding there was a moderate downward trend in off-task behavior, with very little variability.

Participant 2: D'Andy. The mean percentage of intervals D'Andy was off-task during individual responding was 25.6% (range = 16.66-33.33%); similarly, the mean percentage of off-task behavior for choral responding was 19.05% (range = 12.50-25.00%), and the mean percentage for mixed responding was 9.89% (range = 4.17-16.66%). Due to the variability and overlap in the data, there were no clear differential effects in level of off-task behavior in the last three data points between individual and choral responding; the magnitude of difference in level of off-task behavior in the last three data points between individual and mixed responding and between choral and mixed responding was moderate. During individual responding there was a slight upward trend in off-task behavior, with little variability, and one data point overlapped with four data points in choral responding and no data points overlapped with mixed responding. During choral responding there was a slight upward trend in off-task behavior, with little variability, and four data points overlapped with four data points in mixed responding. During mixed responding there was a slight upward trend in off-task behavior, with moderate variability.

Participant 3: Monty. The mean percentage of intervals Monty was off-task during individual responding was 40.27% (range = 29.16-62.50%); similarly, the mean percentage of off-task behavior for choral responding was 26.56% (range = 12.50-37.50%), and the mean percentage for mixed responding was 16.67% (range = 8.33-37.50%). The magnitude of difference in level of off-task behavior in the last three data points between individual and choral responding and between choral and mixed responding was moderate, while the magnitude of difference in level of off-task behavior between individual and mixed responding was large.

During individual responding there was a moderate upward trend in off-task behavior, with moderate variability, and five data points overlapped with four data points in choral responding and five data points overlapped with one data point in mixed responding. During choral responding there was a slight upward trend in off-task behavior, with moderate variability, and all eight data points overlapped with one data point in mixed responding. During mixed responding there was a small upward trend in off-task behavior, with moderate variability.

Participant 4: Teo. Because there was a great deal of variability in Teo's data, mean and median scores are reported. The mean percentage of intervals Teo was off-task during individual responding was 28.47% (Mdn = 22.92; range = 4.16-62.50%); similarly, the mean percentage of off-task behavior for choral responding was 31.24% (Mdn = 27.08; range = 8.30-62.50%), and the mean percentage for mixed responding was 22.02% (Mdn = 20.83; range = 8.33-33.33%). Due to the variability in the data, there were no clear differential effects in level of off-task behavior between individual, choral, and mixed responding. During individual responding there was a slight downward trend in off-task behavior, with large variability. During choral responding there was a moderate downward trend in off-task behavior, with large variability. During mixed responding there was a small downward trend in off-task behavior, with moderate variability.

Participant 5: Amber. The mean percentage of intervals Amber was off-task during individual responding was 47.50% (range = 25.00-66.70%); similarly, the mean percentage of off-task behavior for choral responding was 23.33% (range = 16.66-33.33%), and the mean percentage for mixed responding was 24.30% (range = 16.66-33.33%). The magnitude of difference in level of off-task behavior in the last three data points between individual and choral responding was large, while the magnitude of difference in level of off-task behavior between choral and mixed responding and between individual and mixed responding was moderate.

During individual responding there was a moderate upward trend in off-task behavior, with moderate variability, and one data point overlapped with two data points in choral responding and one data point overlapped with three data points in mixed responding. During choral responding there was a flat trend in off-task behavior, with small variability, and six data points overlapped with six data points in the mixed responding condition. During mixed responding there was a moderate upward trend in off-task behavior, with moderate variability.

Participant 6: Mats. The mean percentage of intervals Mats was off-task during individual responding was 54.17% (range = 45.83-66.67%); similarly, the mean percentage of off-task behavior for choral responding was 28.47% (range = 20.83-45.83%), and the mean percentage for mixed responding was 23.33% (range = 20.83-33.33%). The magnitude of difference in level of off-task behavior in the last three data points between individual and choral responding was moderate, the magnitude of difference in level of off-task behavior between choral and mixed responding was small, and the magnitude of difference in level of off-task behavior between individual and mixed responding was moderate. During individual responding there was a slight downward trend in off-task behavior, with moderate variability, and two data points overlapped with one data point in choral responding and no data points overlapped with mixed responding. During choral responding there was a small upward trend in off-task behavior, with moderate variability, and two data points overlapped with five data points in the mixed responding condition. During mixed responding there was a small downward trend in off-task behavior, with moderate variability.

Percentage of Active Student Responding

The percentage of active student responding was defined as the target student's hand raising during the 3-second wait time during individual responding, verbally responding with the class after the teacher prompt "everybody" during choral responding, and hand raising/verbal responding during mixed responding.

All six students demonstrated a higher percentage of ASR in the mixed responding condition in comparison to the individual responding condition. Three students (Frank, D'Andy, and Monty) demonstrated a higher percentage of ASR in the mixed responding condition in comparison to the choral responding condition. Three students (Teo, Amber, and Mats) demonstrated a higher percentage of ASR in the choral responding condition than in the mixed responding condition. Results for active student responses among the six participants are presented in Figure 4-3.

Participant 1: Frank. The mean percentage of Frank's active student responding during individual responding was 22.61% (range = 12.5-46.15%); similarly, the mean percentage of active student responding for choral responding was 69.34% (range = 44.73-89.18%), and the mean percentage of active student responding for mixed responding was 84.35% (range = 76.31-94.73%). The magnitude of difference in level of ASR in the last three data points between individual and choral responding was large, the magnitude of difference in level of ASR between choral and mixed responding was small, and the magnitude of difference in level of ASR between individual and mixed responding was large. During individual responding there was a moderate upward trend in ASR, with moderate variability, and there were no overlapping data points with choral responding or mixed responding. During choral responding there was a moderate upward trend in ASR, with large variability, and four data points overlapped with five data points in the mixed responding condition; during the last data point (marked by an asterisk) Frank was noticeably sleepy. During mixed responding there was a slight upward trend in ASR, with small variability.

Participant 2: D'Andy. The mean percentage of D'Andy's active student responding during individual responding was 89.32% (range = 74.28-94.73%); similarly, the mean percentage of active student responding for choral responding was 93.25% (range = 81.57-100.00%), and the mean percentage of active student responding for mixed responding was 97.28% (range = 88.88-100.00%). There were no clear differential effects in level of ASR between individual, choral, and mixed responding. During individual responding there was a slight downward trend in ASR, with moderate variability. During choral responding there was a slight upward trend in ASR, with moderate variability. During mixed responding there was a slight downward trend in ASR, with small variability.

Participant 3: Monty. The mean percentage of Monty's active student responding during individual responding was 60.19% (range = 33.33-91.89%); similarly, the mean percentage of active student responding for choral responding was 84.20% (range = 71.42-91.66%), and the mean percentage of active student responding for mixed responding was 90.50% (range = 77.77-100.00%). The magnitude of difference in level of ASR in the last three data points between individual and choral responding was large, the magnitude of difference in level of ASR between choral and mixed responding was small, and the magnitude of difference in level of ASR between individual and mixed responding was large. During individual responding there was a steep downward trend in ASR, with large variability, and four data points overlapped with seven data points during choral responding and three data points overlapped with one data point in mixed responding. During choral responding there was a slight downward trend in ASR, with moderate variability, and seven data points overlapped with five data points in the mixed responding condition. During mixed responding there was a flat trend in ASR, with small variability.

Participant 4: Teo. The mean percentage of Teo's active student responding during individual responding was 82.20% (Mdn = 78.75; range = 52.63-94.44%); similarly, the mean percentage of active student responding for choral responding was 93.79% (Mdn = 96.24; range = 82.60-100.00%), and the mean percentage of active student responding for mixed responding was 84.28% (Mdn = 85.29; range = 72.72-94.17%). The magnitude of difference in level of ASR between individual, choral, and mixed responding was small. During individual responding there was a slight upward trend in ASR, with moderate variability. During choral responding there was a flat trend in ASR, with small variability. During mixed responding there was a slight upward trend in ASR, with small variability.

Participant 5: Amber. The mean percentage of Amber's active student responding during individual responding was 58.10% (range = 34.14-92.10%); similarly, the mean percentage of active student responding for choral responding was 96.38% (range = 94.87-100.00%), and the mean percentage of active student responding for mixed responding was 87.65% (range = 80.55-97.14%). The magnitude of difference in level of ASR in the last three data points between individual and choral responding was large, the magnitude of difference in level of ASR between choral and mixed responding was small, and the magnitude of difference in level of ASR between individual and mixed responding was large. During individual responding there was a large downward trend in ASR, with large variability, and one data point overlapped with one data point during choral responding and four data points overlapped with three data points in mixed responding. During choral responding there was a flat trend in ASR, with small variability, and four data points overlapped with two data points in mixed responding. During mixed responding there was a slight downward trend in ASR, with moderate variability.

Participant 6: Mats. The mean percentage of Mats' active student responding during individual responding was 42.24% (range = 13.04-67.50%); similarly, the mean percentage of active student responding for choral responding was 75.27% (range = 40.00-94.44%), and the mean percentage of active student responding for mixed responding was 62.86% (range = 56.41-67.50%). There were no clear differential effects in level of ASR between individual and mixed responding. During individual responding there was a flat trend in ASR, with moderate variability. During choral responding there was a small downward trend in ASR, with large variability. During mixed responding there was a flat trend in ASR, with small variability.

Treatment Integrity

Treatment integrity is reported for the three types of treatment conditions (individual responding, choral responding, and mixed individual and choral responding). A checklist was used to measure: (a) rate of teachers' OTR per minute, (b) percentage of the four-step instructional sequence completed (cue, wait time, question, and feedback), (c) occurrence or non-occurrence of each step of the OTR instructional sequence, and (d) start of syllable practice (i.e., after 4 minutes teachers were cued to begin syllable practice). For each step in the procedure, a check was given if the teacher implemented the step correctly.

Teacher 1: Mrs. Pence. Treatment integrity was calculated on 1 of 6 individual responding sessions (16.7%) and 1 of 7 choral and mixed responding sessions (14.3%). The average integrity for rate of OTR was 100%. The mean rate of OTR was 4.80/min (range = 4.64-5.0 OTR/min). Integrity on the sequence of steps averaged 98.5% (range = 94.03-100%), and integrity on steps in the sequence averaged 100% for cue, wait time, and questions, and 94.03% for feedback (range = 90.0-100%).

Teacher 2: Mrs. Hill. Treatment integrity was calculated on 1 of 7 individual and choral responding sessions (14.3%) and 2 of 8 mixed responding sessions (25.0%). The average integrity for rate of OTR was 100%. The mean rate of OTR was 4.65/min (range = 4.64-5.0 OTR/min). Integrity on the sequence of steps averaged 99.8% (range = 99.2-100%), and integrity on steps in the sequence averaged 100% for cue, wait time, and questions, and 99.2% for feedback (range = 97.6-100%).


Teacher 2: Mrs. Hill. Treatment integrity was calculated on 1 of 7 individual and choral responding sessions (14.3%), and 2 of 8 mixed responding sessions (25.0%). The average integrity for rate of OTR was 100%. The mean rate of OTR per minute was 4.65 (range = 4.64–5.0 OTR/min). Integrity on the sequence of steps averaged 99.8% (range = 99.2%–100%), and integrity on steps in the sequence averaged 100% for cue, wait time, and questions, and 99.2% for feedback (range = 97.6%–100%).

Teacher 3: Mr. Clinton. Treatment integrity was calculated on 2 of 9 individual and mixed responding sessions (22.2%), and 2 of 8 choral responding sessions (25.0%). The average integrity for rate of OTR was 100%. The mean rate of OTR per minute was 4.75 (range = 4.64–5.0 OTR/min). Integrity on the sequence of steps averaged 99.7% (range = 98.68%–100%), and integrity on steps in the sequence averaged 100% for cue, wait time, and questions, and 98.68% for feedback (range = 97.36%–100%).

Teacher 4: Mrs. Simpson. Treatment integrity was calculated on 1 of 6 individual and mixed responding sessions (16.7%), and 2 of 10 choral responding sessions (20.0%). The average integrity for rate of OTR was 100%. The mean rate of OTR per minute was 4.76 (range = 4.64–5.0 OTR/min). Integrity on the sequence of steps was 100%, and integrity on steps in the sequence averaged 100% for cue, wait time, questions, and feedback.

Teacher 5: Ms. Mallory. Treatment integrity was calculated on 1 of 5 individual and choral responding sessions (20.0%), and 1 of 6 mixed responding sessions (16.7%). The average integrity for rate of OTR was 100%. The mean rate of OTR per minute was 4.74 (range = 4.38–5.0 OTR/min). Integrity on the sequence of steps was 100%, and integrity on steps in the sequence averaged 100% for cue, wait time, questions, and feedback.

Teacher 6: Ms. Orwell. Treatment integrity was calculated on 1 of 6 individual and choral responding sessions (16.7%), and 2 of 5 mixed responding sessions (40.0%). The average integrity for rate of OTR was 100%. The mean rate of OTR per minute was 4.88 (range = 4.5–5.0 OTR/min). Integrity on the sequence of steps was 100%, and integrity on steps in the sequence averaged 100% for cue, wait time, questions, and feedback.
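
The integrity percentages reported above can be reproduced directly from the observer checklists. The following is a minimal sketch, assuming a hypothetical record format in which each learning trial logs whether the cue, wait time, question, and feedback steps occurred; it is an illustration of the arithmetic, not the instrument used in this study.

```python
# Minimal sketch (hypothetical record format, not the study's instrument):
# computing treatment integrity from an observer checklist for one session.
def integrity_summary(trials, session_minutes=8):
    steps = ("cue", "wait_time", "question", "feedback")
    # Rate of OTR per minute: number of presented trials / session length.
    otr_rate = len(trials) / session_minutes
    # Per-step integrity: percentage of trials in which each step occurred.
    per_step = {
        step: 100.0 * sum(t[step] for t in trials) / len(trials)
        for step in steps
    }
    # Sequence integrity: percentage of trials with all four steps completed.
    full_sequence = 100.0 * sum(all(t[s] for s in steps) for t in trials) / len(trials)
    return otr_rate, per_step, full_sequence

# Example with made-up data: 38 trials, the first two missing teacher feedback.
example = [{"cue": True, "wait_time": True, "question": True, "feedback": i >= 2}
           for i in range(38)]
rate, per_step, sequence = integrity_summary(example)
print(f"OTR rate = {rate:.2f}/min, sequence integrity = {sequence:.1f}%")
print(per_step)
```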


Social Validity

The participants' teachers completed Likert-type rating scales to determine the social validity of the instructional process and outcomes, respectively. Likert values ranged from 1 to 4: a rating of 1 indicated that the process and outcomes were not at all useful (or not at all difficult to implement), 2 indicated somewhat useful (or difficult to implement), 3 indicated fairly useful (or difficult to implement), and 4 indicated that the process and outcomes were very useful (or very difficult to implement) for the teacher.

Question 1. In response to which intervention was the most difficult to implement, four of six teachers thought mixed responding was the most difficult to implement, one teacher believed choral responding was the most difficult, and one teacher replied that individual responding was the most difficult to implement.

Question 2. In response to teachers' perceived difficulty with the study's procedures, all six teachers responded 1 (not at all) for individual and choral responding; for mixed responding, responses averaged 2.33 (range: 1-3).

Question 3. In response to how much time teachers perceived the study took away from their classroom instruction, the mean was 1.83 (range: 1-3; 1 = not at all and 4 = very).

Question 4. In response to teachers' perceived helpfulness of the training sessions, all six teachers responded 4 (very).

Question 5. In response to the usefulness of the study's findings to the teacher and student, the mean was 2.5 (between somewhat and fairly) (range: 2-4).

Question 6. In response to teachers' perceived decreases in off-task behavior, the mean was 2 (somewhat noticed) (range: 1-3) during individual responding and 2.5 (between somewhat and fairly noticed) (range: 1-4) during choral and mixed responding.


Question 7. In response to teachers' perceived decreases in disruptive behavior, the mean was 2 (somewhat noticed) (range: 1-4) for individual responding, 1.8 (somewhat noticed) (range: 1-4) for choral responding, and 2.17 (range: 1-4) for mixed responding.

Question 8. In response to teachers' perceived increases in active student responding (ASR), the mean was 1.83 (somewhat) (range: 1-4) for individual responding, 3.0 (fairly) (range: 1-4) for choral responding, and 2.17 (somewhat) (range: 1-3) for mixed responding.

Question 9. In response to teachers' perceptions of how likely they would be to use the intervention in the future, the question was not applicable for individual responding because all six teachers stated that they already use individual responding. The mean was 2.83 (range: 1-4) for choral responding and 2.5 (range: 1-4) for mixed responding (where 3.0 = fairly likely).

Summary

Based on the reported results of individual participants' data, conclusions can be drawn across the six participants. With the exception of Teo, all participants demonstrated consistently fewer disruptive behaviors during the mixed responding condition in comparison to the choral or individual responding conditions. In addition, four of six target children displayed a smaller percentage of intervals of off-task behavior during mixed responding in comparison to the other two conditions. Results for active student responding are not as clear in that stable data paths were not obtained; the means of ASR were higher for participants one through three (Frank, D'Andy, and Monty) in the mixed responding condition when compared to the individual and choral responding conditions, while the means of ASR were higher for participants four through six during choral responding compared to individual or mixed responding. However, for all six participants the percentages of ASR were higher during choral and mixed responding in comparison to individual responding, supporting earlier research findings (Sindelar et al., 1986).


For participants one through three, a decrease in disruptive behavior (lowest rate during mixed responding) covaried with decreases in percentages of intervals of off-task behavior and increases in ASR. For participants four through six, the covariation among disruptive behavior, percentages of intervals off-task, and ASR is not as clear. For participant four, the mean rate of disruptive behavior was slightly lower during individual responding, while the mean percentage of intervals of off-task behavior was lowest during mixed responding and the highest percentage of ASR occurred during choral responding. For participant five, the mean rate of disruptive behavior was lowest during mixed responding, while the mean percentage of off-task behavior was slightly lower during choral responding than during mixed responding; the mean percentage of ASR was highest during choral responding. Finally, for participant six, the lowest mean rate of disruptive behavior and percentage of off-task intervals occurred during mixed responding, while ASR was highest during choral responding.

Treatment integrity data revealed that all six teachers were able to implement the three types of teaching strategies with a high degree of fidelity. Social validity data demonstrated that teachers perceived mixed responding as the most difficult to implement even though that teaching strategy was the most effective in decreasing disruptive behavior.


Table 4-1. Means of rate of disruptive behavior, percentages of intervals off-task, and active student responding (ASR)

Participant 1 (Frank)
  Individual: Disruption = 1.54/min (range = 1.25–1.88/min); Off-task = 56.25% (range = 45.83–62.5%); ASR = 22.61% (range = 12.5–46.15%)
  Choral: Disruption = 0.71/min (range = 0.25–1.00/min); Off-task = 32.73% (range = 25.00–41.66%); ASR = 69.34% (range = 44.73–89.18%)
  Mixed: Disruption = 0.16/min (range = 0.00–0.38/min); Off-task = 16.55% (range = 8.32–25.00%); ASR = 84.35% (range = 76.31–94.73%)

Participant 2 (D'Andy)
  Individual: Disruption = 0.89/min (range = 0.5–1.5/min); Off-task = 25.6% (range = 16.66–33.33%); ASR = 89.32% (range = 74.28–94.73%)
  Choral: Disruption = 0.43/min (range = 0.25–0.75); Off-task = 19.05% (range = 12.50–25.00%); ASR = 93.25% (range = 81.57–100.00%)
  Mixed: Disruption = 0.19/min (range = 0.00–0.38); Off-task = 9.89% (range = 4.17–16.66%); ASR = 97.28% (range = 88.88–100.00%)

Participant 3 (Monty)
  Individual: Disruption = 1.21/min (range = 1–1.5/min); Off-task = 40.27% (range = 29.16–62.50%); ASR = 60.19% (range = 33.33–91.89%)
  Choral: Disruption = 0.81/min (range = 0.5–1.38); Off-task = 26.56% (range = 12.50–37.50%); ASR = 84.20% (range = 71.42–91.66%)
  Mixed: Disruption = 0.49/min (range = 0.25–0.75); Off-task = 16.67% (range = 8.33–37.50%); ASR = 90.50% (range = 77.77–100.00%)


Table 4-1. Continued

Participant 4 (Teo)
  Individual: Disruption = 1.52/min (range = 0.625–2.75/min); Off-task = 28.47% (range = 4.16–62.50%); ASR = 82.20% (range = 52.63–94.44%)
  Choral: Disruption = 1.65/min (range = 0.38–4.13); Off-task = 31.24% (range = 8.30–62.50%); ASR = 93.79% (range = 82.60–100.00%)
  Mixed: Disruption = 1.61/min (range = 0.88–2.75); Off-task = 22.02% (range = 8.33–33.33%); ASR = 84.28% (range = 72.72–94.17%)

Participant 5 (Amber)
  Individual: Disruption = 1.33/min (range = 1.13–1.50/min); Off-task = 47.50% (range = 25.00–66.70%); ASR = 58.10% (range = 34.14–92.10%)
  Choral: Disruption = 0.90/min (range = 0.75–1.13); Off-task = 23.33% (range = 16.66–33.33%); ASR = 96.38% (range = 94.87–100%)
  Mixed: Disruption = 0.44/min (range = 0.16–0.75); Off-task = 24.30% (range = 16.66–33.33%); ASR = 87.65% (range = 80.55–97.14%)

Participant 6 (Mats)
  Individual: Disruption = 1.25/min (range = 1.00–1.36/min); Off-task = 54.17% (range = 45.83–66.67%); ASR = 42.2% (range = 13.04–67.5%)
  Choral: Disruption = 0.35/min (range = 0.25–0.50); Off-task = 28.47% (range = 20.83–45.83%); ASR = 75.27% (range = 40.00–94.44%)
  Mixed: Disruption = 0.08/min (range = 0.00–0.13); Off-task = 23.33% (range = 20.83–33.33%); ASR = 62.80% (range = 56.41–67.50%)


Table 4-2. Social validity results

Question 1. Which intervention was the most difficult to implement?
  Mixed (N = 4); Choral (N = 1); Individual (N = 1)

Question 2. Teachers' perceived difficulty with the study's procedures.
  Mean = 2.33 (range: 1-3)

Question 3. Teachers' perceived disruptiveness of the overall study to the classroom.
  Mean = 1.83 (range: 1-3)

Question 4. Teachers' perceived helpfulness of the training sessions.
  All responded with 4

Question 5. Usefulness of the study's findings to the teacher and student.
  Mean = 2.5 (range: 2-4)

Question 6. Teachers' perceived decreases in off-task behavior.
  Individual responding: Mean = 2 (range: 1-3); Choral responding: Mean = 2.5 (range: 1-3); Mixed responding: Mean = 2.5 (range: 1-3)

Question 7. Teachers' perceived decreases in disruptive behavior.
  Individual responding: Mean = 2 (range: 1-4); Choral responding: Mean = 1.8 (range: 1-4); Mixed responding: Mean = 2.17 (range: 1-4)

Question 8. Teachers' perceived increases in active student responding.
  Individual responding: Mean = 1.8 (range: 1-4); Choral responding: Mean = 3.0 (range: 1-4); Mixed responding: Mean = 2.17 (range: 1-3)

Question 9. Teachers' perceptions of how likely they will use the intervention in the future.
  Individual responding: N/A; Choral responding: Mean = 2.83 (range: 1-4); Mixed responding: Mean = 2.5 (range: 1-4)


Table 4-3. Treatment integrity results

Mrs. Pence
  Number of OTR: 100%
  Percentage of 4-step instructional sequence: M = 98.5% (range = 94.03–100%)
  OTR instructional sequence: Cue = 100%; Wait time = 100%; Question = 100%; Feedback M = 94.03% (range = 90–100%)
  Start of syllable practice: 100%
  Mean OTR rate: M = 4.80/min (range = 4.64–5.0)

Mrs. Hill
  Number of OTR: 100%
  Percentage of 4-step instructional sequence: M = 99.8% (range = 99.21–100%)
  OTR instructional sequence: Cue = 100%; Wait time = 100%; Question = 100%; Feedback M = 99.21% (range = 97.63–100%)
  Start of syllable practice: 100%
  Mean OTR rate: M = 4.65/min (range = 4.64–5.0)

Mr. Clinton
  Number of OTR: 100%
  Percentage of 4-step instructional sequence: M = 99.67% (range = 98.68–100%)
  OTR instructional sequence: Cue = 100%; Wait time = 100%; Question = 100%; Feedback M = 98.68% (range = 97.36–100%)
  Start of syllable practice: 100%
  Mean OTR rate: M = 4.75/min (range = 4.64–5.0)


Table 4-3. Continued

Mrs. Simpson
  Number of OTR: 100%
  Percentage of 4-step instructional sequence: 100%
  OTR instructional sequence: Cue = 100%; Wait time = 100%; Question = 100%; Feedback = 100%
  Start of syllable practice: 100%
  Mean OTR rate: M = 4.76/min (range = 4.64–5.0)

Ms. Mallory
  Number of OTR: 100%
  Percentage of 4-step instructional sequence: 100%
  OTR instructional sequence: Cue = 100%; Wait time = 100%; Question = 100%; Feedback = 100%
  Start of syllable practice: 100%
  Mean OTR rate: M = 4.74/min (range = 4.38–5.0)

Ms. Orwell
  Number of OTR: 100%
  Percentage of 4-step instructional sequence: 100%
  OTR instructional sequence: Cue = 100%; Wait time = 100%; Question = 100%; Feedback = 100%
  Start of syllable practice: 100%
  Mean OTR rate: M = 4.88/min (range = 4.5–5.0)


Figure 4-1. Rate of disruptive behavior per minute across sessions (panels: Frank, D'Andy, Monty, Teo, Amber, Mats). Open circles = individual responding, closed squares = choral responding, and open triangles = mixed responding.


Figure 4-2. Percentage of intervals off-task across sessions (panels: Frank, D'Andy, Monty, Teo, Amber, Mats). Open circles = individual responding, closed squares = choral responding, and open triangles = mixed responding.


Figure 4-3. Percentage of active student responding across sessions (panels: Frank, D'Andy, Monty, Teo, Amber, Mats). Open circles = individual responding, closed squares = choral responding, and open triangles = mixed responding.


CHAPTER 5
DISCUSSION

The purpose of this chapter is to interpret and explain the results of the current study, which was designed to investigate the effects of three types of questioning procedures (individual responding, choral responding, and a mixture of 70% choral responding and 30% individual responding) on the disruptive behavior, off-task behavior, and active student responding (ASR) of students at-risk for emotional or behavioral disorders (EBD). The discussion focuses on how these findings contribute to theory and practice. Finally, the chapter concludes with a discussion of limitations and suggestions for future research. The following research question was addressed: How does a choral responding procedure compare to an individual responding procedure and a mixture of choral and individual responding during group instruction in a general education classroom in terms of the disruptive behavior, off-task behavior, and active student responding of high-risk students?

Six second-grade students and six second-grade teachers participated in this study. The students ranged in age from 7 years 6 months to 8 years 2 months and were nominated by their teachers as having had high rates of disruptive behavior for at least a month. The extent of problem behavior was verified as target students were then rated for externalizing problem behavior using the Systematic Screening for Behavioral Disorders (SSBD) (Walker & Seversen, 1993). A single subject alternating treatments design was used to assess the effectiveness of three types of opportunities to respond (OTR) (individual responding, choral responding, and a mixture of individual and choral responding) on student disruptive behavior, off-task behavior, and active student responding (ASR).


There were two phases to this study: teacher training and implementation of the three types of OTR. The teacher-training phase consisted of two stages: information-sharing and practice until mastery occurred. After teacher training, a comparison of the three types of OTR began. Based on a randomized schedule, the teacher was instructed to implement either choral responding, mixed-mode responding, or individual responding. Teacher presentation occurred at a rate of five OTR per minute.

Overview of Findings

This study identifies several key findings. First, it appears that choral responding is a more effective instructional strategy than individual responding in terms of decreasing disruptive and off-task behavior. Five of six participants had lower mean rates of disruptive behavior and lower mean percentages of intervals of off-task behavior during choral responding than during individual responding, replicating and extending earlier research. Findings were similar to previous research (McKenzie & Henry, 1987; Sindelar et al., 1986; Wolery et al., 1992); however, the dependent variables of disruptive behavior and ASR were introduced in this study. In terms of disruptive behavior specifically, mixed responding appears to be a more effective instructional strategy than either choral or individual responding alone. Five of six students had lower mean rates of disruptive behavior during mixed responding than during choral or individual responding. However, differences between choral and mixed responding are less consistent for off-task behavior and ASR. Four students had fewer intervals of off-task behavior during mixed responding, and one student had fewer intervals of off-task behavior during choral responding. Similarly, three of six students had their highest mean percentages of ASR during mixed responding, while three students had their highest mean percentages during choral responding. Still, during individual responding, mean percentages of ASR were lowest for all six participants and mean percentages of off-task behavior were highest for all six participants.
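
The scheduling and pacing arrangement summarized above (a randomized order of conditions across sessions and a presentation rate of five per minute) can be illustrated with a short sketch. This is a minimal sketch under assumed parameters (session length, simple shuffling within blocks of three sessions); it is not the scheduling procedure actually used in the study.

```python
# Minimal sketch (assumed parameters, not the study's actual procedure):
# randomizing the order of the three OTR conditions across sessions for an
# alternating treatments design, and pacing trials at 5 presentations/minute.
import random

CONDITIONS = ["individual", "choral", "mixed"]

def session_schedule(n_blocks=7, seed=None):
    """Session-by-session condition order; each block of three sessions
    contains all three conditions in a freshly shuffled order."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = CONDITIONS[:]
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

def trial_times(session_minutes=8, rate_per_minute=5):
    """Trial onset times (in seconds) for one session at the target OTR rate."""
    interval = 60 / rate_per_minute          # one presentation every 12 s
    n_trials = session_minutes * rate_per_minute
    return [round(i * interval) for i in range(n_trials)]

print(session_schedule(n_blocks=2, seed=1))
print(trial_times()[:5])   # first five trial onsets: 0, 12, 24, 36, 48 s
```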


Disruptive Behavior

Several conclusions from this study are especially clear. First, five of six participants demonstrated lower rates of disruptive behavior per minute during choral responding in comparison to individual responding. This finding extends earlier research on the effectiveness of choral over individual responding (McKenzie & Henry, 1987; Sindelar et al., 1986; Wolery et al., 1992), which had focused more on academic work. While the Sindelar et al. study found that 11 students with mild disabilities learned slightly more sight words during choral responding than during individual responding, the current study found that choral responding produced lower rates of problem behaviors compared to individual responding.

Second, for five of six participants, the mean rate of disruptive behavior was lowest during mixed responding in comparison to individual and choral responding. This finding supports Stevens and Rosenshine's (1986) strong recommendation for the use of mixed responding (70% choral, 30% individual). These authors suggested that target students benefit from frequent practice with choral responding, but that teachers could gain information on individual performance during individual turns. The findings from this study support the use of mixed responding to gain information on target students' individual behavioral performance.

During mixed responding targeted students received 31 OTR, and during choral responding they received 37 OTR. Results indicate that five of six students had lower rates of disruptive behavior with just 31 OTR per eight-minute session during mixed responding, in contrast to 37 during choral responding. This finding supports previous research by Ferkis et al. (1997), who found that repeated practice while simply giving students more OTR (at the end of a learning trial) did not necessarily produce the highest number of cumulative words mastered and required more instructional time.


Based on the above results, mixed responding appears to be a more effective and efficient instructional strategy (in comparison to individual and choral responding) for reducing rates of disruptive behavior.

Off-Task Behavior

Another notable finding from this study is that five of six participants had lower mean percentages of intervals of off-task behavior during choral responding than during individual responding. This finding again extends earlier research by Sindelar et al. (1986) and McKenzie and Henry (1979), who found that more students were off-task during individual than group responding. In the McKenzie and Henry study, students were observed off-task twice as much during individual responding as during group responding. In comparison, five of six participants in the current study had their lowest mean percentage of intervals of off-task behavior during mixed responding. The group mean for intervals of off-task behavior during mixed responding was 18.8%, the group mean for off-task behavior during choral responding was 26.9%, and the group mean for off-task behavior during individual responding was 42.0%. Given the criterion of 90% for student on-task behavior set by the Council for Exceptional Children (CEC, 1987), only the mixed responding condition (81.2% on-task) somewhat approached CEC standards. Nevertheless, this finding lends some support to Stevens and Rosenshine's (1986) recommendation for the use of mixed responding (i.e., monitoring individual performance).

However, for one participant (Amber), the mean percentage of off-task behavior was very slightly lower during choral responding (23.33%) than during mixed responding (24.30%), and highest during individual responding (47.50%). One possible explanation for Amber's lower rate of off-task behavior during choral and mixed responding is that teacher feedback provided a stronger reinforcer for her than for the other participants. For example, after each choral response the teacher provided feedback to the entire class on whether the response was correct (i.e., "yes, that is correct" or "good answer").


As a result, Amber may have been more attentive and engaged so that she could respond to the next question. Her data indicate that, among all participants, she had the highest percentage of ASR and correct responses during choral responding (ASR, M = 96.38%, range = 94.87%–100.00%; correct responses, M = 93.31%, range = 90.00%–94.87%). Therefore, Amber had a high probability of responding to the sight word cards (i.e., responding verbally after each teacher question) and a low probability of being off-task (i.e., emitting incompatible behaviors such as looking around the room). Despite Amber's results, these findings generally support mixed responding as the most effective and efficient instructional strategy in reducing off-task behavior.

Active Student Responding (ASR)

Although percentages of ASR during individual responding were lowest for all six participants, results are less clear between choral and mixed responding in terms of effects on ASR. For example, three of six students had their highest mean percentages of ASR during mixed responding (M = 90.7%) rather than choral responding (M = 82.3%), while three students had their highest mean percentages of ASR during choral responding rather than mixed responding (M = 88.5% and M = 78.2%, respectively). In light of recommendations by CEC (1987) for review of previous material, these percentages exceed or approach the 85% criterion for student correct responses (ASR). However, the group mean for ASR during individual responding was merely 59.1%, well below the criterion set by CEC. In addition to intersubject variability and differences in mean percentages of ASR across participants, variability in data paths exists within subjects. For example, a visual analysis of the data paths during choral and mixed responding indicates several overlapping data points for each participant. The highest rates of ASR during choral responding suggest that, for some students, increased responding is most likely due to increased OTR and/or substantive teacher interaction (Sindelar et al., 1986).


That is, during choral responding, students were given more OTR and thus responded more to teacher questions; as a consequence, teachers provided more feedback/reinforcement on student responses, which increased the probability of future student responses.

For students who had the highest percentage of ASR during mixed responding, there is at least one possible explanation. Perhaps the number of exposures to questions was a more critical variable than the number of OTR and, thus, observational learning may have occurred (Skinner & Shapiro, 1989; Sterling et al., 1997; Wolery et al., 1992). For example, during mixed responding the number of times the target student observed other students responding was approximately eight per 8-minute session, whereas during choral responding the target student did not observe other students responding. In spite of this, these findings support previous research showing that instructional strategies that produce high rates of ASR are superior to those that produce low ASR in terms of decreasing disruptive and off-task behavior (Skinner et al., 1991).

Other Considerations

Although not evaluated as a dependent measure due to methodological considerations (i.e., unequal opportunities for correct responses across the three types of OTR), anecdotal data were collected on participants' correct responses because of the importance this variable receives in the literature. The anecdotal data suggest that all six participants responded correctly to OTR in the individual responding condition; however, no conclusive differences were found in rate of correct responses between choral and mixed responding. In the future, researchers may compare individual, choral, and mixed responding using a design wherein the number of exposures is not held constant and teachers are encouraged to complete as many trials as possible in each condition and each session (Wolery et al., 1992).


Social Validity

Social validity data reveal that the six teachers felt the study did not disrupt their classroom environment and that the training session was very helpful. All six teachers stated that they currently utilized individual responding and indicated that choral responding was easy to implement, supporting earlier research wherein teachers provided similar feedback (Sainato et al., 1987). Four of six teachers commented that mixed responding was the most difficult type of OTR to implement because they had to read a randomized list. Instead, these teachers endorsed approximating the 70% choral to 30% individual ratio from memory, indicating the acceptability of mixed responding as a teaching strategy (Schwartz & Baer, 1991).

Teachers' Perceived Effectiveness of the Three Types of OTR

Interestingly, most teachers' perceptions of the effects of the three types of OTR on the dependent variables were not confirmed by the data. For example, among the five teachers whose target students had their lowest rate of disruptive behavior during mixed responding, only Monty's teacher, Mr. Clinton, had noticed decreases in disruptive behavior after implementing the mixed responding procedure; the other four teachers believed choral responding produced the largest effect. Three teachers (Mrs. Pence/Frank, Mrs. Hill/D'Andy, Mr. Clinton/Monty) observed that the percentage of off-task behavior was lowest during mixed responding, while Mats's teacher (Ms. Orwell) did not notice the positive effect of choral responding on off-task behavior. She did, however, observe a very noticeable increase in Mats's ASR during choral responding after implementation. Frank's and Monty's teachers (Mrs. Pence and Mr. Clinton) observed that ASR was highest during mixed responding, while Mrs. Hill did not observe that the same was true for D'Andy. Amber's teacher, Ms. Mallory, responded that she did not observe any positive effects from any condition on any of the dependent variables.


Teo's teacher, Mrs. Simpson, stated that his behavior during the study approximated his normal classroom behavior and that the three types of OTR did not have an effect. The fact that the teachers did not reliably discern the differential effects of the three different teaching strategies makes a strong case for using data collection and objective criteria to make decisions about student classroom behavior (Witt, VanDerHyeden, & Gilbertson, 2004).

Teachers' Likelihood of Using the Intervention in the Future

Teachers' likelihood of using the various responding strategies, particularly choral and mixed responding, in the future was not strongly predictable. First, all six teachers stated that they currently used individual responding, but at a lower rate than required in the study condition. The mean teacher response in favor of using choral responding in the future was only slightly higher than the mean response for using mixed responding, and the means indicate that teachers may be slightly less than somewhat likely to use both types of responding in the future. However, two teachers reported that they would be very likely to use mixed responding in the future. Mats's teacher, Ms. Orwell, commented that mixed responding had an element of surprise because students did not know whether they would be called upon individually until the very last second. After a visual inspection of the data, Frank's teacher, Mrs. Pence, stated that she would be very likely to use mixed responding in the future. Implementing increased rates of OTR that fit within the details of day-to-day classroom instruction and that do not radically alter teachers' curriculum are a few ways researchers can encourage teachers to maintain evidence-based practices in their classrooms (Gersten, Vaughn, Deshler, & Schiller, 1997).

Interpretation of Findings

One of the study's findings, the lack of differential effects across the three types of OTR on disruptive behavior and off-task behavior for one student (Teo), across choral and mixed responding for off-task behavior for one student (Amber), and across individual and choral responding for off-task behavior for one student (D'Andy), has several implications and deserves further discussion.


First, the results indicate that for Teo, the mean rate of disruptive behavior was approximately equal across the three types of OTR (individual responding = 1.52/min; choral responding = 1.65/min; mixed responding = 1.61/min). Second, there is a great deal of variability in the data for disruptive behavior during choral responding. Teo's data for disruptive and off-task behavior among the three conditions show a large amount of variability and a great deal of overlap. The high rates of Teo's disruptive behavior and off-task behavior may indicate that the instructional intervention of mixed or choral responding was not powerful enough to decrease his disruptive and off-task behavior. A characteristic of Teo (impulsivity) may have prevented the effectiveness of the intervention. In addition, several undetected variables in the environment (teacher attention; peer attention) may have influenced the stability of his behavior (Sidman, 1960). For example, incidental observations indicate that during the teacher feedback procedure, Teo talked with a peer sitting next to him and the peer responded. Even though there were no clear effects, data collection was stopped for Teo because stability in the data could not be obtained. Even so, the findings from the other participants support earlier research indicating that choral/unison responding and increased rates of OTR decrease rates of disruptive behavior (McKenzie & Henry, 1979; Sutherland et al., 2003; West & Sloane, 1986).

Teo's disruptive and off-task behavior may also have been altered by the presence of setting factors such as peer conflicts before entering the classroom (Davis & Fox, 1999). Setting factors, those biological and environmental components (e.g., headaches, fighting with peers) that in a given context affect reinforcement contingencies, could also provide a possible explanation for variability in the data. For example, Teo suffered from migraine headaches, and this was not discovered until halfway through the study.


Thus, migraines (or lack thereof) could have set the occasion for Teo to be more or less disruptive (depending on their effects on his behavior) during any of the three types of responding. In addition, teacher reports indicated that at times Teo had conflicts with his peers during transition time before entering the classroom for language arts instruction, which also may have served a setting factor function. Another likely explanation for Teo's behavior could be the teacher's lack of effective consequences for his behavior. His teacher informally reported that she could not implement and follow up on negative consequences for his disruptive and off-task behavior because Teo was in her class for only an hour and a half per day and she was not able to communicate with his homeroom teacher. Unfortunately, no systematic data were collected to evaluate the potential influence of these factors on Teo's behavior.

In addition to Teo's disruptive and off-task behavior, there was a lack of differential effects across two types of OTR on the off-task behavior of two students (D'Andy and Amber). For example, D'Andy's data indicate overlap between individual and choral responding across the last two data points, while Amber's data show overlap between choral and mixed responding across the last two data points. Possible explanations for D'Andy's data could also be provided by setting factors (sleep deprivation, fighting with peers), ease of distractibility and competing stimuli in the classroom, or lack of impulse control (informal reports indicated he had Attention Deficit Hyperactivity Disorder [ADHD]-like symptoms) (Koegel et al., 1980; Skinner et al., 1994).

Possible explanations for the overlap between choral and mixed responding in the data paths of Amber's off-task behavior could include her self-stimulatory behavior, short attention span, and ease of distractibility (Koegel et al.).


For example, incidental observations suggested that during mixed responding, when other students had individual turns, Amber looked at and twirled her hair, stared out the window, or attended to any competing stimuli that occurred in the classroom (i.e., students adjusting their seating position, squeaky chairs, etc.). Informal reports by her teacher confirmed that the above behaviors occurred frequently, but also that she had not been aware of Amber having a history of seizures or taking medications for ADHD.

Implications for Practice

The current study has practical implications for many students because it provides additional evidence of the importance of using choral responding during large group instruction. These findings also lend initial support for the use of mixed responding during large group instruction in a general education setting. However, before implementing the mixed responding procedure, teachers could consider that mixed responding may be difficult to implement, while other teachers may prefer a less noisy instructional procedure such as individual responding. Furthermore, for some students who lack impulse control, choral responding may increase levels of excitement and off-task behavior, and therefore precorrection strategies (i.e., reminding students to remain quiet after each response) may need to be implemented before utilizing choral responding.

The long-term benefits of using a systematic questioning strategy may outweigh the initial time and effort involved to effectively implement mixed or choral responding. Some of the initial costs are using classroom management skills (reminding students of classroom rules, using inside voices) and learning to use wait time and feedback procedures. In general education classrooms, teachers are often compelled to instruct large numbers of students with considerable skill deficiencies, and may acknowledge a limited amount of available instruction time to reverse the academic deficiencies in their students (Barbetta & Heward, 1993). Depending on group size, students can respond up to three or four times more during choral responding than during individual responding (Sindelar et al., 1986).


Because previous research shows a relationship between high rates of OTR and ASR, and because ASR may facilitate student learning, choral or mixed responding may be the instructional strategy best suited to remedy skill deficiencies in students with academic deficits or learning disabilities. For example, choral and mixed responding allow teachers to monitor student understanding and gain immediate feedback during guided practice for all, if not most, students in a classroom. When implemented over a period of time, choral and mixed responding could allow teachers to informally assess areas of needed improvement for targeted students (Barbetta & Heward; Sterling et al., 1997). In addition, as a general practice, teachers could use mixed and choral responding to reduce the amount of time students passively attend during instruction (Sterling et al.).

Some additional implications for teachers are evident in the literature. The positive results on ASR associated with the use of choral and mixed responding in comparison to individual responding support earlier findings of Sainato et al. (1987), indicating that the instructional behavior of the teacher may also be a critical factor leading to positive student outcomes across various settings, subjects, and grade levels. Furthermore, teachers can implement group responding with a large class size and reduce the amount of transition time needed to implement smaller instructional grouping formats (Sterling et al., 1997). Parents also could incorporate high rates of OTR during home instruction or when providing assistance with their children's homework. Finally, paraeducators could be trained in the use of group responding and implement these strategies during intensive small group instruction (Sterling et al.).

Limitations

Although mixed responding appeared to be more effective in reducing disruptive behavior than choral and individual responding, a few limitations may temper the power of the statements that can be made as a result of this study. First, as is inherent in all single subject research designs, the small sample size limits the generalizability of the findings.


Thus, generalization to other academic activities and other settings, or to students of other ages, grades, genders, or learning histories, requires systematic replication (Kazdin, 1982).

Second, the procedures and the study's design did not allow for a comparison of correct responses across the three conditions. This limitation is particularly important because the percentage of correct responses is considered a significant dimension of effective instructional practice (Gunter et al., 2004). However, an anecdotal report of correct responses between choral and mixed responding was inconclusive.

Third, there are several overlapping data points among the dependent variables for three participants (Teo, Amber, and Mats), particularly with ASR. For these participants, choral responding resulted in a slightly higher mean percentage of ASR than mixed responding. In contrast, for Frank, D'Andy, and Monty, mixed responding resulted in a slightly higher mean percentage of ASR than choral responding. Thus, it is difficult to determine which instructional strategy is most effective in increasing active student responding. The fact that Teo, Amber, and Mats had a slightly higher rate of ASR during choral responding than mixed responding suggests that OTR and teacher feedback (37 of each during choral responding and 31 of each during mixed responding) may have provided prompts and reinforcers to help these students stay on-task when they might otherwise not attend because they have a short attention span or are easily distracted (Koegel et al., 1989; Skinner et al., 1994). Furthermore, these students may not have been motivated to pay attention to the individual turns during mixed responding. During mixed responding, teachers could therefore increase the number of individual turns directed toward the target student (from 3 to 5) and/or provide precorrection strategies and/or praise for the target student while attending to other students' answers during individual turns.
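
The overlap described here was judged visually. As a rough supplement, overlap between two conditions' data paths can also be counted numerically. The following is a minimal sketch with made-up data values, counting how many points from one condition fall within the range of another; this overlap count is offered only as an illustration and is not an analysis reported in this study.

```python
# Minimal sketch with made-up data: counting overlapping data points between
# two conditions' data paths, as a rough numerical supplement to the visual
# analysis of overlap described above. Not an analysis reported in the study.
def overlap_count(series_a, series_b):
    """Number of points in series_a that fall within the min-max range of series_b."""
    lo, hi = min(series_b), max(series_b)
    return sum(lo <= value <= hi for value in series_a)

# Hypothetical ASR percentages for one participant under two conditions.
choral = [82.6, 88.0, 93.5, 96.2, 100.0]
mixed = [72.7, 80.1, 85.3, 90.4, 94.2]

print(overlap_count(choral, mixed), "choral points overlap the mixed range")
print(overlap_count(mixed, choral), "mixed points overlap the choral range")
```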


Fourth, data collection was stopped before a clear data path was established for the dependent variables of off-task behavior and ASR. The decision to end data collection was based on the implementation of the intervention in the applied setting and the concern that continued data collection would be cumbersome and cause the experiment to require too many sessions to complete (Cooper, Heron, & Heward, 1987). While stability was achieved in the primary dependent variable of disruption, some of the data paths for some of the participants in the secondary dependent variables had not reached stability, and it is difficult to establish a functional relation between the three types of OTR and those variables. It is also difficult to establish whether there was covariation between the primary and secondary variables.

Fifth, teacher implementation of contingent consequences (i.e., use of rewards, teacher attention, punishment, and response cost activities such as moving a card or verbal warnings for non-participation) outside of the learning trial (teacher question, student response, and teacher feedback) was not recorded, and it is impossible to infer what effects, if any, on the dependent measures might have been demonstrated (Carnine, 1976). Although informal observations noted that teachers used positive reinforcement and punishment very little throughout the study, no formal observations were used to collect these data. Therefore, the extent of the effect of teacher use of individual attention or extinction on the outcomes of the dependent variables is not known. For example, teacher attention may have affected the percentage of intervals of off-task behavior. Skinner et al. (1994) noted a similar limitation and reported in their study that tangible reinforcers and individual attention might have been functionally related to very high rates of attention to tasks and possibly caused students to learn at their maximum levels.


Sixth, the lack of a business-as-usual condition prohibits a comparison to student baseline rates on the dependent variables, and therefore the extent of improvement in student academic and social behavior cannot be determined.

Finally, maintenance data were not collected after the intervention phase of the study. The primary experimenter had been in the first three classrooms for more than five weeks and believed that the teachers were tiring of the intervention and were not interested in maintenance data. Therefore, it is not possible to determine whether teachers continued to use choral or mixed responding (social validity assessments showed that teachers utilized individual responding before the onset of the study). Furthermore, it is not known whether students sustained improvements after the conclusion of the study. In the future, a maintenance phase could be built into the design of the study, and researchers could determine whether teachers were implementing choral or mixed responding at a rate of 5 OTR per minute.

Implications for Future Research

The findings from this study demonstrate a functional relation between mixed and choral responding, in comparison to individual responding, and the disruptive and off-task behavior of second grade students at risk for emotional and behavioral disorders (EBD). Furthermore, the findings replicate earlier research on the effectiveness of choral responding in terms of reducing disruptive and off-task behavior and increasing ASR. As a logical next step, researchers should compare choral responding and other ratios of mixed responding: (a) with students of different ages and across various subject areas such as math and science (Carnine, 1976), (b) across sessions of more than 8 minutes (Sainato, 1987), and (c) with children identified with various learning disabilities or with autism (Koegel et al., 1980). These extensions would help establish and verify the conditions under which varying types of responding are more effective and efficient.


In addition, future research would do well to include summative assessments at the end of the study to measure the impact of the three types of OTR on individual student learning. For example, researchers could examine the influence of the three types of OTR on sight word acquisition and then measure increases in reading comprehension or sight word vocabulary (Skinner & Shapiro, 1989). Future research should also examine the relationship between the three types of OTR, teachers' use of praise, and student disruptive and off-task behavior (Sutherland et al., 2003). In addition, researchers could examine the effects of the three kinds of OTR and praise on teachers' use of negative consequences (punishment, office referrals, time out, etc.) toward target students (Gunter et al., 1994). This is particularly important because students with EBD can be part of numerous confrontations in the classroom, interrupt the flow of instruction, and affect the behaviors of other students, creating a chaotic environment for their teachers and all students in the classroom (Sutherland et al., 2002).

One potential concern with the results of this study is the lack of clear effects of any type of responding on the disruptive and off-task behavior of Teo. Because challenging behaviors are often predictable responses to antecedent and consequent events occurring in the environment, future research may use functional assessments to gather information on the antecedent and consequent events that are associated with the occurrence of challenging behavior (Scott & Kamps, 2007). That is, even the most powerful strategy or instructional method is unlikely to be effective with every individual student. The idiosyncrasies of individual student preferences and needs are best determined in an individual manner. Functional behavior assessment is one method of assessing how the environment may interact with student behavior and of suggesting effective individualized strategies.


Researchers should continue to investigate an optimal rate of OTR in terms of the percentage of correct responses and error rates (West & Sloane, 1986). For example, increasing the pace of instruction may not be desirable for all students. Some students with skill deficits may need adequate wait time to formulate responses during a fast-paced learning trial (Skinner et al., 1994). In addition, researchers could investigate procedures used to decrease disruptive behavior and increase on-task behavior and ASR during slower-paced instruction, where some students may engage in high rates of disruptive behavior (i.e., during individual responding) (Skinner et al.).

Summary

Previous research has compared individual and choral responding on the acquisition of sight words among students with mild and moderate disabilities (Sindelar et al., 1986; Wolery et al., 1992). The present study extended the outcomes of this research by comparing individual and choral responding with mixed responding on the academic and social behavior of students at-risk for EBD during group instruction. As with previous studies, results showed a positive impact of choral responding in comparison to individual responding on the disruptive and off-task behavior of students identified as at-risk for EBD. Furthermore, five of the six participants had lower rates of disruptive behavior during mixed responding in comparison to individual and choral responding, while five of the six participants had lower intervals of off-task behavior during mixed and choral responding than during individual responding. In addition, all six participants had higher rates of ASR during choral and mixed responding in comparison to individual responding. Although positive results were found across five participants, for one participant there was a lack of clear results on his disruptive and off-task behavior among the three types of OTR.


Future research could use functional behavior assessment to gather information on the antecedent and consequent events that are associated with the occurrence of challenging behaviors that are not responsive to effective instructional practices. The current study adds to both research and practice on instructional strategies that reduce disruptive and off-task behavior and increase ASR for students identified as at-risk for EBD.


APPENDIX A
SAMPLE LESSON TRIAL

Session Duration: 8 minutes
Setting: General education classroom
Materials Needed: Sight word cards
Participants Present: Entire class and targeted student

Before the study, the teacher selected words from previous stories. The words were divided into three categories: high-frequency words from five stories in the basal reader, vocabulary words from stories previously read, and names of states.

Choral responding mode

Step one: The teacher will explain the expectations, procedures, and rules for the choral responding condition. For example, the teacher will say, "After I show you the card, I will cue the class by saying 'What word?' Then I want you to say the word together."

Step two: The teacher will show a sight word card to the class, count silently for three seconds, read the definition, and then say, "What word?"

Step three: The teacher will provide feedback on whether the answer was correct or incorrect by saying, "Yes, that is correct," or "No, the correct word is _____."

Step four: The teacher will select another sight word card and begin the next learning trial.

Individual responding mode

Step one: The teacher will explain the expectations, procedures, and rules for the individual responding condition. For example, the teacher will say, "Today I will call on one student at a time. After I show you the card, I will say, 'Who can tell me the word?'"

Step two: The teacher will show a sight word card to the class, count silently for three seconds, read the definition, and then say, "Who can tell me the word?"


Step three: The teacher will provide feedback on whether the answer was correct or incorrect by saying, "Yes, that is correct," or "No, the correct word is _____."

Step four: The teacher will select another sight word card and begin the next learning trial.

Combination of individual and choral responding

In this condition the teacher will use the choral responding mode 70% of the time and the individual responding mode 30% of the time. The procedure follows.

Step one: The teacher will explain the expectations, procedures, and rules for each response condition, choral or individual. For individual responding the teacher will say, "After I show you the card and read the definition, I will say, 'Who can tell me what word?' I want you to raise your hand, and if you are quiet you will have a chance to be called on." For choral responding, the teacher will say, "This is for everyone."

Step two: The teacher will show a sight word card to the class, read the definition, and say either "Who can tell me what word?" (individual) or "Everyone." (choral).

Step three: The teacher will count silently for three seconds, randomly select from a list of students, and call on that student. However, during this condition the teacher will call on the targeted student three times.

Step four: The teacher will provide feedback on whether the answer was correct or incorrect by saying, "Yes, that is correct," or "No, the correct word is _____."

Step five: The teacher will select another sight word card and begin the next learning trial.
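
Teachers in the combination condition read from a randomized list of choral and individual turns. The following is a minimal sketch of one hypothetical way such a list could be generated, assuming roughly 70% choral trials and 30% individual trials with three individual turns reserved for the targeted student; the roster names and trial count are placeholders, and this is not the list-construction procedure actually used in the study.

```python
# Minimal sketch (assumptions, not the study's actual procedure): building a
# randomized trial list for the mixed condition with ~70% choral and ~30%
# individual turns, three of which go to the targeted student.
import random

def mixed_trial_list(n_trials=40, target_turns=3, other_students=None, seed=None):
    rng = random.Random(seed)
    other_students = other_students or ["student"]   # placeholder roster
    n_individual = round(0.30 * n_trials)            # ~30% individual turns
    n_choral = n_trials - n_individual

    individual = ["individual: TARGET"] * target_turns
    individual += ["individual: " + rng.choice(other_students)
                   for _ in range(n_individual - target_turns)]
    trials = ["choral"] * n_choral + individual
    rng.shuffle(trials)                               # randomize the reading order
    return trials

roster = ["Ana", "Ben", "Cara", "Dev"]                # hypothetical classmates
for i, trial in enumerate(mixed_trial_list(n_trials=10, seed=7, other_students=roster), 1):
    print(i, trial)
```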


APPENDIX B
CODING MANUAL

Opportunity to respond (OTR): An OTR (choral responding) will be recorded when the teacher asks an academic question to the entire group that requires a specific response. An OTR (individual responding) will be recorded when the teacher asks an academic question to one student that requires a specific response. The question must seek a specific response that is related to the academic subject area being observed. An example of an OTR would be "What is this word?" during reading class. When the teacher repeats the same OTR ("What is this word?"), it only counts once.

1. Examples:
o What is this word?
o Please say the word on the flash card.
o OK, everyone, what is this word? (teacher is pointing to the sight word/flash card)
o Timmy, what is this word? (individual OTR)
o Who can tell me what this word is? (This example is a question for one individual to respond and does not reflect choral responding.)

2. Non-examples:
o Do you think these words are helpful?
o Who finished their homework last night?
o What did we do yesterday?
o Everybody write this word.
o Teacher asks, "What does this word represent?" Students raise their hands, but they do not receive an opportunity to provide an answer (e.g., the teacher asks another question immediately).
o Copy this word down.

Disruptive behavior: A disruptive behavior will be recorded when a student performs a behavior that interrupts, or has the potential to interrupt, the instruction in the classroom or the learning of another student.

1. Examples:
o Student calling out a response when the expectation is to raise a hand.


o Student is out of seat without permission. Permission means EXPLICIT permission from the teacher regarding the reason for the student being out of seat; the exception is the student going to sharpen a pencil, unless the teacher has restricted this activity.
o Student stands up at desk.
o In the middle of the lesson, the student gets up, walks up in front of the teacher, and/or asks a question.
o Student moves desk or has foot or feet on desk.
o Student leans over from his seat and talks to a classmate.
o Student is banging/tapping his hands or an object (e.g., pencil) on desk.
o Student is mocking the teacher as the teacher talks (imitating voice and/or body language).
o Student is singing at desk.
o Student is talking or tells a joke while the teacher is talking.
o Student responds so loudly that other students look at him/her and do not answer the teacher.
o Student uses profanities.

2. Non-examples:
o Student mumbling at desk.
o Student is looking at another student while that student is looking at the teacher.
o Student involuntarily sneezes or coughs.
o Student is slowly rocking back and forth in chair.

Active student response: An active student response (ASR) is defined as engaging in the behavior that was expected during that condition: (a) independent hand raising for individual responding, (b) responding in unison with the group for choral responding, or (c) a mixture of both in the mixed responding condition (Godfrey, Grisham-Brown, Schuster, & Hemmeter, 2003). An ASR will be recorded when the student raises his or her hand during wait time (during individual responding) or verbally responds within 1 second during choral responding. The verbal response and the number of fingers shown during syllable trials do not need to be accurate in order to record an ASR.

1. Examples:
o Student raises his hand to answer the teacher's question.
o Student verbally responds to the teacher's questions.
o Student responds, but incorrectly.
o Student raises 4 fingers to indicate a response for the number of syllables in the word "no."

2. Non-examples:


o Student does not raise his hand.
o Student does not verbally respond.
o Student does not show any fingers.

Correct response: A correct response will be recorded when the targeted student, along with other students, provides a specific, desired response to an OTR (choral responding*) or when the target student provides a specific, desired response to an OTR (individual responding) within 2 seconds of the teacher's prompt. An incorrect response will be recorded when the targeted student provides an answer that does not match the word on the flash card, answers after a 3-second time period, looks at other students' fingers during syllable practice, or verbally responds after the other students in the choral condition; these cases are also listed as non-examples below.

1. Examples:
o "Embarrassed" in response to "What is this word?" (the correct answer is embarrassed).
o "Tomato" in response to "OK everyone, what is this word?" (the word is tomato).

2. Non-examples:
o "Stop sign" in response to "What is this word?" (the correct answer is embarrassed).
o "Florida" in response to "OK everyone, what is this word?" (the word is tomato).
o Student answers correctly after more than 2 seconds.
o Student looks at another student's fingers, then raises his fingers (during syllable drill).
o Student responds after the class during choral responding (sight word practice).

*Note: Choral responding will be the method by which the teacher asks questions, but observers will observe only the targeted child.

No-response: A no-response will be recorded when the targeted student does not answer the question verbally in the choral and individual responding conditions or does not raise his hand to attempt to answer the question in the individual responding condition.

1. Examples:
o Teacher asks the entire class to chorally respond and the student does not verbally respond within 3 seconds.


o The student does not raise his hand to answer the question in the individual mode of responding.
o The student is called on to answer individually, even though he does not have his hand raised, and does not respond within 3 seconds.
On-task: On-task behavior will be coded when the target student is sitting in his or her seat and is actively oriented toward the teacher (i.e., verbally answering questions after the teacher's cue, making eye contact with the teacher or flash card, body facing the teacher). An on-task behavior will be recorded for the targeted student when he is observed to be on-task at the moment the time sample occurs. This behavior includes following directions given by the teacher and paying attention to the teacher. If the student being observed during the time sample does not meet the criteria for on-task behavior, the observer will record off-task (-) for that interval. The non-examples for on-task behavior are examples of off-task behavior.
1. Examples:
o Teacher talking, student looking at teacher.
o Student is answering the teacher's question.
o Student is sitting at his or her desk and looking at the teacher.
2. Non-examples:
o Teacher talking, student looking at the floor.
o Teacher talking, student looking at and/or talking to a peer.
o Student looking at material that is not related to the lesson.
o Student talking to him or herself.
o Student looking at observer when it is time to record.
o Student looking at desk when it is time to record.
o Student standing up at desk when tone sounds.
o Student drops a pencil or is touching any other object.
o Student is drawing while teacher talks.
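Read together, the correct-response, incorrect-response, and no-response definitions above amount to a small decision rule that maps what the observer saw (whether the targeted student responded, how quickly, and whether the answer matched the flash card) onto a single recording code. The sketch below illustrates one way that rule could be written out. It is not part of the study procedures; the function name and parameters are hypothetical, and coding a response given after the 3-second window as incorrect follows the incorrect-response definition above rather than the no-response definition.

def code_response(responded, latency_seconds, matches_card):
    """Illustrative only: return 'CR', 'ICR', or 'NR' for one opportunity to respond."""
    if not responded:
        return "NR"        # no verbal response (or, in individual responding, no hand raised)
    if matches_card and latency_seconds <= 2:
        return "CR"        # specific, desired response within 2 seconds of the prompt
    return "ICR"           # wrong word, or a correct word given after the 2-second window

# Hypothetical checks against the examples and non-examples above
print(code_response(True, 1.5, True))    # CR  ("embarrassed" within 2 seconds)
print(code_response(True, 2.5, True))    # ICR (correct word, but after 2 seconds)
print(code_response(False, 0.0, False))  # NR  (no attempt to answer)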


APPENDIX C
CODING SHEET

Teacher__________________ Observer_____________ Time __________ Date___________ Data point____________

Time-sample intervals:
1 (+) (-)    2 (+) (-)    3 (+) (-)    4 (+) (-)    5 (+) (-)    6 (+) (-)
7 (+) (-)    8 (+) (-)    9 (+) (-)   10 (+) (-)   11 (+) (-)   12 (+) (-)
13 (+) (-)  14 (+) (-)   15 (+) (-)   16 (+) (-)   17 (+) (-)   18 (+) (-)
19 (+) (-)  20 (+) (-)   21 (+) (-)   22 (+) (-)   23 (+) (-)   24 (+) (-)

Opportunity-to-respond trial codes (this row is repeated on the sheet for each trial presented during the observation):
CH   Ind   CR   NR   ICR

Legend:
+ = On-task
- = Off-task
___ = Disruptive behavior
CH = Choral responding
Ind = Individual responding
CR = Correct responding
ICR = Incorrect responding
NR = No response
ASR = Active student response
___ = Targeted student
___ = Error correction (2nd chance)
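Once a session has been coded on the sheet above, the interval marks and trial codes reduce to a handful of session-level numbers: the percentage of intervals scored on-task, the rate of opportunities to respond per minute, and the percentage of trials with correct responses and with active student responding. The sketch below shows that arithmetic. It is illustrative only; the list names, the example values, and the eight-minute session length are hypothetical and are not data from the study.

# Illustrative summary of one (hypothetical) completed coding sheet.
intervals = ["+", "+", "-", "+", "+", "-", "+", "+"]    # "+" = on-task, "-" = off-task
trials = [
    {"mode": "CH",  "outcome": "CR",  "asr": True},
    {"mode": "CH",  "outcome": "NR",  "asr": False},
    {"mode": "Ind", "outcome": "ICR", "asr": True},
    {"mode": "Ind", "outcome": "CR",  "asr": True},
]
session_minutes = 8                                      # assumed observation length

percent_on_task = 100 * intervals.count("+") / len(intervals)
otr_per_minute = len(trials) / session_minutes
percent_correct = 100 * sum(t["outcome"] == "CR" for t in trials) / len(trials)
percent_asr = 100 * sum(t["asr"] for t in trials) / len(trials)

print(f"On-task: {percent_on_task:.0f}% of intervals")             # 75%
print(f"OTR rate: {otr_per_minute:.2f} per minute")                # 0.50
print(f"Correct responding: {percent_correct:.0f}% of trials")     # 50%
print(f"Active student responding: {percent_asr:.0f}% of trials")  # 75%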


APPENDIX D
TREATMENT INTEGRITY SHEET


1 min   CU-WT-FK-NQ*  CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
2 min   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
3 min   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
4 min   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
5 min   CU-WT-FK-NQ   SYLLABLES   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
6 min   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
7 min   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ
8 min   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ   CU-WT-FK-NQ

CU = cue; WT = wait time; FK = feedback; NQ = next question
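A common way to summarize a completed integrity sheet is the percentage of planned teaching steps (cue, wait time, feedback, next question) that the observer actually recorded. The sketch below shows that calculation under the assumption, made only for illustration, that each observed cycle is stored as the set of steps the teacher completed; the variable names and example data are hypothetical rather than taken from the study.

# Illustrative treatment-integrity calculation for the Appendix D sheet.
PLANNED_STEPS = ("CU", "WT", "FK", "NQ")     # cue, wait time, feedback, next question

observed_cycles = [                           # hypothetical observation data
    {"CU", "WT", "FK", "NQ"},
    {"CU", "WT", "NQ"},                       # feedback omitted in this cycle
    {"CU", "WT", "FK", "NQ"},
]

steps_completed = sum(len(cycle & set(PLANNED_STEPS)) for cycle in observed_cycles)
steps_planned = len(PLANNED_STEPS) * len(observed_cycles)
integrity_percent = 100 * steps_completed / steps_planned

print(f"Treatment integrity: {integrity_percent:.0f}%")   # 11 of 12 steps = 92%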


APPENDIX E
SOCIAL VALIDITY FORM

Date: _______________________ Teacher: _________________________________________
School: __________________________________________ Age of Student: ___________
Grade of Student: 2nd    Intervention Type: ___________

Please complete the items below by circling the number under the question that best fits how you feel about the intervention.

1. Which intervention was most difficult to implement? (Individual, Choral, or Mixed)

2. How difficult was it to implement the intervention?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

3. How time-consuming was the implementation of the intervention?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

4. How helpful was the training session?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

5. How helpful to your teaching instruction was the intervention?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

6. After implementing the intervention, did you see a decrease in the student's off-task behavior?


Not at all   Somewhat   Fairly   Very
    1           2         3       4

7. After implementing the intervention, did you see a decrease in the student's disruptive behavior compared with what you normally observe?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

8. After implementing the intervention, did you see an increase in the student's active responses?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

9. How likely is it that you will use the intervention in the future?
Not at all   Somewhat   Fairly   Very
    1           2         3       4

(Questions 10-14 need to be answered only once.)

10. Number of years teaching? _____
11. Have you taken a class in classroom management? Yes/No
12. What types of grades does _________ make?
13. Has _________ been suspended? Yes/No. If so, how many times? ______
14. How many office discipline referrals (ODRs) has _________ received?


REFERENCES
Anderson, L. M., Evertson, C. M., & Brophy, J. E. (1979). An experimental study of effective teaching in first grade reading groups. The Elementary School Journal, 79, 193-223.
Armendariz, F., & Umbreit, J. (1999). Using active responding to reduce disruptive behavior in a general education classroom. Journal of Positive Behavior Interventions, 1, 152-158.
Barbetta, P. M., Heron, T. E., & Heward, W. L. (1993). Effects of active student response during error correction on the acquisition, maintenance, and generalization of sight words by students with developmental disabilities. Journal of Applied Behavior Analysis, 26, 111-119.
Barbetta, P. M., & Heward, W. L. (1993). Effects of active student response during error correction on the acquisition and maintenance of geography facts by elementary students with learning disabilities. Journal of Behavioral Education, 3, 217-233.
Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12, 199-210.
Brualdi, A. C. (1998). Classroom questions. Washington, DC: ERIC Clearinghouse on Assessment and Evaluation.
Carnine, D. W. (1976). Effects of two teacher-presentation rates on off-task behavior, answering correctly, and participation. Journal of Applied Behavior Analysis, 9, 199-206.
Carr, E. G., Taylor, J. C., & Robinson, S. (1991). The effects of severe behavior problems in children on the teaching behavior of adults. Journal of Applied Behavior Analysis, 24, 523-535.
Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus, OH: Merrill.
Council for Exceptional Children. (1987). Academy for effective instruction: Working with mildly handicapped students. Reston, VA: Author.
Davis, C. A., & Fox, J. (1999). Evaluating environmental arrangement as setting events: Review and implications for measurement. Journal of Behavioral Education, 9, 77-96.
Engelmann, S., & Colvin, G. (1983). Generalized compliance training: A direct-instruction program for managing severe behavior problems. Austin, TX: Pro-Ed.
Ferkis, M. A., Belfiore, P. J., & Skinner, C. H. (1997). The effects of response repetitions on sight word acquisition for students with mild disabilities. Journal of Behavioral Education, 7, 307-324.


Gall, M. (1970). The use of questions in teaching. Review of Educational Research, 40, 707-721.
Gall, M. (1984). Synthesis of research on teachers' questioning. Educational Leadership, 42, 40-47.
Gardner, R., Sainato, D. M., Cooper, J. O., Heron, T. E., Heward, W. L., Eshleman, J., & Grossi, T. A. (Eds.). (1994). Behavior analysis in education: Focus on measurably superior instruction. Pacific Grove, CA: Brooks/Cole.
Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997). What we know about using research findings: Implications for improving special education practice. Journal of Learning Disabilities, 30, 466-476.
Godfrey, S. A., Grisham-Brown, J., Schuster, J. W., & Hemmeter, M. L. (2003). The effects of three techniques on student participation with preschool children with attending problems. Education and Treatment of Children, 26, 255-272.
Good, T. L. (1970). Which pupils do teachers call on? The Elementary School Journal, 70, 190-198.
Good, T. L., & Brophy, J. E. (2003). Looking in classrooms (9th ed.). New York: Allyn & Bacon.
Greenwood, C. R., Delquadri, J., & Hall, R. V. (1984). Opportunity to respond and student academic achievement. In W. L. Heward, T. E. Heron, D. S. Hill, & Trap-Porter (Eds.), Focus on behavior analysis in education (pp. 58-88). Columbus, OH: Merrill.
Gresham, F. M., Gansle, K. A., & Noell, G. H. (1993). Treatment integrity in applied behavior analysis with children. Journal of Applied Behavior Analysis, 26, 257-263.
Gunter, P. L., & Coutinho, M. J. (1997). Negative reinforcement in classrooms: What we're beginning to learn. Teacher Education and Special Education, 20, 249-264.
Gunter, P. L., Hummel, J. H., & Conroy, M. A. (1998). Increasing correct academic responding: An effective intervention strategy to decrease behavior problems. Effective School Practices, 17, 55-62.
Gunter, P. L., Jack, S. L., Shores, R. E., Carrell, D. E., & Flowers, J. (1993). Lag sequential analysis as a tool for functional analysis of student disruptive behavior in classrooms. Journal of Emotional and Behavioral Disorders, 1, 138-148.
Gunter, P. L., Reffel, J. M., Barnett, C. A., Lee, J. M., & Patrick, J. (2004). Academic response rates in elementary-school classrooms. Education and Treatment of Children, 27, 105-113.


Gunter, P. L., Shores, R. E., Jack, S. L., Denny, R. K., & DePaepe, P. A. (1994). A case study of the effects of altering instructional interactions on the disruptive behavior of a child identified with severe behavior disorders. Education and Treatment of Children, 17, 435-444.
Gunter, P. L., Venn, M. L., Patrick, J., Miller, K. A., & Kelly, L. (2003). Efficacy of using momentary time samples to determine on-task behavior of students with emotional/behavioral disorders. Education and Treatment of Children, 26, 400-412.
Hall, R. V., Delquadri, J., Greenwood, C. R., & Thurston, L. (1982). The importance of opportunity to respond in children's academic success. In E. B. Edgar, N. G. Haring, J. R. Jenkins, & C. G. Pious (Eds.), Mentally handicapped children: Education and training. Baltimore: University Park Press.
Heward, W. L. (1994). Three low-tech strategies for increasing the frequency of active student response during group instruction. In R. Gardner, III, D. M. Sainato, J. O. Cooper, & T. E. Heron (Eds.), Behavior analysis in education: Focus on measurably superior instruction (pp. 283-320). Monterey, CA: Brooks/Cole.
Heward, W. L., Courson, F. H., & Narayan, J. S. (1989). Using choral responding to increase active student response. Teaching Exceptional Children, 21, 72-75.
Jones, G. M., & Gerig, T. M. (1994). Silent sixth-grade students: Characteristics, achievement, and teacher expectations. The Elementary School Journal, 95, 162-182.
Kauffman, J. M. (2005). Characteristics of children's behavior disorders (7th ed.). Columbus, OH: Merrill.
Kazdin, A. E. (1982). Single case research designs. New York: Oxford University Press.
Kennedy, C. H. (2005). Single-case designs for educational research. Boston, MA: Allyn & Bacon.
Koegel, R. L., Dunlap, G., & Dyer, K. (1980). Intertrial interval duration and learning in autistic children. Journal of Applied Behavior Analysis, 13, 91-99.
Lambert, M. C., Cartledge, G., Heward, W. L., & Lo, Y. (2006). Effects of response cards on disruptive behavior and academic responding during math lessons by fourth-grade urban students. Journal of Positive Behavior Interventions, 8, 88-99.
McKenzie, G. R., & Henry, M. (1979). Effects of testlike events on on-task behavior, test anxiety, and achievement in a classroom rule-learning task. Journal of Educational Psychology, 71, 370-374.


Miller, A. D., Hall, M. A., & Heward, W. L. (1995). Effects of sequential 1-minute time trials with and without inter-trial feedback and self-correction on general and special education students' fluency with math facts. Journal of Behavioral Education, 5, 319-345.
Nelson, J. R., & Roberts, M. L. (2002). Ongoing reciprocal teacher-student interactions involving disruptive behaviors in general education classrooms. Journal of Emotional & Behavioral Disorders, 8, 27-39.
Noell, G. H., Witt, J. C., Gilbertson, D. N., Rainer, D. D., & Freeland, J. T. (1997). Increasing teacher intervention implementation in general education settings through consultation and performance feedback. School Psychology Quarterly, 12, 77-88.
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., Koenig, J. L., Resetar, J. L., & Duhon, G. J. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87-106.
Randolph, J. L. (2007). Meta-analysis of the research on response cards: Effects on test achievement, quiz achievement, participation, and off-task behavior. Journal of Positive Behavior Interventions, 9, 113-128.
Redfield, D. L., & Rousseau, E. W. (1981). A meta-analysis of experimental research on teacher questioning behavior. Review of Educational Research, 51, 237-245.
Rosenshine, B. V. (1986). A synthesis of research on explicit teaching. Educational Leadership, 43, 60-69.
Sainato, D. M., Strain, P. S., & Lyon, S. R. (1987). Increasing academic responding of handicapped preschool children during group instruction. Journal of the Division for Early Childhood, 12, 23-30.
Samson, G. E., Strykowski, B., Weinstein, T., & Walberg, H. J. (1987). The effects of teacher questioning level on student achievement: A quantitative synthesis. Journal of Educational Research, 80, 290-295.
Schwartz, I. S., & Baer, D. M. (1991). Social-validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24, 189-204.
Sindelar, P. T., Bursuck, W. D., & Halle, J. W. (1986). The effects of two variations of teacher questioning on student performance. Education and Treatment of Children, 9, 56-66.
Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatment design for instructional research. Education and Treatment of Children, 8, 67-76.
Sitko, M. C., & Slemon, A. G. (1982). Developing teachers' questioning skills: The effects of delayed feedback. Canadian Journal of Education, 7, 109-121.


Skinner, C. H., Belfiore, P. J., Mace, H. W., William-Wilson, S., & Johns, G. A. (1997). Altering response topography to increase response efficiency and learning rates. School Psychology Quarterly, 12, 54-64.
Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing learning rates by increasing student response rates: A summary of research. School Psychology Quarterly, 11, 313-325.
Skinner, C. H., Ford, J. M., & Yunker, B. D. (1991). A comparison of instructional response requirements on the multiplication performance of behaviorally disordered students. Behavioral Disorders, 17, 56-65.
Skinner, C. H., Rhymer, K. N., & McDaniel, E. C. (2000). Naturalistic observation in educational settings. In E. S. Shapiro & T. R. Kratochwill (Eds.), Conducting school-based assessments of child and adolescent behavior (pp. 21-54). New York: Guilford.
Skinner, C. H., & Shapiro, E. S. (1989). A comparison of taped-words and drill interventions on reading fluency in adolescents with behavior disorders. Education and Treatment of Children, 12, 123-133.
Skinner, C. H., Smith, E. S., & McLean, J. E. (1994). The effects of intertrial interval duration on sight-word learning rates in children with behavioral disorders. Behavioral Disorders, 19, 98-107.
Stanley, S. O., & Greenwood, C. R. (1983). Assessing opportunity to respond in classroom environments through direct observation: How much opportunity to respond does the minority, disadvantaged student receive in school? Exceptional Children, 49, 370-373.
Sterling, R. M., Barbetta, P. M., Heward, W. L., & Heron, T. E. (1997). A comparison of active student response and on-task instruction on the acquisition and maintenance of health facts by fourth grade special education students. Journal of Behavioral Education, 7, 151-165.
Stevens, R., & Rosenshine, B. (1981). Advances in research and teaching. Exceptional Education Quarterly, 2, 1-9.
Sutherland, K. S., Alder, N., & Gunter, P. L. (2003). The effect of increased rates of opportunities to respond on the classroom behavior of students with emotional/behavioral disorders. Journal of Emotional and Behavioral Disorders, 11, 239-248.
Sutherland, K. S., & Wehby, J. H. (2001). Exploring the relationship between increased opportunities to respond to academic requests and the academic and behavioral outcomes of students with EBD: A review. Remedial and Special Education, 22, 113-121.
Sutherland, K. S., Wehby, J. H., & Yoder, P. J. (2002). Examination of the relationship between teacher praise and opportunities for students with EBD to respond to academic requests. Journal of Emotional and Behavioral Disorders, 10, 5-13.


Tawney, J. W., & Gast, D. L. (1984). Single subject research in special education. Columbus, OH: Merrill.
Van Acker, R., Grant, S. H., & Henry, D. (1996). Teacher and student behavior as a function of risk for aggression. Education and Treatment of Children, 19, 316-334.
Walker, H. M., & Severson, H. H. (1993). Systematic screening for behavior disorders. Longmont, CO: Sopris West.
West, R. P., & Sloane, H. N. (1986). Teacher presentation rate and point delivery rate: Effects on classroom disruption, performance accuracy, and response rate. Behavior Modification, 10, 267-286.
Winne, P. H. (1979). Experiments relating teachers' use of higher cognitive questions to student achievement. Review of Educational Research, 49, 13-49.
Witt, J. C., VanDerHeyden, A. M., & Gilbertson, D. (2004). Troubleshooting behavioral interventions: A systematic process for finding and eliminating problems. School Psychology Review, 33, 363-383.
Wolery, M., Ault, M. J., Doyle, P. M., Gast, D. L., & Griffin, A. M. (1992). Choral and individual responding: Identification of interactional effects. Education and Treatment of Children, 15, 289-309.


BIOGRAPHICAL SKETCH
Five years into my career as a school social worker, I was invited to attend trainings on Positive Behavior Interventions and Supports (PBIS). These trainings fascinated me because they incorporated concepts from the fields of systems thinking and applied behavior analysis. I became a PBIS coach for our elementary school and would report the information to our PBIS team; with their help, I applied what I had learned to our school environment. We implemented PBIS principles at the school-wide, classroom, and individual levels, and over a six-year period we saw a decline in aggressive behaviors on the playground, bullying, office referrals, in-school and out-of-school suspensions, and the overrepresentation of minorities in office discipline referrals.
Wanting to learn more about PBIS, and given the opportunity to study under an emotional and behavioral disorders grant, I applied to and was accepted into the Special Education program at the University of Florida in fall 2005. My current research interests include PBIS, functional behavior assessments, and the integration of instructional and behavioral interventions for students exhibiting behavioral difficulties. I have had my dissertation pilot study accepted for publication, and I have four additional peer-reviewed articles published or in press. I have been a presenter at several national conferences. After graduation, I plan to continue my current line of research while also teaching at the university level.