
Current Student Assessment Practices of High School Band Directors

Permanent Link: http://ufdc.ufl.edu/UFE0041920/00001

Material Information

Title: Current Student Assessment Practices of High School Band Directors
Physical Description: 1 online resource (138 p.)
Language: English
Creator: Lacognata, John
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2010

Subjects

Subjects / Keywords: assessment, band, director, grading, model, music, school, secondary, student
Music -- Dissertations, Academic -- UF
Genre: Music Education thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Measurement and assessment are becoming increasingly important to all music educators. The purpose of this study was to investigate the following questions: 1) in what specific ways are current high school band directors assessing students in their ensemble classes; 2) what are high school band directors' attitudes toward the assessment process; and 3) how can the results of this research contribute to the development of a student assessment model for bands? The subjects for this study were 454 high school band directors from across the United States. Results show that the main purpose of student assessment for high school band directors centered on providing their students and themselves with feedback concerning the instructional process in the classroom. Directors reported that performance skills were the most important criteria to assess in their students and the main influences of the assessment methods they use are their personal philosophy of assessment and available class time. Directors reported the best source of preparation for assessing their students came from their colleagues and that they are interested in finding new ways to assess their students. Directors suggest that an effective assessment model for band would be weighted: rehearsal attendance and contribution 34.95%; performance attendance and contribution 34.70%; and individual testing and evaluation 30.57%.
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by John Lacognata.
Thesis: Thesis (Ph.D.)--University of Florida, 2010.
Local: Adviser: Brophy, Timothy S.
Local: Co-adviser: Waybright, David A.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2010
System ID: UFE0041920:00001



CURRENT STUDENT ASSESSMENT PRACTICES
OF HIGH SCHOOL BAND DIRECTORS

By

JOHN P. LACOGNATA


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2010

© 2010 John P. LaCognata

This work is dedicated to my father, John J. LaCognata, my role model, mentor, and
best friend.

ACKNOWLEDGMENTS

First, I would like to thank the band directors who took time out of their busy

schedules to complete the survey and share their expertise. Without you this research

would not have been possible. I would also like to thank Sue Rarus, Director of

Information Resources and Publication and her colleagues at the National Association

for Music Education (MENC). Your assistance in administering the survey was greatly

appreciated.

I sincerely thank my graduate committee members Dr. Timothy Brophy, Dr.

David Waybright, Dr. Russell Robinson, and Dr. David Therriault for their guidance and

support throughout this entire process. It was a privilege to learn from you, an honor to

teach with you, and a pleasure to work for you. I want to thank my colleagues in the

Band Department at the University of Florida. Your camaraderie and friendship has

made the last three years memorable and enjoyable. I also would like to thank the

wonderful people in the College of Fine Arts and the faculty and students in the Music

Department. GO GATORS!

I express thanks to my present and former "band" colleagues across the country.

The importance of your work with and for students cannot be measured. I thank those

people who have had a positive impact on my musical life. Your willingness to share

your time and talents with me will continue to guide me. I would also like to thank Dan

Massoth and Ron Raup at MakeMusic Inc. for taking an interest in my research.

Finally, I thank my family and friends for their constant encouragement, support,

and love. It is through you that I find purpose for my life. Lastly, I am most grateful to

my wife Leigh, and my children John and Alexa. I am blessed to have you. Everything I

do, I do for you.

TABLE OF CONTENTS

                                                                             page

ACKNOWLEDGMENTS ............................................................... 4

LIST OF TABLES ................................................................ 8

LIST OF FIGURES ............................................................... 9

ABSTRACT ..................................................................... 10

CHAPTER

1 INTRODUCTION ............................................................... 12

    Examples ................................................................. 12
        Wisconsin ............................................................ 12
        California ........................................................... 14
        New York ............................................................. 14
        Washington ........................................................... 15
        Texas ................................................................ 16
    Observations ............................................................. 17
    Significance of the Problem .............................................. 19
    Purpose of the Study ..................................................... 23
    Delimitations ............................................................ 24

2 LITERATURE REVIEW .......................................................... 26

    Definition of Assessment ................................................. 26
    Philosophical Rationales of Assessment ................................... 27
        Rationalism .......................................................... 27
        Empiricism ........................................................... 28
        Pragmatism ........................................................... 29
    Assessment History ....................................................... 30
    Assessment in Education .................................................. 33
    Assessment in Music Education ............................................ 35
    Student Assessment in Music Education .................................... 38
    Assigning Grades to Band Students ........................................ 44
    Developing Music Assessment Models ....................................... 45
        Arts PROPEL .......................................................... 46
        Comprehensive Musicianship through Performance (CMP) ................. 47
        State Collaborative Assessment and Student Standards (SCASS) ......... 48
    Current Research ......................................................... 49
    Summary of Research: Study Implications .................................. 54

3 METHODOLOGY AND PROCEDURES ................................................. 56

    Research Method .......................................................... 56
    Subjects ................................................................. 56
    Procedures ............................................................... 57
    Data Collection .......................................................... 57
    Statistical Procedures ................................................... 59
    Pilot Study .............................................................. 60

4 RESULTS .................................................................... 64

    Background Information ................................................... 64
    Grading Information ...................................................... 67
    Assessment Philosophy .................................................... 68
    Assessment Information ................................................... 70
    Assessment Model ......................................................... 72

5 DISCUSSION AND CONCLUSIONS ................................................. 84

    Discussion of the Results ................................................ 84
        Background Information ............................................... 84
        Grading Information .................................................. 85
        Assessment Philosophy ................................................ 85
            Purpose .......................................................... 85
            Criteria ......................................................... 87
            Category ......................................................... 90
            Influence ........................................................ 90
            Preparation ...................................................... 91
        Assessment Information ............................................... 92
            Components ....................................................... 92
            Data collection .................................................. 93
            Characteristics .................................................. 94
            Reflection ....................................................... 95
        Assessment Model ..................................................... 96
    Conclusions .............................................................. 96
        Research Question 1: In What Specific Ways are Current High School
            Band Directors Assessing Students in Their Ensemble Classes? ..... 96
        Research Question 2: What are High School Band Directors' Attitudes
            toward the Assessment Process? ................................... 97
        Research Question 3: How Can the Results of this Research Contribute
            to the Development of a Student Assessment Model for Bands? ...... 97
            Rehearsal attendance and contribution ............................ 98
            Performance attendance and contribution .......................... 98
            Individual testing and evaluation ................................ 98
    Implications for Music Education ......................................... 99
        Purpose .............................................................. 99
        Criteria ............................................................. 99
        Preparation ......................................................... 101
        Data Collection ..................................................... 102
    Suggested Model ......................................................... 103
    Future Research ......................................................... 104

APPENDIX

A QUESTIONNAIRE ............................................................. 112

B PILOT STUDY RESULTS ....................................................... 121

    Demographic Information ................................................. 121
    Grading Information ..................................................... 123
    Assessment Information .................................................. 124

LIST OF REFERENCES .......................................................... 129

BIOGRAPHICAL SKETCH ......................................................... 137

LIST OF TABLES

Table                                                                        page

1-1   Summary of example assessment components and percentage assignment ..... 17

4-1   Importance of purposes of assessment .................................... 79

4-2   Criteria importance in the evaluation of band students .................. 80

4-3   Importance of assessment categories ..................................... 80

4-4   Factors influencing assessment methods .................................. 80

4-5   Assessment preparation .................................................. 81

4-6   Assessment components used with the assigned percentage ................. 81

4-7   Data collection procedures .............................................. 82

4-8   Materials used in performance-based tests ............................... 82

4-9   Importance of characteristics of assessment models ...................... 82

4-10  Agreement level with statements concerning assessment ................... 83

5-1   Assessment purposes including category ................................. 105

5-2   Assessment criteria including categories and national standard ......... 106

5-3   Factors influencing assessment methods including categories ............ 106

5-4   Preparation methods including category ................................. 107

5-5   Assessment components usage including category ......................... 107

5-6   Data collection procedures including category .......................... 108

5-7   Weighted component results compared to pilot study results ............. 108

5-8   Student assessment model: Stage one .................................... 108

5-9   Student assessment model: Stage two .................................... 109

LIST OF FIGURES

Figure                                                                       page

4-1   School type ............................................................. 73

4-2   School enrollment ....................................................... 73

4-3   Community type of school ................................................ 74

4-4   Socio-economic status of school community ............................... 74

4-5   Student enrollment in band program ...................................... 75

4-6   Student enrollment in concert band(s) ................................... 75

4-7   Concert bands per school ................................................ 76

4-8   Average number of students per concert band ............................. 76

4-9   Director's years of teaching experience ................................. 77

4-10  Director's years teaching at current school ............................. 77

4-11  Director's education level .............................................. 78

4-12  Grade types assigned .................................................... 78

4-13  Create a balanced assessment tool ....................................... 79

5-1   Current student assessment practices ................................... 110

5-2   Student assessment model ............................................... 111

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

CURRENT STUDENT ASSESSMENT PRACTICES
OF HIGH SCHOOL BAND DIRECTORS

By

John P. LaCognata

August 2010

Chair: Timothy S. Brophy
Major: Music Education

Measurement and assessment are becoming increasingly important to all music

educators. The purpose of this study was to investigate the following questions: 1) in

what specific ways are current high school band directors assessing students in their

ensemble classes; 2) what are high school band directors' attitudes toward the

assessment process; and 3) how can the results of this research contribute to the

development of a student assessment model for bands? The subjects for this study

were 454 high school band directors from across the United States.

Results show that the main purpose of student assessment for high school band

directors centered on providing their students and themselves with feedback concerning

the instructional process in the classroom. Directors reported that performance skills

were the most important criteria to assess in their students and the main influences of

the assessment methods they use are their personal philosophy of assessment and

available class time. Directors reported the best source of preparation for assessing

their students came from their colleagues and that they are interested in finding new

ways to assess their students. Directors suggest that an effective assessment model

for band would be weighted: rehearsal attendance and contribution 34.95%;

performance attendance and contribution 34.70%; and individual testing and evaluation

30.57%.

CHAPTER 1
INTRODUCTION

While many high school band directors do a very good job of teaching and

preparing ensembles for performances, very few have developed and implemented an

effective assessment tool to use when assigning grades for their students. Having a

way to assign student grades with proper rationale and supporting data is essential in

this age of school accountability and grade-point-minded students and parents. The

days of abstract grading systems or teacher-biased grade assignment are gone.

Instrumental music educators can no longer simply grade their students on attendance

and perceived effort or interest (Abeles, 1995; Pizer, 1990).

A Google™ search of the phrase "high school band grading policy" results in

approximately 460,000 hits. Consider the following five examples regarding current

grading policies listed in high school band handbooks found on various band web

pages. Attention should focus on the selection of specific assessment components, the

wording or explanation of each component, and the percentage each component is assigned

in the overall grading plan.

Examples

Wisconsin

* Attendance: 30%

Absences and tardiness disrupt the learning environment. Students need
regular day-to-day attendance and must be punctual to maintain a sense of
continuity in their program. Even one absence can affect the success and
educational outcome for the individual and the entire class on that day.
Music rehearsals are particularly difficult to make-up since the process is so
experiential. It is impossible to re-create what the other students
experienced the preceding day.

Attendance at all performances is critical for every student in the ensemble.
Every student in the band has a specific role in the group and an absence is
very noticeable. If you will miss a performance or rehearsal (outside of the
school day), please complete a Notice of Planned Absence form. If a
student's absence is because of involvement in another school activity, it is
your responsibility to notify the director of the activity via the Notice of
Planned Absence form. If an emergency situation arises, please notify the
director of the situation as soon as is feasible.

For each unexcused absence from a band performance, the final semester
grade will be reduced by one letter grade.

* Assessments: 30%

The number of assessments will vary each quarter. Each assessment will
receive points for the quality of work. Assessments will be given a due date
and it is the student's responsibility to see that the assessment is completed
on time. Work turned in late will be reduced by one letter grade for each day
that it is late. It is the student's responsibility to make up any missed
assessments.

Audio recordings may be submitted for playing tests. This can be done by
cassette tape, CD, DVD, mp3, wav or any other electronic format that is
compatible with the high school's technology capabilities. Difficulties with
technology do not relieve you of the responsibility to turn in assessments by
the due date.

Typical assessments might include, but are not limited to, any of the
following: playing tests (scales, excerpts from literature, etc.), chair
auditions, quizzes, and/or other written assignments.

* Participation by Effort, Attitude and Preparation: 30%

A positive attitude is the underlying ingredient necessary to the success of
each ensemble and in turn each member of that ensemble. The student is
expected to leave one's ego at the door and become a team player
approaching all new music and ideas with an open mind, seeing each as an
opportunity to learn. We must work together and encourage one another to
achieve success. Full and positive participation in every rehearsal and
performance is expected of every band student.

* Lessons/Sectionals: 10%

Many things are accomplished in individual or small group settings that
cannot be accomplished in full ensemble rehearsals. Therefore, attendance
is required at all scheduled lessons and sectionals.

(Wisconsin Lutheran High School Band, 2009)

California


Participation: 25%

The participation grade includes being prepared for class, having your
instrument, music, and pencil. Students are expected to actively participate
in class. Activities which don't allow students to participate daily will cause
a drop in the participation grade. If students are not allowed to participate
due to behavior, this will negatively affect their participation grade for the
day. Students who have regularly non-working instruments, and delay in
getting repairs, will have their participation grade lowered for the days
involved.

Performances: 25%

Students are expected to participate in scheduled performances, during or
outside of school hours. Very few reasons will be accepted for failing to
participate in a scheduled concert. As our concerts are scheduled WELL in
advance, ample planning is easy. If students are not in the required
uniform for a performance, they will not be allowed to participate, and will
receive a zero grade for that performance.

Playing and Written Tests: 50%

Students will be tested during band class on a regular basis. These grades
will be averaged and will account for half of the band grade. Tests will be
weighted evenly, whether announced or not announced ahead of time.
Missed tests must be made up, just as in any other class.

(Heritage School Band, 2008)

New York

Rehearsals: 34%

Preparedness: have all necessary equipment needed for class (i.e.,
instrument, music, pencil, marching band lyre/music) and be able to play to
the best of your ability. Behavior/Attitude: pay attention, listen to directions,
and do not disrupt class or rehearsals by talking or passing notes, etc., with
others in the band. Be on time.

Lessons: 34%

Attendance: the number of lessons attended will directly affect a student's
grade. Each lesson is important to attend when scheduled as students may
be assigned group assignments to be worked on together for an evaluation
each quarter. Preparedness: practice your assignments. I will be checking
band lockers to see if instruments are going home for practice on

weekends. Practice in school during study hall counts!! (6% of total grade)
Responsibility: if you have to miss a lesson (i.e., test/quiz in class, etc.),
let me know in advance and schedule a make-up lesson time. Make-up
lessons can receive a maximum score of 8 pts out of the possible 10 pts for
regularly scheduled lessons, unless you inform me of the missed lesson in
advance.

Events Attendance: 12%

Attend performances the band plays in! Every member is important to the
success of the performance. Wear proper attire.

Written/Playing Assignments: 12%

Complete them and hand them in on time. Your grade will be directly
affected by whether or not you do these assignments.

Final Project: 10%

(Northeastern Clinton Central School District, 2004-2005)

Washington

Individual Practice/Practice Sheets: 20%

In order for students to improve their playing skills they must practice on
their own. Without individual practice a student will simply not improve, nor
will they build up the physical and mental stamina needed to make it through
a concert. Students will receive Practice Sheets roughly every two weeks.
Students log their practice time (which parents verify with their initials) and
turn the sheets in to the band office by the due date. Practice time will be
posted in the grade book and calculated as follows:

(A) Exceeding Standard = 3 hours per week (1/2 hr. daily)
(B) Meeting Standard = 2 hours per week
(C) Approaching Standard = 1 hour per week
(F) Below Standard = 0 Practice Time

Performance Assessment: 20%

Performance Assessments (playing tests) will be done on assigned material
covered in class. Most playing tests involve a student being recorded in a
practice room, alone, playing through the assigned music. The teacher will
listen to and evaluate each student performance, and provide feedback
regarding their skills. In the Concert Band, Jazz Ensemble, and
Intermediate Band, some performance assessments may take place in the
classroom during rehearsal. Grades for performance assessments will be
calculated and posted using the scale above (95=A, 85=B, etc.).

Written Assessments: 10%


Written assessments will be given on material covered in class, such as
notation, rhythmic dictation, and music theory, terms/definitions, etc.
Written assessments will be graded and posted using the same scale as
above.

Daily Activity Assessment: 50%

Activity assessment will indicate a student's understanding and application
of proper rehearsal etiquette during daily rehearsals and related band
activities (concerts, assemblies, etc.). Proper rehearsal etiquette, simply
put, means proper rehearsal manners. This involves teamwork,
consideration, respect, LISTENING, etc., and is vital to a positive learning
environment in band. Activity Assessment will be calculated and posted
using the same scale as above.

(Todd Beamer High School, 2009)

Texas

Participation: 25%

The student will receive a grade for each before and after school sectional
and rehearsal during a grading period. The student will be on task and
focused during all rehearsals. The student will have instrument, music,
pencil, and supplies. The student will mark music and take notes as
needed. An unexcused absence from a before or after school rehearsal or
sectional will lower a student's participation average of the six weeks by 20
points. Tardies lower a student's participation average by 5 points.

Skills: 25%

The student will be expected to improve individual music skills. The
students' individual skill development will be evaluated through taped music
tests, individual playing tests, scale tests, and written tests. The student will
be evaluated on improvement of ensemble skills during daily rehearsals.

Fundamentals: 25%

The student will be expected to improve performance fundamentals. The
student will be evaluated for improvement of music fundamentals through
daily observation during the "basics" part of each rehearsal and during
sectionals. The student will be expected to demonstrate correct posture,
hand position, embouchure, air production, articulation and attentiveness as
monitored during rehearsals. The student will be expected to develop a
historical knowledge of the literature relative to his/her respective
instrument.

Performance: 25%


The student will receive a grade for each performance during a grading
period. Performances will be counted as major exams. The number of
performances will be determined by the performance calendar. If no public
performance occurs during a grading period, the performance grade will be
based upon informal classroom performances determined by the director.

(Lake Highlands Area Band Club, 2008)

Table 1-1. Summary of example assessment components and percentage assignment.
Example  Assessment component                  Assigned percentage
1        Attendance                                    30
         Assessments                                   30
         Participation                                 30
         Lessons/sectionals                            10
2        Participation                                 25
         Performances                                  25
         Playing/written tests                         50
3        Rehearsals                                    34
         Lessons                                       34
         Event attendance                              12
         Written/playing assignments                   12
         Final project                                 10
4        Individual practice/practice sheets           20
         Performance assessments                       20
         Written assessments                           10
         Daily activity assessment                     50
5        Participation                                 25
         Skills                                        25
         Fundamentals                                  25
         Performance                                   25

Observations

The first observation concerning these excerpts involves the variety of

components used in these grading policies. Components incorporated in the excerpts

included attendance, events attendance, assessments, playing tests/assignments,

written tests/assignments, participation, rehearsals, daily activity assessment, lessons,

sectionals, performances, and individual practice. We assume each director has

selected individual assessment components in an effort to reinforce important aspects

of their ensemble class and/or enforce policies they feel are essential to the efficient


operation of their band program. While there appears to be certain commonalities in the

design or purpose of the components selected by individual directors, there is no

agreement on which specific components should be incorporated in their grading policies.

One basic characteristic of the individual assessment components does exist. Each of

these assessment components can be divided into two distinct categories: musical and

non-musical. Musical components mentioned in the excerpts include playing and

written tests/assignments, performances, lessons, and sectionals. Examples of non-

musical components are participation, attendance, effort, and attitude. It is evident that

each director has combined assessment components from each of these categories in

their grading policies.

The second observation concerning these excerpts involves the wording or

explanation associated with each individual assessment component. There is no

evidence from these excerpts of any standard definitions for the components, nor does

there seem to be a commonly accepted way to incorporate these components into an

overall assessment tool. Each director appears to work in isolation when defining how and

why individual components are being included in their grading policies. Directors are

required to create their own explanation and rationale associated with each component

they select.

The final observation of these excerpts is the variety of emphasis the directors

place on the different assessment components. Some directors assign nearly equal

weights to each assessment component (e.g., 30%, 30%, 30%, 10%), while others place

much more weight on one component than on its counterparts (e.g., 50%, 25%, 25%).

There is great variety in the importance each individual assessment component is


assigned in the included grading policies. Further, it becomes the job of each director to

create a justification for assigning emphasis to the various components used. There is

no evidence of a generally accepted model or plan for weighting the various assessment

components in an assessment tool.

In the process of grading their students, band directors must 1) select appropriate

assessment components, 2) define those components to ensure they are accurately

measuring what is intended, and 3) combine and weight the various components to

produce an effective assessment tool. Each of these decisions is inter-related and

affects the validity of their assessment tool. There is little evidence to suggest there is a

widely accepted assessment model that secondary band directors can refer to when

designing their grading policies.
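
To make the arithmetic of such a tool concrete, the following sketch (a minimal
illustration in Python) shows how weighted component scores might be combined into
a single grade. The component names and example scores are hypothetical, and the
weights are the survey means reported in this study's abstract; this is an
illustrative sketch, not the assessment model this study proposes.

    # Illustrative sketch: combining weighted component scores into one grade.
    # The weights follow the survey means reported in this study; the component
    # names and example scores are hypothetical.
    WEIGHTS = {
        "rehearsal_attendance_and_contribution": 0.3495,
        "performance_attendance_and_contribution": 0.3470,
        "individual_testing_and_evaluation": 0.3057,
    }

    def weighted_grade(scores):
        """Combine 0-100 component scores into a single 0-100 grade,
        normalizing because the reported weights sum to 100.22%."""
        total_weight = sum(WEIGHTS.values())
        weighted_sum = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
        return weighted_sum / total_weight

    example_scores = {
        "rehearsal_attendance_and_contribution": 95.0,
        "performance_attendance_and_contribution": 90.0,
        "individual_testing_and_evaluation": 80.0,
    }
    print(round(weighted_grade(example_scores), 1))  # prints 88.7

Because the reported weights sum to slightly more than 100%, the sketch divides by
the weight total so the combined grade stays on the 0-100 scale.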

Significance of the Problem

Assessment has become an important and visible part of today's educational

environment. In many states, schools are assessed and assigned a letter grade based

on how they score in certain areas (Miller, 2007). These grades are made public,

reported through the media, and listed on department of education websites. Schools

receiving high marks are praised and often receive additional funding from state and

federal agencies. Schools receiving failing marks are placed on probation. Often a

sense of crisis can be felt throughout that learning community. Administrators are often

fired and teachers questioned in an effort to bring that grade up the next year

(Paswaters, 2006). These school grades have changed the way teachers and

administrators approach curriculum, class assignment, and scheduling (Eyerman, 2002;

Lopez, 2006). The concept of school grades has also changed the way students and

parents view grading and assessment in each of their classes (Lehman, 1997).
A problem for many band directors is the lack of training concerning grading and

student assessment. Grading systems are rarely, if ever, discussed in an undergraduate

music education curriculum, and most discussions on assessment only apply to general

education courses, which have little or no application in a performance-based music

ensemble class. There is little guidance for music educators on how to properly design

an assessment tool. Further, there are no commonly accepted assessment models that

educators can copy and adapt to their specific situation (Tracy, 2002).

An early study (MENC, 1953) divided the undergraduate music education

curriculum into four categories: general culture, basic music, musical performance, and

professional education. Since that study, there have been numerous articles and

dissertations concerned with the subject of educating music educators (Mathison, 1971;

Wilbur, 1955). An examination of the current curriculum for undergraduate music

education majors reveals the following components: music theory, ear training, music

history, instrument techniques courses, piano, conducting, applied music study, general

education courses, general studies, ensembles, and a student teaching internship

(Meaux, 2004).

The National Association of Schools of Music (NASM) emphasizes the importance

of assessment training in the undergraduate music education curriculum. As stated in

their handbook, under the sub-heading of Teaching Competencies, the following

statement addresses this topic:

An understanding of evaluative techniques and ability to apply them in
assessing both the musical progress of students and the objectives and
procedures of the curriculum (National Association of Schools of Music,
2009, p. 100).

We continue to observe developments in the music education curriculum as

collegiate teachers make an effort to address the needs of future music educators.

However, there still exists the need to examine and refine what is offered to these

students in an effort to prepare them to become successful teachers. One area currently

overlooked in preparing these music educators is the subject of student grading and

assessment (Reid, 2005).

Another factor that has placed a great deal of attention on classroom assessment

is college entrance requirements. It is becoming more competitive for high school

students to gain acceptance into many colleges and universities (Chapman, 2004).

Students begin concerning themselves with entrance exam scores and their high school

grade-point average (GPA) before they ever arrive at high school. This results in a

great deal of attention given to each grade placed on their transcript. Teachers are

held accountable for every mark they assign students with the understanding of how it

may affect their future college plans (Cope, 1996).

The problem of proper assessment and grading is further compounded by the

pressures and time constraints placed on high school band programs and the

perception by directors that the process of assessment is time-consuming and tedious

(Lehman, 1997). Most high school band directors begin their school year with a

marching performance every Friday night, and rehearsals during and after school many

nights of the week. Many directors would argue that they barely have enough time to

prepare their ensembles for these public performances and do not have the luxury of

dealing with assessment in any detail. Bennett Reimer further discusses this view in his

book A Philosophy of Music Education:

Performance directors are driven to perform fine concerts; that is how their
success is judged. This is further intensified in the community of school
music teachers, whose values, shaped by the surrounding forces, center
strongly on producing the best possible players, singers, and groups. The
emphasis in the music program is almost completely on performance, and
that emphasis over the years has garnered strong support from both
parents and school administrators (Reimer, 1989).

In addition, many high school band programs enroll over a hundred students and

the idea of tracking and accounting for each student in terms of assessment becomes

daunting to most directors (Chiodo, 2001).

There is a growing need for the development of an effective student assessment

model for high school bands. An effective model should be easy to implement and

should address the important musical, educational, and organizational issues common

to most band programs. An effective model should also motivate students to develop

on their instruments and have a positive experience in their high school music careers.

There currently is not a widely accepted model for assessment in the high school

instrumental ensemble class. While there are generally accepted models for warm-ups,

ensemble tuning, and even literature selection, nothing in the way of assessment

models is typically discussed or practiced. Each band director is individually

responsible for development and implementation of an assessment tool for use when

grading his or her students. While each assessment tool needs to have individual

flexibility in relation to the specific situation (school, director, etc.), I believe certain

assessment components and methods of implementation apply to all band programs.

Despite the many differences among band programs, a great many fundamental

elements are common to all. These elements become the framework for the

assessment tool implemented by those directors.

With little research done in the area of student assessment, the question becomes

"How do we find or develop an accurate assessment model that can be used by

secondary band directors when grading their students?" Theories and assumptions can

only go so far when we are dealing with this very real topic. In addition to the study of

assessment model building, I believe a great deal can be learned from studying the

individual assessment systems that current band directors have developed for their

band programs. In the end, these are the experts in operating instrumental classrooms

in our schools. These are the people running our high school band programs across

the country. This is the one group that should understand the issues and challenges in

assessing students and assigning grades to students participating in high school band

programs.

Purpose of the Study

The purpose of this study is to investigate current student assessment practices of

high school band directors in the United States. The study will also investigate the

directors' attitudes toward the student assessment process. The end goal of the study

is to develop a valid and practical student assessment model that can be used by high

school band directors. The development of this model will be guided by new research

results in the form of a national survey. The following research questions guided this

study:

1. In what specific ways are current high school band directors assessing students in
their ensemble classes?

2. What are high school band directors' attitudes toward the assessment process?

3. How can the results of this research contribute to the development of a student
assessment model for bands?

Delimitations


1. While all levels of music education would benefit from an effective student
assessment model, this study specifically dealt with the high school (9th to 12th
grade) level. Many similarities exist from level to level (especially middle school
and high school band). However, the pressures placed on performance ensembles
at the high school level by the school and surrounding community dictate many
aspects of their programs, making this level unique.

2. While all ensemble disciplines in a high school music program (band, chorus, and
orchestra) share many of the same challenges in terms of student grading and
assessment, many factors are unique to each discipline. In this study, the scope
of the investigation was limited to classes involving band performance-based
ensembles.

3. Sample size for this study was limited to approximately 5,000 subjects and only
included responses from high school band programs in the United States whose
directors were members of The National Association for Music Education (MENC).
This was necessary because the largest available national database for high
school band directors is maintained by MENC. Further, the largest random
sample the researcher could acquire from MENC was 5,000 subjects. The sample
should be sufficient to generalize results of the questionnaire to all U.S. high
school band programs whose directors are members of MENC.

Definition of Terms

ASSESSMENT: an observation of what a student knows and is able to do. Assessment is
the process of collecting, describing, and analyzing information about student
performance or program effectiveness in order to make educational decisions.

ALTERNATIVE ASSESSMENT: any assessment technique other than traditional norm-
referenced or criterion-referenced pencil-and-paper tests that uses strategies for
collecting and analyzing information.

AUTHENTIC ASSESSMENT: assessment techniques that gather information about students'
ability to perform tasks found in real-world situations.

CRITERION-REFERENCED TEST: a measurement of achievement of specific criteria or skills
in terms of absolute levels of mastery. The focus is on performance of an individual as
measured against a standard or criterion, rather than against the performance of others
who take the same test, as with norm-referenced tests.

EVALUATION: the collection and use of information (assessments) to make informed
educational decisions.

FORMATIVE ASSESSMENT: an ongoing assessment made in an educational program for
the purpose of improving the program as it progresses.

MEASUREMENT: the use of a systematic methodology to observe behaviors in order to
represent the degree of performance ability, task completion, and concept attainment.

PERFORMANCE ASSESSMENT: an evaluation in which students are asked to engage in a
complex task, often involving the creation of a product. Student performance is rated
based on the process the student engages in and/or based on the product of his/her
task. Many performance assessments emulate actual workplace activities or real-life
skill applications that require higher-order processing skills. Performance assessments
can be individual or group-oriented.

PERFORMANCE TASK: a demonstration in which a student is able to show his or her ability
to use learned material in real-world situations.

PORTFOLIO: a file of student work centered on a particular topic or content area.

PORTFOLIO ASSESSMENT: an analysis of a collection of student work to show student
achievement and attainment of standards in a specific content area. Student progress
is decided by reviewing the collected works in reference to previously conceived criteria.

RUBRIC: a set of guidelines for giving scores. A typical rubric states all the dimensions
being assessed, contains a scale, and helps the rater place the given work properly on
the scale.

SELF-ASSESSMENT: analysis of one's own achievement or abilities.

STUDENT ASSESSMENT: the judgment of students' capabilities in a subject, formed from
information collected from performance tasks directly related to well-defined,
educationally sound performance criteria.

SUMMATIVE ASSESSMENT: an assessment, administered at the conclusion of an education
program, used to determine the overall effectiveness of that program.
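
As a concrete illustration of several of these terms (rubric, criterion-referenced
scoring, and performance assessment), the sketch below encodes a small playing-test
rubric as a data structure and totals the ratings for one performance. The
dimensions, scale descriptors, and ratings are hypothetical examples invented for
illustration; they are not instruments drawn from this study.

    # Hypothetical playing-test rubric: three dimensions rated on a 1-4 scale,
    # each level paired with a descriptor, plus a helper that totals the ratings.
    RUBRIC = {
        "tone quality": {1: "unfocused", 2: "inconsistent",
                         3: "mostly characteristic", 4: "characteristic"},
        "rhythmic accuracy": {1: "many errors", 2: "several errors",
                              3: "few errors", 4: "accurate"},
        "intonation": {1: "rarely in tune", 2: "sometimes in tune",
                       3: "usually in tune", 4: "consistently in tune"},
    }

    def score_performance(ratings):
        """Check each rating against the rubric scale, then report points earned
        against the maximum possible (criterion-referenced: the student is
        measured against the scale, not against other students)."""
        for dimension, rating in ratings.items():
            assert dimension in RUBRIC and rating in RUBRIC[dimension]
        return sum(ratings.values()), 4 * len(RUBRIC)

    earned, possible = score_performance(
        {"tone quality": 3, "rhythmic accuracy": 4, "intonation": 3}
    )
    print(f"{earned}/{possible}")  # prints 10/12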

CHAPTER 2
LITERATURE REVIEW

This literature review presents a description and synthesis in the areas identified

as central to the present study: definition of assessment, assessment history,

assessment in education, assessment in music education, student assessment in

music, assigning grades to band students, and the development of an assessment

model. The review will conclude with a discussion of recent research in the area of

student assessment in the music classroom as it relates to the purposes of the

present study.

Definition of Assessment

Assessment can be defined as "the process of documenting knowledge, skills,

attitudes, and beliefs. Assessment can focus on the individual learner, the institution,

the learning community, or the educational system as a whole" (Council of Chief State

School Officers, 2009). While this definition appears adequate, our profession lacks

agreement on the terminology used in assessment research. Other terms commonly

associated with assessment include evaluation, measurement, and testing. These

terms are often used interchangeably, causing much confusion regarding the names

and labels associated with current assessment techniques and principles (Bessom,

1980).

Hart (1994) attempts to clarify these discrepancies by providing separate

definitions for each specific term. He defines assessment as gathering information

about what students know and can do. Teachers collect this information in a variety of

ways including observing students, examining their work, and testing their knowledge

and skills. He defines evaluation as a process that involves interpreting the information

gathered from assessment, as a means of determining whether the students have

learned what the teacher wanted them to learn. A test is defined as a type of

assessment instrument or tool used to determine whether a student has achieved the

main goals of instruction.

An additional term often associated with assessment is grading. Grading can be

defined as the process of reporting and recording student progress (Council of Chief

State School Officers, 2009). Hoffer (2000) said the teacher ultimately "needs to

establish clear-cut criteria for grading, consistent with the overall evaluation procedures

of the school." The teacher's role relating to grades can be complex. The common goal

of this process should be to make the assigned grades as fair, consistent, and objective

as possible (Brookhart, 1993).

Philosophical Rationales of Assessment

Contemporary philosophical views on assessment are based on the earlier

schools of thought on the subjects of learning and education. This literature review will

begin with an examination of three basic philosophical schools and their views on

learning and assessment.

Rationalism

Rationalism (often referred to as idealism) maintains that a person's

consciousness of what is perceived is an integral part of reality. The central thesis of

rationalism is that knowledge is a fixed body of truth that applies in all times and places.

It began with Socrates (470?-399 B.C.) and Plato (427-347 B.C.) in ancient Greece, and

proponents include Rene Descartes (1596-1650), Immanuel Kant (1724-1804), Georg

Hegel (1770-1831), and a number of English and American philosophers.

Probably the greatest strength of rationalism is its conscious intellectual approach

to reality, the way reality is known, and the values that should be held. Another strength

of rationalism is its stability. It provides conclusions that are not going to be buffeted

about by each novel breeze or whim. What is true is true, always was true, and always

will be true.

Rationalists have a strong interest in evaluating students' learning. They see

evaluation as an important part of education. Traditionally the rationalists, especially

Socrates, followed the dialogue procedure in which a teacher and student probed and

searched together to uncover truth. Over the ages, the emphasis changed more to

students learning what was believed to be valuable and lasting. Student learning is

evaluated not just on factual knowledge or skill development, but rather through more

subjective, more probing, and comprehensive evaluations of the students' work (Abeles,

1995).

Empiricism

The roots of Empiricism (often called realism) reach back to Aristotle (384-322

B.C.). The heart of realism is the acceptance of "what is clear to everyone." Things are

what they appear to be, not representations of some greater but invisible reality.

Empiricists believe the road to truth is through observation and scientific evidence.

Some of the important names associated with Empiricism include Baruch Spinoza

(1632-1677), John Locke (1632-1704), and the American philosopher-psychologist

William James (1842-1910).

The main strength of empiricism lies in its practical quality. Empiricists take

whatever information they have and work with it as best they can, even though they

realize their knowledge is not perfect or complete. In short, this philosophical position

deals with reality as it can best be known.

Like rationalists, empiricists are interested in evaluating the results of instruction.

However, they are more interested in the acquisition of specific information and skills

(the ones deemed necessary to function in society and an area of work). Empiricists

see teachers as central in the educational process. Teachers largely decide what will

be taught and how it will be taught. If they are not the only source of information,

teachers tell students where to locate it (Abeles, 1995).

Pragmatism

The roots of pragmatism go back to Heraclitus (sixth to fifth centuries B.C.) and

the Sophists in ancient Greece. Heraclitus emphasized the idea that all things change;

nothing is permanent. The logic of pragmatism is the scientific method. People

associated with pragmatism include Francis Bacon (1561-1626); Auguste Comte (1798-

1857); and American philosophers Charles Sanders Peirce (1839-1914), and John

Dewey (1859-1952).

The strength of pragmatism lies in its attention to the process of uncovering the

truth. It does not depend on what one thinks is natural, or on mental cognition, or on

perception of the world. Instead it proposes the scientific method as the best means for

determining reality.

Logically, pragmatists are more interested in evaluation than are the holders of

other philosophical positions, since consideration of the results is a part of the scientific

process. The evaluation, however, is not concerned solely with what content has been

learned, but concentrates on the methods of learning used by the teacher. Pragmatists

see teachers as agents who impart to the young the techniques for living and acquiring


knowledge. Teachers also instruct students how to meet the new situations that will

inevitably arise; in a sense, the students are educated for change (Abeles, 1995).

Abeles, Hoffer, and Klotman (1995) suggest three reasons why all music

educators should think about philosophical matters as they relate to teaching

(research). One reason for doing this is that all teachers make decisions as part of their

work, and most of these decisions have philosophical implications. A second reason for

considering such matters is that basic understandings and beliefs provide, or at least

should provide, a sense of direction and perspective. A third reason for thinking about

philosophical topics is that teachers should be consistent in the different actions they

take.

A basic understanding of these three philosophical viewpoints (rationalism,

empiricism, and pragmatism) provides some background for making research decisions

concerning assessment. Examining the differences and similarities in how each

viewpoint approaches education, evaluation, and assessment can provide strength to

future directions in these fields.

Assessment History

While some view assessment as an outgrowth of educational reform, its history

can be traced back more than 4,000 years. As early as 2,000 BC, there is evidence

that civil service testing in China was established in an effort to select employees based

on merit rather than preference. In the fifth century BC, Socrates developed and used

conversational methods of assessment to test his students' ability to describe and

rhetorically defend their stated views and opinions. In addition, early Olympic Games

included the evaluation of poets and musicians as well as athletes (Cole, 1995).

More recently, when America entered World War I in 1917, tests were needed to

determine who was fit for officers' training school at a time when the U.S. Army was

drafting large numbers of soldiers. The Army Alpha test, the first widely distributed

multiple-choice test, determined intelligence by measuring verbal ability. Upon

discovering that many of the military recruits were functionally illiterate, the Army

produced a second test. The Army Beta test used mazes and puzzles to measure

intelligence and required no specific language skills (Thorndike, 2005).

Educational testing appeared in the United States during the 18th century in the

form of oral exams given by university faculty to determine the quality of their students'

academic performance. Edward Thorndike's 1904 publication, An Introduction to the

Theory of Mental and Social Measurements, became the foundation for much of the

testing effort in the early 1900s and earned him the recognition as the "father of

educational measurement" (Mabry, 1999).

The beginning of standardized testing can also be traced to the work of Alfred

Binet, who developed the use of intelligence testing in Paris, France, in the early 20th

century. City and educational leaders asked Binet to develop a test to help determine

which students would be more apt to succeed, and which might be more apt to fail.

Binet's work led to the creation of the first intelligence tests, and the model for the

intelligence quotient known as IQ. The resulting success of the 1913 Stanford-Binet test

led to the creation of many new achievement and aptitude tests from the 1920s through

the 1950s. Revised forms of some early 20th century tests are still used today, such as

the Iowa Test of Basic Skills, the Stanford-9, and the Scholastic Assessment Test

(Trice, 2000).

In the 1830s, Horace Mann devised the first standardized written exams in

Massachusetts and Connecticut. His 1846 Boston Survey was the first large-scale test

printed for use in assessing student achievement in the areas of grammar, geography,

history, philosophy, geometry, astronomy, writing, and math. By the middle of the

twentieth century, educational and psychological testing became a lucrative business.

Many new standardized tests were published including the American College Test, and

the General Aptitude Test Battery. In 1947, Henry Chauncey founded the Educational

Testing Service, which continues to provide tests and other services to the education

community (Kancianic, 2006).

America's 1941 entry into World War II required the creation of many new

batteries of tests. Louis Leon Thurstone's refinement of factor analysis procedures

enabled tests to categorize individuals across several dimensions. The success of the

factor analysis method resulted in fewer military dropouts and the creation of several

taxonomies of human behavior. In particular, psychologist Benjamin Bloom's (1971)

Taxonomy of Educational Objectives dominated assessment and educational

psychology textbook chapters (McMillan, 2003).

The current state of assessment has been greatly influenced by technology.

With the widespread adoption of computers in the 1960s, data could be gathered, stored, and

analyzed with greater efficiency and less cost. Current technology makes it possible to

assess large populations in various locations and have statistical results of data

instantaneously. This type of technology can also be programmed to guide and adapt

an assessment to address the specific responses of an individual, allowing countless

options in assessment formats.

Assessment tests are currently used in a number of settings. In education,

aptitude and achievement tests are used for a variety of purposes. Career assessment

tests are used for job placement, and companies use employment-screening tests to

help determine the skills and knowledge of future employees. In addition, there are

personality-type assessments, and assessment tests used by government agencies to

determine eligibility for admission to specialized programs.

Assessment in Education

According to John Dewey (1916), the role of assessment should be to interact with

instruction to help the child realize full growth through successive habit formations.

Active habits involve thought, invention, and initiative in applying capacities to new

aims. Dewey also said (1910) that concepts enable us to generalize, extend, and carry over

our understanding from one thing to another. It would be impossible to overestimate the

educational importance of arriving at concepts. They apply in a variety of situations, are

in constant referral, and give standardized, known points of reference. Without this

conceptualizing, nothing is gained that can be carried over to the better understanding

of new experiences. The deposit is what counts, educationally speaking.

Dewey stressed the importance of assessment being applicable to course content

and serving as a logical outgrowth of the material being presented in the classroom. Assessment

of material should reinforce important concepts related to the content being studied.

The idea of applying these concepts to related material is important in Dewey's

approach to assessment and education. Dewey emphasized means as being equal to

ends; that is, the way one gains information is as important as the information itself

(Abeles, 1995).

Assessment is needed to appraise student progress, to provide guidance and

motivation for learning, and to identify areas where improvements are needed in either

instruction or the program (Colwell, 1982). Assessment should also serve as a useful

and essential tool in the classroom. It can be used to evaluate student progress, set

standards, guide instruction, and communicate student progress to parents and

administrators (Farrell, 1997).

In 2001, the National Board for Professional Teaching Standards (Linn, 2005)

developed guidelines for teacher competencies in assessment. They recommended

that successful teachers should be able to

1. Create a variety of assessment tasks and materials for assessing student learning

2. Plan assessments before planning instruction

3. Present assessments at appropriate times in the instructional sequence

4. Ensure that students understand what they are expected to know and be able to
do

5. Ensure that students understand how they will be assessed, upon what criteria
they will be judged, and how this information will help them to improve

6. Use a variety of meaningful student self-assessment techniques

In addition, the federal government has had a substantial impact on assessment in

education. The federal government invested $1.3 billion in public education through the

Elementary and Secondary Education Act of 1965. With this significant financial investment came

a heightened expectation of student performance and accountability (Mark, 1999). In

1983, A Nation at Risk: The Imperative for Educational Reform (National Commission

on Excellence in Education, 1983) reported many shortcomings in the American

Education system, including a steady decline in standardized test scores. The most

recent legislation influencing student assessment has been the No Child Left Behind Act

(NCLB) of 2001 (U.S. Department of Education, 2002). The NCLB act requires subject-

specific accountability for student learning, which has resulted in large-scale testing at

the state level.

Effective assessment measures reveal even more than what students know and

understand. They must also indicate how those new understandings evolved.

Assessment serves as evidence of the broadening and deepening of students'

capacities to solve sophisticated problems, make sensitive judgments, and complete

complex projects. It would seem that the development of a complete inventory of

assessment types and how to implement these techniques would be of great assistance

for educators and administrators (Farrell, 1997).

Assessment in Music Education

Reimer states the following in regard to music tests and testing:

The profession needs much more experience in gathering and presenting
evidence about the growth of essential musical behaviors, and we need, as
well, good tests to help us gather and present this evidence. Tests in the
future will be more holistic, more oriented to real-world problem solving and
processing of musical information and the making of musical judgments and
decisions; that is, to the measurement of musical intelligence in a variety of
manifestations. Such tests and other modes of professional evaluation will
add to our status as a bona fide curriculum and add to our professional
expertise. Tests can be abusive, as we know all too well, but they can also
be powerful aids in effective education (Reimer, 1989).

Reimer continues:

We are already expert at assessing performance skills and must continue to
refine this expertise. Especially important will be improvements in regard to
giving our students a variety of specific musical performance problems to
solve, involving technique, notation, stylistic interpretation, ensemble, and
so forth. We also need to evaluate how performers engage themselves
intelligently in dealing with problems of process: how they structure a
performing problem they are faced with, what imaginative ways they employ
to solve it, how they use their musical understanding as an aid, the steps
they go through, and their critical judgments about their solutions. We must
continue to evaluate the growth of skills, but we must pay far more attention

to assessing the growth of musical intelligence and musical independence
as demonstrated by problem solving as relevant to performance (Reimer,
1989).

Brophy (2000) defines one of the purposes of assessment in the music classroom

as the opportunity to obtain evidence of the musical growth and progress of students.

For the teacher, assessment can also be used to guide instruction and aid in the choice

of teaching strategies. Another reason assessment is important is to further validate the

music program with parents, students, and administrators. Finally, assessment can

provide evidence of accountability for student learning.

Four national music assessments have been administered to students in the

United States. In 1971, the National Assessment of Educational Progress (NAEP)

administered the first national music assessment to students in three age groups: 9, 13,

and 17 years. The purposes of the test were to determine what the music students

knew, what they could do, and their attitudes toward music education. The NAEP

brought together scholars, teachers, and curriculum specialists to develop the

objectives for this assessment. In association with the Educational Testing Service

(ETS) of Princeton, New Jersey, the following broad categories of objectives were used

in the first of these national music assessments.

1. Perform a piece of music

2. Read standard musical notation

3. Listen to music with understanding

4. Be knowledgeable about some musical instruments, some of the terminology of
music, methods of performance, some of the standard literature of music, and
some aspects of the history of music

5. Know about musical resources of the community and seek experiences by
performing music

6. Make judgments about music, and value the personal worth of music.

Results of the assessment indicated that while students' attitudes toward music

were positive, their performance on the exercises was generally quite low (Mark, 1996).

The NAEP administered a second national music assessment in 1978. The same

age groups were measured. However, the objectives for this assessment changed from

the first.

1. Value music as an important realm of human experience
2. Perform music
3. Create music
4. Identify the elements and expressive controls of music
5. Identify and classify music historically and culturally.

Some criticisms of the second assessment: it did not include performance

assessment like the first (due to a lack of funding), and the results were underreported.

Overall, the information from the two National Assessment reports was of great

potential value to the music education profession, but actually had little influence on

practices (Mark, 1996).

The next assessment was not administered until 1997 because of a lack of both funding

and concern for arts education. With funding from the National Endowment for

the Arts, the assessment project was administered by the Council of Chief State School

Officers. This assessment was largely based on the National Standards for Arts

Education and measured all of the arts disciplines (music, visual arts, dance, and theater).

Only eighth-grade students were administered the test, which measured students'

knowledge and ability in creating, performing, and responding. Overall results indicated

that while students who participated in music activities performed better than those who

did not, a great deficit in students' music knowledge and skills existed (Mark, 1996).

The most recent national assessment took place in 2008 and was again

administered by NAEP. Findings were published by NAEP in their series, the Nation's

Report Card: Arts 2008 (Music and Visual Arts). The assessment was given to a

nationally representative sample of 7,900 eighth-grade public and private school

students (half in music, half in visual arts). The music portion of the assessment

measured students' ability to respond to music in various ways. Students were asked to

analyze and describe aspects of music they heard, critique instrumental and vocal

performances, and demonstrate their knowledge of standard musical notation and

music's role in society. The average responding score for music was reported on an

NAEP scale of 0 to 300. Scores ranged from 105 (for the lowest-performing students)

to 194 (for the highest-performing students). In both music and visual arts, scores were

higher for White and Asian students compared to Black and Hispanic students. Scores

were also higher for female students versus their male counterparts. Scores were

significantly lower for lower-income students (eligible for free/reduced lunch) than those

not eligible. In the music assessment, scores were higher for private school versus

public school students, and eighth-graders attending city schools had a lower average

responding score than students who attended suburban, town, or rural schools.

Approximately one-third of the students participated in a musical activity such as band,

choir, or orchestra (National Assessment of Educational Progress, 2008).

Student Assessment in Music Education

The music classroom is a unique environment in the school setting. The variety in

activity and subject matter requires the music educator to approach classroom

assessment very carefully. The National Association for Music Education (MENC)

provides the following guidelines for music classroom assessment (MENC: The National

Association for Music Education):

Assessment should be standards-based and should reflect the music skills and
knowledge that are most important for students to learn.

Assessment of student achievement should not be based on the skills and

knowledge that are easiest to assess nor on those for which ready-made assessment

devices are available. Instead, it should be based on the extent to which each student

has met the standards established, and it should reflect the priorities of the instructional

program. Assessment should not be based primarily on where the student ranks relative

to a particular class or group. It should be based on whether the student has met

specific criteria. In these performance standards, separate criteria have been

established for basic, proficient, and advanced levels of achievement.

Assessment should support, enhance, and reinforce learning.

Assessment should be viewed by both students and teachers as a continuing,

integral part of instruction rather than as an intrusion into (or interruption of) the process

of learning. The assessment process should itself be a learning experience, and it

should not be conducted or viewed as separate from the learning process. Students

should regard assessment as a useful tool rather than as a source of fear or anxiety.

They should use it as a means of further learning and as a means of measuring their

own progress. When assessment tasks are designed to provide information concerning

the extent to which students meet standards that have been established for them,

teachers can adjust their instructional programs so as to be more effective.

Assessment should be reliable.

Reliability refers to consistency. If an assessment is reliable, then another

assessment of the same skills or knowledge will produce essentially the same results.

For assessment to be reliable, every student must be assessed by identical procedures

and the assessors must share the same levels of expectation so that a student's score

does not depend on who is doing the scoring.
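
To make scorer consistency concrete, the following minimal sketch (hypothetical ratings and function name; neither appears in the MENC guidelines) computes how often two judges, scoring the same playing tests independently, award identical ratings:

    # Minimal sketch of inter-rater agreement for playing-test scores.
    # The judges and their 1-5 ratings are hypothetical examples.

    def percent_agreement(rater_a, rater_b):
        """Proportion of students who received identical ratings from both raters."""
        if len(rater_a) != len(rater_b):
            raise ValueError("Each student needs a score from both raters.")
        matches = sum(a == b for a, b in zip(rater_a, rater_b))
        return matches / len(rater_a)

    judge_1 = [5, 4, 4, 3, 5, 2, 4, 3]
    judge_2 = [5, 4, 3, 3, 5, 2, 4, 4]

    print(f"Agreement: {percent_agreement(judge_1, judge_2):.0%}")  # Agreement: 75%

Percent agreement is only the simplest such index; chance-corrected statistics such as Cohen's kappa are often preferred, but the underlying idea is the same: a student's score should not depend on which qualified judge assigns it.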

Assessment should be valid.

Validity means that the assessment technique actually measures what it claims to

measure. The mental processes represented by the scores correspond to the mental

processes being assessed. No measurement instrument should be used to measure

something it was not designed to measure. If there is a mismatch between assessment

strategies and the objectives of the curriculum, the assessment strategies are not valid

for that curriculum.

Assessment should be authentic.

Authentic assessment means the assessment tasks reflect the essential nature of

the skill or knowledge being assessed. The student should actually demonstrate a

music behavior in an authentic or realistic situation rather than merely answer written

questions about it. For example, the ability to play the recorder should be assessed by

having the student play the recorder, not by having the student answer test questions

concerning fingerings, hand position, phrasing, and note-reading. Assessment does not

need to be based on multiple-choice tests or even on paper-and-pencil tests, though

those techniques have their uses.

Portfolios, performance-based assessment, and other techniques of authentic

assessment have been used successfully by music educators for many years; however,

these techniques cannot by themselves solve the assessment problems facing

educators. A portfolio is simply a collection of samples of a student's work taken

periodically for a specific purpose throughout the instructional process. Those samples

must still be assessed, and the assessment requires careful thought about what should

go into the portfolio and also great care in developing suitable assessment strategies

and appropriate scoring procedures. Assessment should take a holistic view of music

learning. It should not concentrate on isolated facts and minutiae but should deal with

broad concepts, "whole" performances, and complete works of music. Authenticity, like

reliability, is a prerequisite to validity.

The process of assessment should be open to review by interested parties.

Although assessment of music learning can best be carried out by qualified music

teachers, it is important that students, parents, and the public be given sufficient

information and help so they too can make judgments about the extent to which music

learning is taking place in their schools. If their evaluations are faulty, it should be

because of their lack of professional qualifications and not because of lack of

information concerning the assessment process. It is especially important that students

know what they are to be assessed on, how they are to be assessed, and what criteria

will be used to judge their achievement. When appropriate, they should be allowed to

participate in developing the criteria by which their work will be assessed.

These guidelines can assist the music educator in making important decisions

concerning assessment in the music classroom. However, the added requirements of

performance-based ensembles and the larger number of students a teacher deals with

at one time make assessment in this environment especially challenging. Four types of

assessment (Goolsby, 1999) can be used for evaluation in the instrumental classroom

in a relatively straightforward manner: placement, summative, diagnostic, and formative

assessments.

1. Placement assessments include auditions, challenges, and seating assignments,
all aimed at determining a student's abilities in order to properly place the student
in a program.

2. Summative assessments include concerts, festivals, recitals, and other events
where the final "product" of the group's learning is publicly demonstrated and
evaluated.

3. Diagnostic assessment is used to determine where learning difficulties exist. The
most obvious and frequently used tool in instrumental music is error detection.

4. Formative assessment is concerned with the regular monitoring of students to
make sure learning is taking place. One requirement for effective formative
assessment is students' clear understanding of what they should learn.

A wide variety of assessment components have been discussed in the area of

instrumental music education (Antmann, 2007; Asmus, 1999; Burrack, 2002; Chiodo,

2001; Cope, 1996; Dirth, 2000; Goolsby, 1999; Hanzlik, 2001; Kancianic, 2006; McCoy,

1991; Norrington, 2006; Pizer, 1990; Reid, 2005; Sears, 2002; Sherman, 2006;

Simanton, 2000; Stauffer, 1999; Tracy, 2002). Each component should have a purpose

in the broader assessment tool. Each component should also support the goals and

instruction of the classroom (Asmus, 1999). The following are commonly used music

assessment tools:

ATTENDANCE (concert, rehearsal): accounting for a student's participation in an event.
Attendance may also incorporate penalties for students arriving late or leaving early.

COMPUTER MUSIC THEORY PROGRAMS: music theory instruction delivered through
computer programs.

CONDUCT/DISCIPLINE: assessing a student based on his or her behavior.

PARTICIPATION: to take part in an event or activity.

PRACTICE LOG/JOURNAL: a self-reported record of an individual's practice.

PLAYING TEST: a performance demonstration by the student on the student's instrument.
The material for a playing test varies but may consist of band music, etudes, scales,
rudiments, or audition music.

PORTFOLIO: a collection of supporting material.

REQUIREMENT CHECKLISTS: a list of accomplishments students progress through at their
own pace.

SELF-ASSESSMENT: a student's assessment of his or her own work.

SIGHT-READING TESTS: a performance demonstration by the student (on the student's
instrument) of music he or she is not previously familiar with.

TEACHER OBSERVATIONS: any assessment that relies on observable behavior of a
student by the teacher.

WRITTEN TESTS: any test or quiz in written form.

A teacher should focus on assessment options that occur naturally in a music

context, that are authentic to the classroom, and that are congruent with the teacher's

instructional goals (Stauffer, 1999). Given the importance of assessment, music

teachers need a management system that is as efficient and effortless as possible,

while still producing detailed information about individual students (Chiodo, 2001).

Current educational research suggests that teachers should develop and use

"authentic" performance strategies designed to allow students to demonstrate what they

have learned and, further, what they can do with their knowledge (Asmus, 1999).

Authentic assessments involve the use of alternative strategies for collecting and

analyzing information. Students are expected to demonstrate what they have learned

by drawing on their knowledge, abilities, and past achievements to solve problems that

require them to perform under "real-world" conditions (U.S. Department of Education,

1996).

Assigning Grades to Band Students

The assignment of grades is a complex topic in educational assessment. The

grade serves as the primary way a teacher communicates a student's progress and

achievement. Suggestions for establishing grading systems for performing arts

ensembles have been offered by many of the leading experts on music education

(Bowman, 1984; Boyle, 1987; Branum, 1988), but very little research has examined the

actual grading practices of ensemble directors.

The National Association for Music Education (MENC) conducted a survey

concerning grading practices in 1997. Results showed that music teachers were

responsible for assigning grades to a large range of students (from twenty-five to one

thousand) and the majority did this using traditional letter grades (A, B, C, D, F). Most

teachers who responded assigned grades based on performance-based criteria, while

others used criteria such as attendance, effort, behavior, and attitude. Finally, some

teachers used precise criteria and point systems, while others used grading procedures

that were imprecise (MENC, 1998).

Attendance, effort, behavior, and attitude have long been an important part of

music classes. However, it is important to separate non-musical criteria from the

grading process. Effort, behavior, and attitude are difficult (if not impossible) to grade

objectively. Attendance can be graded objectively, but does not represent a student's

musical understanding or achievement. There are many reasons that music teachers

use these non-musical criteria when determining students' grades. With the large

numbers of students music teachers have, it can be difficult to thoroughly and

accurately assess all of them on musical criteria. Also, categories such as attendance,

effort, and behavior are important to productive music rehearsals, so many teachers

may feel it is necessary to include them in grading practices (MENC, 1998).

In another study concerning grading practices, Drake (1984) found that

attendance and participation were the principal criteria for assigning grades to students

in performing groups. Attitude, preparation, and satisfactory performance were rarely

mentioned as criteria for grades. McCoy (1988) found similar results when

investigating the grading practices of high school band and choral directors. Ninety-five

percent of the reported grading systems included some type of nonmusic criteria such

as attendance and behavior. The study also found that seventy-five percent of these

grading systems included criteria related to performance (psychomotor criteria), sixty-six

percent included criteria related to attitude (affective criteria), and forty-two percent

included criteria related to factual knowledge about music (cognitive criteria).

Bradford (2003) states that students, parents, and administrators benefit when

curriculum-based assessment is used by teachers. Students see grades not as

subjective rewards or punishments, but as accurate reflections of knowledge and

achievement. They become more confident and independent. Parents find it easier to

gauge their students' progress and understand the development of their student in

regard to the performing ensemble. Administrators begin to see music as an academic

class, rather than solely a venue for entertainment.

Developing Music Assessment Models

The next section is a review of existing assessment models developed in the last

forty years in the area of art and, more specifically, music. These models advanced the

study of student assessment and serve as a guide to educators, enhancing the options

available in the classroom.

Arts PROPEL

Since 1967, a research group at the Harvard Graduate School of Education has

investigated the development of learning processes in children and adults. The name

Arts PROPEL is an acronym for Production (making music by singing, playing an

instrument, or composing); Perception (listening to music); and Reflection (thinking

about what one does, both in words and in the appropriate symbol system) (Mark, 1996).

Project Zero was founded by the philosopher Nelson Goodman to study and improve

education in the arts. Goodman believed arts learning should be studied as a serious

cognitive activity. Project Zero's mission "is to understand and enhance learning,

thinking, and creativity in the arts, as well as humanistic and scientific disciplines, at the

individual and institutional levels." David Perkins and Howard Gardner served as co-

directors of Project Zero from 1972 to 2000, when the current director, Dr. Steve Seidel,

was named. Over the years, Project Zero has maintained a strong research

commitment in the arts. Much of its work takes place in American public schools,

particularly those that serve disadvantaged populations (Harvard Graduate School of

Education, 2010).

In 1985, a joint project among Project Zero, the Pittsburgh (Pennsylvania) Public

Schools, and the Educational Testing Service observed and documented music learning

for two years. The findings indicated that students are the constructors of

knowledge. Understanding occurs when students organize, manipulate and apply

concepts themselves. In addition, it was found that students learn best when they

actively perform and create. The Arts PROPEL instructional and assessment strategies

are based on helping the student develop independence and expertise in learning the

procedural knowledge associated with music. This model encourages the use of

portfolios in order for students to examine their progress over time. Portfolios may

include video and/or audiotapes of student performance, teacher evaluations, and

student self-evaluations (Sears, 2002).

Comprehensive Musicianship through Performance (CMP)

The Comprehensive Musicianship through Performance Project (CMP) was

initiated in Wisconsin in 1977 as a reaction to performance-based music programs that

were accused of producing outstanding performance groups without developing a depth

of musical understanding. The CMP is a process of instruction that promotes

"performance with understanding."

The project began with a group of respected ensemble teachers selected from a

diverse group of school districts. The group developed and tested a process for

planning rehearsal instruction to include performance skills and also general knowledge

about music. In the CMP model the teacher serves as a facilitator who, in addition to

rehearsing selected works, questions the students on a variety of subjects including

musical style, composer background, form, keys, and other music knowledge.

Student involvement in the learning process is an important aspect of the CMP

model. Teachers are instructed to involve students in a variety of hands-on activities

including listening, analyzing, arranging, composing, discussing, and evaluating music.

The students are encouraged to recognize these activities as "real-life" applications of

musical knowledge; not isolated classroom exercises. Student involvement can also

extend to concert performances in the form of students researching and writing program

notes, or demonstrating important musical traits of a given piece of music to the

audience. Student involvement is also encouraged in the process of assessment. The

CMP model promotes the use of self- and peer assessments through recordings of students'

own performances. Student portfolios are also used in an effort to help students

become independent learners possessing the ability to make decisions about their own

work, including possible direction for future study (Pontious, 2008).

State Collaborative Assessment and Student Standards (SCASS)

In 1994, the Council of Chief State School Officers established the State

Collaborative Assessment and Student Standards (SCASS) to assist states in

developing standards and assessment tools. The SCASS ARTS is a nationwide group

addressing the refinement of arts education assessment. Its objective is to develop

assessment materials for large-scale, district-level, and classroom-based

assessment in dance, music, theater, and visual art.

According to the SCASS website, "the group has developed and implemented a

web-based item development process that uses professional development training at

the state level, the submission of items to a website where they are screened for

content and assessment accuracy by a panel of experts according to criteria developed

by the group, and either sent back to the originator or advanced to the final pool."

The SCASS ARTS roster currently includes representatives from California,

Louisiana, Minnesota, New Hampshire, and New Jersey. The group offers numerous

aids to educators concerning arts assessment including an Arts Handbook, an Arts

Assessment Glossary, and an Arts Assessment Bibliography. In addition, the group

(Council of Chief State School Officers, 2009) offers the following publications for

purchase:

1. Guidelines for Video-Taping Performance Assessment

2. Presentation Materials from the National Arts Assessment Institute

3. Arts Education Assessment Consortium Year-End Report and Collection of
Refined Exercises

4. Collection of Unrefined Arts Assessment Exercises Developed for the 1997 NAEP
Arts Education Assessment.

Current Research

Previous researchers designed studies to identify the current assessment practices

that music educators are implementing. Each study has unique purposes and results but is

related to the current study and influenced how this study was designed. This review

will conclude with a summary of the research and a discussion on the implications for

the current study.

Antmann (2007) designed a survey to determine assessment methods and

grading practices of middle school band directors. Subjects selected for his study were

middle school band directors (N=59) of successful middle school band programs

throughout the state of Florida.

From the twenty-seven surveys returned, Antmann discovered that the most

commonly used assessment tool in middle school band classes is the individual playing

test. Other frequently used assessment components included practice journals, student

self-assessments, and requirement checklists. The categories these directors found

important to assigning grades included playing tests, participation, concert attendance,

conduct/discipline, and attendance (rehearsals).

The study revealed some common assessment and grading habits of successful

teachers. These include regular assessment of a student's ability to perform on

instruments and to read and notate music; assessment of musical skills and abilities

during performance; musicianship requirements when determining student grades; and

non-musical criteria such as attendance, participation, and conduct (Antmann, 2007).

In 2002, Sears developed a study whose purpose was to describe how

instrumental music educators document and assess their students' progress. The study

specifically targeted whether instrumental music is formally assessed and what types of

assessment are currently in use by middle school instructors in southeastern

Massachusetts.

Forty-two instructors completed a survey; results showed 61% of teachers

surveyed consider a student's attendance as a criterion for assessment. Almost 90% of

these instructors consider a student's effort as a part of the assessment. The most

common assessment components identified were scale performance tests and practice

logs. Teachers also used portfolios, method books, concerts, worksheets, and quizzes

as assessment strategies.

Sears recommended that all arts educators take time to customize teaching

materials with appropriate assessments. "We have a myriad of options available to us.

There is no shortage of available rubrics for us to modify. There are endless ways to

build a portfolio over the length of a student's instrumental study. None of this is

accomplished with a check mark for attendance or a pat on the back for effort. The

extra time we take will help provide our students with a meaningful experience in the

arts" (Sears, 2002).

Hanzlik (2001) examined the types and frequency of assessment methods used

by Iowa high school band directors and their attitudes toward such assessment. He

also examined the effects of selected teacher-background variables on teachers'

attitudes toward assessment. Of the 200 band directors randomly surveyed from the

400 high schools listed in the 1988-89 Iowa High School Music Association's

membership list, 154 surveys (77%) were returned.

Assessment practices used by at least 80% of the band directors were playing band

music/scales/rudiments, sight-reading music, teacher observations, and playing etudes.

Assessment practices such as student journals, portfolios, reflective writing, teacher

surveys, and student displays were never used by at least 80% of the band directors.

Band directors in Iowa indicated the assessment practices they used most often

were related to the psychomotor task of playing an instrument. The other five

assessment practices identified by Iowa band directors as being used most often were

contest ballots, concert attendance, teacher observation, student discussion, and sight-

reading. The instructional process in Iowa band rooms emphasized performance

learning and not cognitive or affective learning (Hanzlik, 2001).

In 2006, Kancianic investigated relationships among characteristics of high school

band directors and their school settings, purposes and use of classroom assessment

methods, and factors that influence the use of classroom assessments. The National

Association for Music Education (MENC) provided a membership list from which 2,000

high school band directors were selected by simple random sampling. The overall

survey return rate was 39.75% (N=795); the usable response was 31.7% (N=634).

Classroom assessments used by high school band directors tend to focus on

evaluating student performance skills. Students are not generally involved in the

planning or execution of assessment. Those who teach more band classes use student

self-assessment more often. High school band directors use practice logs less

frequently. Three prevalent issues emerged from the results: teacher autonomy, the

role of assessment training, and teacher workload. Lack of time was viewed as a major

impediment to assessment (Kancianic, 2006).

Sherman (2006) researched the following questions: "What tools are currently

used for assessment in band programs in public high schools?"; "Who performs the

assessments?"; "Is there a distinction between assessment and grading?"; and "Is the

process accepted by all constituencies?" A survey was distributed to a random sample

of 500 high school band directors from the Eastern Region of Music Educators National

Conference (National Association for Music Education). Participation was voluntary; the

response rate was statistically significant, with 158 usable responses.

There is some degree of consistency among high school band directors about the

types of materials used for assessments, the way assessments were performed, and

calculation and conversions in assigning grades. Most directors included some means

for assessing their students' attendance and demeanor or behavior during contact

hours. Terms used to describe these items were rehearsal technique, class

participation, class preparation, and effort. Disturbing perspectives on the issues of

assessment and grading were as follows: 1) assessment consumes too much time; 2)

assessments only served the purpose of grade justification; and 3) some directors tend

to give up during the process and succumb to assigning A's to all of their students

simply to eliminate any backlash (Sherman, 2006).

Simanton (2000) included the following purposes for his study: (a) examine current

assessment and grading practices in American high school bands; (b) gauge local

satisfaction with current assessment and grading practices; and (c) investigate

variations in practice satisfaction based on regional, school, and band director variables.

Data were collected (via surveys) from 202 high school band directors using a sample

stratified by the six regions of the Music Educators National Conference

(MENC).

On average, 56% of band grades come from non-performing criteria (attendance,

participation, and attitude). Performance of band music accounts for another 25.9% of

band grades. The remainder of student grades comes from a combination of technique

and other practices (mostly quizzes and practice logs). Within these criteria weights,

grading appears to be rather generous. Band directors report giving A's to 75.4% of

their students and B's to another 16.3% (Simanton, 2000).

In 1999, Hill investigated assessment procedures, assessment attitudes, and

grading policies currently used in band classrooms in Mississippi public schools. Data

were obtained from 327 student members of the Mississippi Bandmasters Association

State Band Clinic, 93 members of the Mississippi Bandmasters Association, and 38

randomly selected public school administrators.

Results indicated grades were an important part of the instrumental classroom

and students were motivated to make good grades. All three survey groups indicated

non-music criteria such as attendance, participation, and attitude were used in

determining students' nine-week grades. While traditional forms of evaluation such as

portfolios and paper-and-pencil tests were recognized as useful in the band classroom,

these assessment types comprised only 0-25% of the nine-week grade (Hill, 1999).

Finally, in 2001, McCreary examined methods and procedures currently used in

evaluating secondary school instrumental (band and orchestra) students, and compared

student perceptions with teacher perceptions of assessment. Survey respondents

comprised 467 secondary instrumental music students and their ten respective teachers

on the island of Oahu, Hawaii.

Findings indicated that instrumental music teachers predominantly used

traditional forms of assessment. Paper and pencil tests, playing tests, practice time,

and attendance and/or attitude were used to evaluate students, with a preference

for playing tests and practice time. Eighty percent of the teachers and 93% of the

students surveyed responded that "none or mostly none" of the grade was based on

journals and/or portfolios. A relatively equal balance was found between music and

non-music assessment criteria. Results showed that most students and teachers

perceived that playing tests comprised roughly half of the grade and the

non-music criteria of attendance and/or attitude comprised the other half of the grade

(McCreary, 2001).

Summary of Research: Study Implications

First, there is great similarity in the way these studies were developed and carried

out. All included a researcher-developed survey, with respondent groups ranging from 27 to

634. Most of the studies were fairly regional in their make-up, dealing mostly with band

programs from an individual state. In all cases, researchers suggest future studies in

this area should include a wider cross-section of teachers in various parts of the

country.

Second, all studies were interested in finding commonalities among music

educators in the use of assessment components in their current practice of grading

students. Results of these studies indicate widespread use of performance-based

tests and non-music criteria to establish student grades. Portfolios, journals, and

student self-assessment were rarely used by teachers in these studies.

Third, many studies separated the assessment components into either a musical

or non-musical category. Some studies also examined what percentage of the student's

grade each of these categories comprised.

Farrell (1997) says, "There is no one right way to assess students. Balancing

assessment strategies to use a variety of formats is most likely to result in reliable and

valid information. Expanding our assessment practices has enormous implications

because assessment is tied to the content of the curriculum, to what teachers do in the

classroom, and to the standards we set." Farrell's statement is important to consider

as we address the topic of current assessment practices. We observe many

commonalities in assessment approaches but continue to witness a wide variety of

assessment tools being implemented by directors across the country. Any effort to

construct a general assessment model should keep in mind that each teaching situation

is unique. A good model should be applicable to different teaching

environments and still remain effective in its results.

CHAPTER 3
METHODOLOGY AND PROCEDURES

This chapter includes an explanation of the methodology and procedure used in

this study. The chapter will begin with a definition of the research method used in the

study followed by a description of the participating subjects, procedures, and data

collection method. The chapter will continue with a description of the statistical

procedures and conclude with an examination of a pilot study administered by this

researcher on the subject of student assessment in bands.

Research Method

The research method of this study was descriptive, administering a survey to

collect data. Surveys represent one of the most common types of quantitative research.

Survey research is the method of gathering data from respondents thought to be

representative of some population, using an instrument composed of closed-structure or

open-ended items (questions). Creswell (2002) states that surveys help describe the

trends in a population (p. 421). Survey research is an efficient method for gathering

data from a large population and "is a common and valuable approach to determine

status" (Abeles, 1992, p. 231).

Subjects

Subjects for the study consisted of high school band directors, teaching in the

United States, who are members of the MENC: The National Association for Music

Education. The MENC maintains the complete population of directors on the NetForum

2.2 customized database.

The sample size for the study was limited to 5,000, as this was the maximum

random sample MENC could provide. Alreck and Settle (2004) said a sample

larger than 10% of the target population is rarely necessary, because as sample size

increases, sampling error decreases. Creswell (2002) suggested using a sample size

of 350 for survey research, and Sudman (1976) recommended using at least 1,000

participants for a national survey.

Simple random sampling was used to select participants from the MENC list. A

simple random sample is preferred to other sampling procedures as it represents the

target population more accurately and gives each member of the sampling frame an

equal probability of selection (Alreck, 2004). The MENC provided a randomly selected

list of 5,000 directors from a total population of approximately 15,000 using the reports

function of the NetForum 2.2 database.
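
A draw of this kind is easy to reproduce in code. The sketch below is illustrative only (the actual selection was made with the NetForum 2.2 reports function, and the identifiers here are hypothetical); it selects 5,000 members from a frame of 15,000 so that each member has an equal probability of inclusion:

    import random

    # Hypothetical sampling frame standing in for the MENC membership list.
    population = [f"director_{i:05d}" for i in range(15000)]

    random.seed(42)  # fixed seed so the draw can be reproduced
    sample = random.sample(population, k=5000)  # sampling without replacement

    print(len(sample))   # 5000
    print(sample[:3])    # first three selected directors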

Procedures

The questionnaire (discussed in detail in the data collection section) was

converted to a web document using SurveyMonkey™ and was administered

electronically using the internet. Participants were notified of the study through an email

generated by MENC which included (a) the topic of the study, (b) approximately how

long the questionnaire would take to complete, (c) the deadline for completion of the

questionnaire, and (d) a link to the questionnaire. The MENC also sent out a reminder

one week before the questionnaire deadline.

Data Collection

A questionnaire was designed to collect data on the assessment practices of high

school band directors. The questionnaire was based on questionnaires and surveys

used in earlier research (Hanzlik, 2001; Hill, 1999; Kancianic, 2006; McCreary, 2001;

Sherman, 2006; Simanton, 2000). The questionnaire used in this study was constructed

using a combination of open and closed-ended questions. Closed-ended question

designs included (a) mutually exclusive answers, (b) exhaustive response categories,

(c) numerical rating scales, and (d) semantic differential scaling systems. The

questionnaire was designed in five sections: 1) Background Information, 2) Grading

Information, 3) Assessment Philosophy, 4) Assessment Information, and 5) Assessment

Model (Appendix A).
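
For readers unfamiliar with the last of these designs, the sketch below (hypothetical items, not drawn from the actual instrument in Appendix A) contrasts a numerical rating-scale item with a semantic differential item, in which respondents mark a position between two bipolar adjectives:

    # Hypothetical closed-ended items illustrating two of the designs named above;
    # the actual instrument appears in Appendix A.

    rating_scale_item = {
        "prompt": "How important is individual playing testing?",
        "scale": ["1 = not important", "2", "3", "4", "5 = very important"],
    }

    semantic_differential_item = {
        "prompt": "Assessing my students is ...",
        "anchors": ("burdensome", "rewarding"),  # bipolar adjective pair
        "points": 7,  # respondent marks one of seven positions between the anchors
    }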

The Background Information section is divided into three areas: (a) high

school/community information, (b) band program information, and (c) director

information. The high school/community section provided data on (a) type of school, (b)

school enrollment, and (c) the type and socio-economic status of the community in

which the school is located. The band program section provided information on (a) band

program enrollment, (b) the number of students involved in concert band(s), (c) the

number of concert bands, (d) the average number of students in each concert

ensemble, and (e) the total number of minutes each concert band meets per week. The

director information section provided data on the directors' (a) years of teaching

experience, (b) the number of years spent in their current position, (c) their educational

background, and (d) the number of band directors employed at the school.

In the Grading Information section, directors were asked to provide (a) specifics

concerning the type of grades they assign and (b) how those grades are incorporated

into the school grading system. Specific data requested included (a) the number and

duration of marking periods; (b) the type of grade assigned and its effect on the

student's overall GPA; and (c) if a weighted grading system is used at the school, how

the band grade is weighted.

The Assessment Philosophy section posed questions about why directors

assess their students and asked them to rate the importance of a variety of

criteria related to assessment. Questions included (a) how important are the following

purposes of student assessment, (b) what importance do you place on the following

criteria in the evaluation of your band students, and (c) how important are the following

assessment categories in a student assessment model for bands. This section also

required directors to address what factors influence their decisions concerning student

assessment and what has best prepared them to make decisions concerning student

assessment.

The Assessment Information section presented specific questions about the way

directors assess the students in their concert bands. The questions in this section asked

directors to provide (a) the specific assessment components they use and the

percentage these components are assigned in the band grade, (b) the procedure for

data collection they use, (c) what materials they used in performance-based tests, and

(d) the importance of varying characteristics in their assessment design.

The final section, Assessment Model, included one question asking the directors to

create what they believe to be a balanced assessment tool by assigning percentages

(totaling 100%) to the following three assessment components: (a) individual testing and

evaluation, (b) performance attendance and contribution, and (c) rehearsal attendance

and contribution.

Statistical Procedures

Results of the questionnaire produced a data set that was quantitatively

analyzed. The researcher used a variety of descriptive statistics to summarize and

explain the results of the information collected.

In the Background Information and Grading Information sections, the researcher

used the categorical information gathered to describe the subjects who responded to the

questionnaire and their teaching situations. The results of these questions were

analyzed for measures of central tendency (the mean) in an effort to better understand the

range of categories the questionnaire data came from. The researcher used

appropriate charts and graphs to illustrate this information.

Statistical analysis for the Assessment Philosophy and Assessment Information

sections differed for each question. Based on the design of the question, different

statistical analyses were done. For rating questions, a mean score analysis determined

the most common response. For questions using rankings, a mean response was

reported in descending mean order to accurately show the highest ranking response to

the question. Measures of variability (in the form of standard deviation) were used on

all questions to indicate what commonalities are present among directors' responses.

Statistical analysis for the final section (Assessment Model) showed the mean

responses to the question about the importance placed on the assessment components

by participating directors.
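
As an illustration of these summaries, the sketch below (hypothetical ratings; Python's statistics module stands in for whatever analysis software was actually used) computes the mean and standard deviation for several 5-point rating items and reports them in descending mean order, as the ranking questions were reported:

    from statistics import mean, stdev

    # Hypothetical 5-point importance ratings for three assessment criteria.
    responses = {
        "performance skills": [5, 5, 4, 5, 4, 5, 3, 5],
        "music knowledge":    [4, 3, 4, 3, 5, 4, 3, 4],
        "attitude":           [3, 4, 2, 3, 4, 3, 3, 2],
    }

    # The mean identifies the highest-rated response; the standard deviation
    # indicates how much commonality is present among directors' responses.
    summary = {item: (mean(r), stdev(r)) for item, r in responses.items()}

    for item, (m, sd) in sorted(summary.items(), key=lambda kv: -kv[1][0]):
        print(f"{item:20s} M = {m:.2f}, SD = {sd:.2f}")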

Pilot Study

A pilot study (LaCognata, 2010) was administered in 2008. Locke, Spirduso, and

Silverman (2000) said a pilot study has the potential to provide useful information

relative to the design of the main study. The pilot included 158 high school band

directors, 61 from North Carolina and 97 from Missouri. Participants were invited via

email to complete a survey through the website SurveyMonkey.com addressing student

assessment in the high school band ensemble class. A designated band director in

each state distributed email invitations. The designated band directors were sent an

electronic cover letter of invitation, and they forwarded this through their respective

listservs to high school directors in their states. Participants were asked to complete

the survey within a two-week period. Of the 158 directors contacted, a total of 45

completed the survey, resulting in a response rate of 28%.

The purpose of this study was to gather information about the assessment

practices used in secondary band programs from a sample of in-service band directors.

The 45 directors who responded provided valuable data discussed in the context of the

research questions that guided this inquiry.

Research Question 1: In what specific ways are high school band directors
assessing students in their ensemble classes?

Findings of this study indicate that participation, performances, and performance-

based tests are the primary ways directors are assessing students in their band

ensemble classrooms. Attendance and conduct/discipline also play a vital role in this

process. These results were consistent with those of McCreary (2001) who found that

playing tests were among the most popular assessment methods used by directors. In

addition, Sherman (2006) found an emphasis on performance in regard to student

assessment. Sherman also found that attendance and participation were important

assessment components used by the directors. Finally, Kancianic (2006) also found that

the classroom assessments used by high school band directors tend to focus on the

evaluation of student performance skills.

Results of the study indicated that portfolios, peer assessments, and requirement

checklists were assessment components rarely used by directors. Hanzlik (2001) had

similar findings, stating that assessment practices such as student journals, portfolios,

reflective writing, teacher surveys, and student displays were never used by at least 80%

of band directors.

Research Question 2: With what frequency are assessment components being
implemented by high school band directors?

The directors indicated a wide variety in the frequency with which they use individual assessment components. On a weekly basis, results indicated that

participation, attendance, conduct/discipline, and attitude are the most frequently used

components. Performance-based tests and written tests/worksheets are most often

used monthly, and performances are used per grading period by directors to assess

their students. Kancianic (2006) found similar results in frequency of use of

performance-based tests by teachers. Students playing in small ensembles, playing

alone for the teacher, playing alone in front of the class, and playing with others in a

concert all rank among the top 10 most frequently used assessment components.

The least frequently used components are portfolios, SmartMusic™, peer

assessment, and computer-assisted programs. Again, Kancianic (2006) echoed these

results, finding students creating portfolios and students using computers to assess

their learning were among the least frequent assessment components directors used to

assess their students.

Research Question 3: What degree of importance do the high school band
directors give to (a) individual testing and evaluation, (b) performance
attendance and contribution, and (c) rehearsal attendance and contribution
in an assessment model?

Results of the current study indicate directors assign the following percentages to

the three listed assessment components: 34.72%, individual testing and evaluation;

32.79%, performance attendance and contribution; and 31.79%, rehearsal attendance

and contribution. The importance placed on these components by the directors is also









evident in the directors' use of assessment components: performances, performance-

based tests, participation, and attendance grades ranked as the top four responses,

receiving the highest percentage allocation by directors. The next two ranked

assessment components were written test/worksheets and attitude grades.

The fact that directors cited an almost-perfect balance among the three

suggested assessment components may indicate a starting place in the development of

an assessment model for high school band ensemble classes. Certainly, incorporating

these assessment components seems necessary in the band classroom for successful

student assessment. The pilot study provided valuable feedback that was used to guide

the present study (Appendix B).









CHAPTER 4
RESULTS

The MENC sent a random sample of 5,000 high school band directors from across

the United States an emailed invitation to participate in the survey. Because of incorrect

or changed addresses, 500 emails were immediately returned as undeliverable. Of the

4,500 directors who received the email, a total of 607 directors followed the survey link

and opened the survey. From that group, 454 directors completed the survey, a response rate of 10% of the 4,500 delivered invitations; of the 607 directors who opened the survey, 75% completed it. Description and analysis of the survey results are

presented in conjunction with the questions as they appear on the questionnaire.

Background Information

Question 1: Type of school.

The school types of participating directors showed that a large percentage teach in public schools. Results were as follows: Public = 89.6% (407

schools), Private = 8.6% (39 schools), Charter = 0.9% (4 schools), and Other = 0.9% (4 schools) (Figure 4-1).

Question 2: Number of students (9th to 12th grade) enrolled at your high school.

School enrollment of the 454 schools showed a more balanced representation in

each of the five population categories, with smaller schools having higher percentages.

Results were as follows: 1 to 500 students = 32.4% (147 schools); 501 to 1,000

students = 22.2% (101 schools); 1,001 to 1,500 students = 20.5% (93 schools); 1,501 to

2,000 students = 12.8% (58 Schools); and 2,001 or more students = 12.1% (55 schools)

(Figure 4-2).









Question 3: In what type of community is the school located?

The community-type results again showed representation in each category, with

Urban/Inner City schools being the least represented at 11.7% (53 schools). The

remaining results were Suburban = 31.9% (145 schools), Town = 26.9% (122 schools),

and Rural/Remote = 29.5% (134 schools) (Figure 4-3).

Question 4: What is the socio-economic status of the community?

While the socio-economic status of the community showed representation in each

category, the largest group of schools reported Low/Middle at 42.3% (192 schools), and a

small percentage of schools reported High at 4.4% (20 schools). The remaining results

were Low = 11.2% (51 schools), Middle = 22.9% (104 schools), and Middle/High =

19.2% (87 schools) (Figure 4-4).

Question 5: Total number of students involved in the band program.

The sizes of the band programs taught by participating directors had representation in

each of the five categories. Results were as follows: 1 to 50 students = 26.7% (121

schools); 51 to 100 students = 34.4% (156 schools); 101 to 150 students = 22.2% (101

schools); 151 to 200 students = 9.5% (43 schools); and 201 or more students = 7.3%

(33 schools) (Figure 4-5).

Question 6: Number of students involved in concert band(s).

The number of students involved in concert bands also had representation in each

category with 1 to 50 students = 38.1% (173 schools); 51 to 100 students = 33.0% (150

schools); 101 to 150 students = 17.4% (79 schools); 151 to 200 students = 6.6% (30

schools); and 201 or more students = 4.8% (22 schools) (Figure 4-6).









Question 7: Number of concert bands at your school.

The number of concert bands being taught showed the following results: 1 concert

band = 53.1% (241 schools), 2 concert bands = 27.8% (126 schools), 3 concert bands =

14.3% (65 schools), 4 concert bands = 3.1% (14 schools), and 5 or more concert bands

= 1.8% (8 schools) (Figure 4-7).

Question 8: Average number of students in each concert band.

Finally, the average number of students in each concert band showed

representation in each of the five categories: 1 to 15 students = 4.0% (18 schools); 16 to

30 students = 19.8% (90 schools); 31 to 45 students = 30.4% (138 schools); 46 to 60

students = 29.7% (135 schools); and 61 or more students = 16.1% (73 schools) (Figure

4-8).

Question 9: Years of experience teaching high school band (including this year).

Most participants have been teaching 17 or more years (36.8%, 167 directors).

Remaining results: 1 to 4 years = 19.8% (90 directors); 5 to 8 years = 18.3% (83

directors); 9 to 12 years = 15.6% (71 directors); and 13 to 16 years = 9.5% (43

directors) (Figure 4-9).

Question 10: Years teaching in your current position (including this year).

The largest group of directors had taught in their current position for 1 to 4 years (40.5%, 184 directors).

Remaining results were as follows: 5 to 8 years = 21.4% (97 directors); 9 to 12 years =

12.8% (58 directors); and 13 to 16 years = 7.7% (35 directors); and 17 or more years =

17.6% (80 directors) (Figure 4-10).

Question 11: Highest degree earned in music/music education.

Most directors reported having a master's degree (55.5%, 252 directors). Many

had a bachelor's degree (40.3%, 183 directors). One director is teaching with an









associate's degree, sixteen earned doctoral degrees, and two directors reported postdoctoral study (Figure 4-11).

Question 12: Number of band directors employed at your school.

More than two-thirds of the directors manage the band program alone (69.2%, 314

schools). Other results: 1.5 directors = 8.1% (37 schools); 2 directors = 16.3% (74

schools); 2.5 directors = 2.4% (11 schools); and 3 or more directors = 4.0% (18 schools).

Grading Information

Question 13: How many marking periods does your school have per year?

Directors reported a variety of grading periods, from 2 semesters to 12 grading

periods. Most directors indicated 4 grading periods (quarter system) (mean = 4.48,

median = 4, mode = 4).

Question 14: How many weeks long is a typical marking period?

The length of the grading period also varied, from 4.5 weeks to 18 weeks. Most

directors indicated that 9 weeks was the typical length of a grading period (mean = 9.49,

median = 9, mode = 9).

Question 15: What type of grade do you assign at the end of a marking period?

Most directors indicated that they assign a letter grade (54.2%, 246 schools). The

next largest group said they assign number grades (31.5%, 143 schools) at the end of a

grading period. Other results: No grades assigned = 0.2% (1 school), Pass/Fail = 0.2%

(1 school), Written Comments = 0.2% (1 school), and Combination of types = 13.7% (62

schools) (Figure 4-12).









Question 16: Does the grade given in your band ensemble class affect the
student's overall grade point average (GPA)?

Most directors reported that the grade given in the band ensemble class affects

the student's overall grade point average (GPA) at 95.2% (432 schools). Only 4% (18

schools) of directors indicated that the band grade does not affect students' GPA, and

0.9% (4 schools) cited other circumstances.

Question 17: Is there a weighted grading system being used in your school?

Most directors reported that a weighted grading system (higher-level classes

assigned more value in the student's overall GPA) is used in their schools (64.5%, 293

schools).

Question 18: Is there a weighted option in the grade given in your band ensemble
class?

Of the 293 schools that indicated they have a weighted grading system, only 90 directors (19.8% of all respondents) said there was a weighted option in the grade given in their band ensemble class. Directors reporting yes indicated a weighted grade was assigned

in the following instances: top ensemble, upper-level students (11th and 12th graders),

honors credit option, or students choosing to do extra work or participate in extra

assessments.

Assessment Philosophy

Question 19: How important are the following purposes of student assessment?

Participants were presented with a list of sixteen purposes for assessing students

in band. Directors rated the importance of these purposes, using a 5-point Likert-type

scale, ranging from 1 (not important) to 5 (extremely important). To provide feedback to

students (M = 4.63) and to determine what concepts students are failing to understand

(M = 4.45) were among the most important purposes. To determine whether students









were practicing at home (M = 3.37) and to rank students according to individual

performing levels (M = 2.81) were the least important purposes of student assessment

(Table 4-1).

Question 20: How important are the following criteria in the evaluation of your
band students?

Participants were asked to rate the importance of ten different criteria in the

evaluation of their band students. The rating scale was a 5-point Likert-type scale

ranging from 1 (not important) to 5 (extremely important). The ability to play an

instrument in an ensemble (M = 4.26) and individual playing ability on an instrument (M

= 4.16) were rated among the most important. The ability to improvise melodies,

variations, and accompaniment (M = 2.32) and the ability to compose music (M = 2.02)

were rated as the least important (Table 4-2).

Question 21: How important are the following assessment categories in a student
assessment model for band?

Participants were asked to rate the importance of four assessment categories in a

student assessment model for bands. Summative assessment (i.e., concerts, festivals,

recitals) was ranked as the most important (M = 4.27); formative assessment (i.e.,

playing tests) was ranked second (M = 4.03). Diagnostic assessment (i.e., error

detection) was ranked next (M = 3.89) and placement assessment (i.e., auditions,

challenges) was ranked the least important (M = 3.27) (Table 4-3).

Question 22: How influential are the following factors on the assessment
methods you use?

Respondents were presented with a list of sixteen factors that might influence their

choice of assessment methods. Participants rated the level of influence using a 5-point

Likert-type scale ranging from 1 (not influential) to 5 (extremely influential). The









directors' personal philosophy of education (M = 4.38) and the objectives or goals of

your class (M = 4.31) had a high degree of influence on the choice of assessment

method. Requirements set by the school district (M = 2.73) and the assessment method

implemented in the high school band program you attended (M = 2.16) had a low

degree of influence (below the moderately influential response). Also influential in

determining assessment methods were the amount of available class time (M = 4.20),

the demands of the ensemble's performance schedule (M = 3.90), and the expectations

of your students (M = 3.53) (Table 4-4).

Question 23: How do you feel the following have prepared or are preparing you to
assess the students in your band program?

Participants were asked to rate how well eight different factors have prepared (or

are preparing) them to assess the students in their band program. Discussions with colleagues (M = 3.98) and clinics at professional conferences (M = 3.64) received the highest mean ratings, while state or district standards (M = 2.72) and teacher in-service sessions (M = 2.27) received the lowest (Table 4-5).

Assessment Information

Question 24: Which of the following assessment components do you use to
determine a student's grade?

Participants were presented with a list of sixteen assessment components and

asked to select which they use to determine a student's grade. The two most prevalent

components used by directors were participation (95.6%) and performances (92.1%). The two least prevalent were peer assessment (9.5%) and portfolios (7.5%) (Table 4-6).









Question 25: Please enter the percentage of each component you use to
determine your grades.

Using the same list of assessment components, participants were then asked to

enter the percentage of each component used in determining their grades. Again

performances (26.63%) and participation (22.27%) were assigned the highest

percentages, and portfolios (6.67%) and peer assessments (5.72%) were assigned the

lowest percentages (Table 4-6).

Question 26: Which of the following procedures for data collection do you use
when assessing your students?

From a list of twelve options, participants were asked which procedures for data collection they use when assessing their students. Teacher observation (88.0%),

students play individually in class (85.4%), and students play in a group in class (80.0%)

were the most used procedures. Students record themselves playing in a group

(13.1%), SmartMusic™ (13.1%), and computer-assisted programs (5.1%) were the least

used procedures for collecting assessment data (Table 4-7).

Question 27: If you use performance-based tests when assessing students, what
materials do you utilize?

Participants who use performance-based tests in the assessment of their students

were asked to select what materials they use from a list of seven options. The two most

prevalent responses were scales/rudiments (93.3%) and band music (92.4%). Method

book exercises, sight-reading, etudes, and audition music were also selected by about

half of the participants. Other materials directors use include chamber ensemble

music, chorales, and rhythm sheets (Table 4-8).









Question 28: The following is a list of characteristics that have been traditionally
used in assessment models of band students.

Participants were asked to rate the importance of fifteen characteristics traditionally used in assessment models of band students. Responses placed each of the characteristics between the moderately important and extremely important response options. Reflects the music skills and knowledge that are most important for

students to learn (M = 4.32); supports, enhances, and reinforces learning (M = 4.29);

and is reliable and valid (M = 4.25) were rated as the most important. The least

important characteristics were includes a variety of assessment components (M = 3.69);

is open to review by interested parties (M = 3.49); and includes both musical and non

musical components (M = 3.15). There were no characteristics rated below the

moderately important response option by directors (Table 4-9).

Question 29: Rate your agreement level with the following statements concerning
assessment.

Participants were asked to rate their agreement level with eight statements

concerning assessment. Statements ranged from the satisfaction of directors, students,

parents, and administrators with current band assessment practices to the level of

interest in finding other ways to assess students (Table 4-10).

Assessment Model

Question 30: Using the following three assessment components: (a) individual
testing and evaluation, (b) performance attendance and contribution, and (c)
rehearsal attendance and contribution, assign percentages (totaling 100%)
to create what you believe to be a balanced assessment tool for band
students.

The final survey question asked participants to assign the percentage weight that each component should have in a balanced assessment tool for band. Directors

assigned the following mean percentages to these components: (a) individual testing











and evaluation = 30.57% (SD = 13.78); (b) performance attendance and contribution =

34.70% (SD = 13.38); and (c) rehearsal attendance and contribution = 34.95% (SD =

12.86) (Figure 4-13).










Figure 4-1. School type (N = 454)

Figure 4-2. School enrollment (N = 454)

Figure 4-3. Community-type of school (N = 454)

Figure 4-4. Socio-economic status of school community (N = 454)

Figure 4-5. Student enrollment in band program (N = 454)

Figure 4-6. Student enrollment in concert band(s) (N = 454)

Figure 4-7. Concert bands per school (N = 454)

Figure 4-8. Average number of students per concert band (N = 454)

Figure 4-9. Director's years of teaching experience (N = 454)

Figure 4-10. Director's years teaching at current school (N = 454)

Figure 4-11. Director's education level (N = 454)

Figure 4-12. Grade types assigned (N = 454)

Figure 4-13. Create a balanced assessment tool (N = 454)


Table 4-1. Importance of purposes of assessment (N = 454)
Purpose M SD
To provide feedback to students 4.63 0.64
To determine what concepts students are failing to understand 4.45 0.78
To determine what concepts students are understanding 4.41 0.76
To determine whether instruction has been successful 4.33 0.81
To demonstrate student accountability for learning 4.27 0.80
To determine future instructional direction 4.26 0.82
To identify individual student abilities 4.23 0.87
To set or maintain class standards 4.11 0.90
To provide feedback to parents 4.05 0.84
To help students prepare for public performance 4.00 0.97
To determine the level of musical preparedness for public performance 3.97 1.01
To establish or maintain credibility for the band program 3.85 1.15
To identify general class abilities 3.81 1.02
To motivate students to practice their instruments 3.75 1.05
To determine whether students are practicing at home 3.37 1.17
To rank students according to individual performance levels 2.81 1.27











Table 4-2. Criteria importance in the evaluation of band students (N = 454)
Criteria M SD
Ability to play an instrument in an ensemble 4.26 0.80
Individual playing ability on an instrument 4.16 0.89
Ability to evaluate music and music performances 3.92 0.93
Ability to listen to, analyze, and describe music 3.54 1.06
Ability to understand the relationships between music, the other 3.15 1.07
arts, and disciplines outside the arts
Ability to understand music in relation to history and culture 3.13 1.04
Knowledge of music theory 3.05 0.90
Knowledge of music history 2.51 0.88
Ability to improvise melodies, variations, and accompaniment 2.32 0.92
Ability to compose music 2.02 0.89




Table 4-3. Importance of assessment categories (N = 454)
Category M SD
Summative assessment 4.27 0.84
Formative assessment 4.03 0.89
Diagnostic assessment 3.89 0.94
Placement assessment 3.27 1.20




Table 4-4. Factors influencing assessment methods (N = 454)
Factor M SD
Your personal philosophy of education 4.38 0.76
The objectives or goals of your class 4.31 0.73
The amount of available class time 4.20 0.89
The demands of the ensemble's performance schedule 3.90 1.05
The expectations of your students 3.53 1.15
The number of students enrolled in the class 3.36 1.28
Available equipment (computers, recording) 3.29 1.21
Professional development you have participated in 3.11 1.16
Influence from your music colleagues 3.11 1.14
Your undergraduate coursework 3.07 1.17
The expectations of your students' parents 3.04 1.15
The expectation of your school principal 3.02 1.20
Your graduate coursework 2.94 1.38
Influence from professional organizations 2.80 1.11
Requirements set by the school district 2.73 1.22
The assessment method implemented in the high school band 2.37 1.27
program you attended









Table 4-5. Assessment preparation (N = 454)
Preparation option M SD
Discussions with colleagues 3.98 0.95
Clinics at professional conference 3.64 1.05
Graduate coursework 3.07 1.37
Professional organizations 3.06 1.08
Undergraduate coursework 3.04 1.22
National standards 2.83 1.03
State or district standards 2.72 1.12
Teacher in-service sessions 2.27 1.18


Table 4-6. Assessment components used with the assigned percentage (N = 454)
Assessment component Response % (Count) Weighted % (Count)
Participation 95.6 (433) 22.27 (394)
Performances 92.1 (417) 26.63 (375)
Performance-based tests 88.7 (402) 20.54 (342)
Attendance 77.7 (352) 18.67 (287)
Conduct/discipline 69.8 (316) 12.23 (251)
Written tests/worksheets 58.1 (263) 11.11 (228)
Attitude 54.1 (245) 12.51 (219)
Extra credit (lessons, concert attendance) 47.7 (216) 10.25 (126)
Practice log/journal 28.0 (127) 12.19 (118)
Sight-reading tests 24.5 (111) 7.34 (92)
Student self-assessment 22.5 (102) 8.10 (89)
SmartMusic™ 12.8 (58) 11.79 (56)
Requirement checklists 12.8 (58) 7.47 (45)
Computer-assisted programs 9.9 (45) 7.68 (37)
Peer assessment 9.5 (43) 5.72 (43)
Portfolios 7.5 (34) 6.67 (39)









Table 4-7. Data collection procedures (N = 454)
Procedure Response % (Count)
Teacher observation 88.0 (397)
Students play individually in class 85.4 (385)
Students play in a group in class 80.0 (361)
Short answer test or assignment 39.0 (176)
Students record themselves playing individually 33.0 (149)
Student self-assessment 31.5 (142)
Multiple choice test or assignment 28.4 (128)
Practice log or record 26.2 (118)
Essay question test or assignment 25.1 (113)
Students record themselves playing in a group 13.1 (59)
SmartMusic™ 13.1 (59)
Computer-assisted program 5.1 (23)


Table 4-8. Materials used in performance-based tests (N = 454)
Material Response % (Count)
Scales/rudiments 93.3 (418)
Band music 92.4 (414)
Method book exercises 56.5 (253)
Sight-reading 48.2 (216)
Etudes 42.2 (190)
All-state/district/county/honor band audition music 40.6 (182)
Solo literature 25.9 (116)


Table 4-9. Importance of characteristics of assessment models (N = 454)
Characteristic M SD
Reflects the music skills and knowledge that are most important for students to learn 4.32 0.76
Supports, enhances, and reinforces learning 4.29 0.78
Is reliable and valid 4.25 0.82
Assists in motivating students to learn and develop 4.21 0.82
Aligns with instruction 4.12 0.86
Is understood by all parties involved (i.e., students, parents) 4.06 0.90
Is time efficient 4.06 0.88
Is relatively easy to administer and maintain 4.03 0.90
Requires a student to demonstrate a music behavior in an authentic or realistic situation 4.02 0.89
Assists in the preparation of music for performances 3.96 0.95
Includes appropriate grading rubrics 3.88 1.02
Includes regularly scheduled assessment opportunities 3.74 0.98
Includes a variety of assessment components 3.69 0.95
Is open to review by interested parties 3.49 1.12
Includes both musical and non musical components 3.15 1.14









Table 4-10. Agreement level with statements concerning assessment (N = 454)
Statement M SD
I would be interested in finding other ways to assess my students 4.29 0.86
My school administrators are satisfied with the current band assessment practices 4.09 0.77
My assessment practices foster the individual musical development of my students 3.89
My assessment practices are good enough to ensure quality instruction 3.87 0.83
My students' parents are satisfied with the current band assessment practices 3.82 0.76
My students are satisfied with the current band assessment practices 3.68 0.81
I am satisfied with my current band assessment practices 3.54 0.93
My assessment and grading practices are similar to those of most of the band directors I know 3.51 0.97









CHAPTER 5
DISCUSSION AND CONCLUSIONS

This chapter presents a discussion of the results of the current study with

reference to past research in this area as well as the previously discussed suggestions

for teachers concerning student assessment by various professional organizations. The

discussion section is presented in the five questionnaire categories: 1) Background

Information, 2) Grading Information, 3) Assessment Philosophy, 4) Assessment

Information, and 5) Assessment Model. Conclusions are presented within the context of

the research questions that guided this study, followed by implications for music

education and future research suggestions in the area of band student assessment.

Discussion of the Results

Background Information

Participants included a representative sample of band directors from across the

United States. While only 10% of the sample (N = 4,500) completed the survey, the

total of 454 completed questionnaires makes this one of the largest completed studies

in this area of research. The limited response rate was likely the result of numerous

variables including the band directors' busy schedules, interest and comfort with the

topic, and the method of invitation and follow-up administered. Members may not give

their full attention to all emails distributed by MENC, and the study was restricted to one

follow-up email to encourage directors to participate.

The 454 directors completing the survey teach at schools that are representative

of high schools in the United States. The school type, school size, community type, and

socio-economic categories were all represented. These directors also represent a

balance of all categories of band program size and the administration of the concert









band component of their programs in relation to ensemble enrollment and size. Finally,

the sample includes directors who have a variety of years of teaching experience and

years teaching in their current positions. Most directors had at least a master's degree

in music/music education.

Grading Information

A wide variety of grading systems are used by school systems across the country.

The number of grading periods along with their duration varied greatly in the sample, as

did the type of grade the directors assign at the end of a marking period. This variation

of systems would have to be accounted for in any projected assessment model and

may explain the uniqueness of each director's assessment system.

Band directors reported that 95.2% of the grades they assign to students in band

ensemble classes affect the student's grade point average. This encouraging result

supports the decision to include the arts as a core class in the No Child Left Behind Act

(NCLB) of 2001 (U.S. Department of Education, 2002). A disappointing and somewhat

contradictory result was that only 19.8% of those directors teaching in a school offering

a weighted grading system had the option of issuing a weighted grade to their band

students.

Assessment Philosophy

The assessment philosophy section of the questionnaire provided valuable

feedback from directors including the motivation behind their assessment choices, their

views on assessment issues, and the factors that influence their assessment decisions.

Purpose

The sixteen purposes directors rated are divided into three categories:

instructional purposes (I); performance purposes (P); and external purposes (E).









Instructional purposes (I) relate to the process of teaching and learning and the

feedback associated with that process (i.e. to provide feedback to students, to

determine what concepts students are failing to understand, to determine what concepts

students are understanding, to determine whether instruction has been successful, to

demonstrate student accountability for learning, and to determine future instructional

direction). Performance purposes (P) are associated with any aspect of individual or

group performance ability or levels (i.e. to identify individual student abilities, to help

students prepare for public performance, to determine the level of musical

preparedness for public performance, to identify general class abilities, and to rank

students according to individual performance levels). External purposes (E) include factors

that do not directly relate to the instructional or performance aspects of the classroom

(i.e. to set or maintain class standards, to provide feedback to parents, to establish or

maintain credibility for the band program, to motivate students to practice their

instruments, and to determine whether students are practicing at home).

The purposes of student assessment considered most important by the directors

were to provide their students and themselves with feedback concerning the

instructional process in the classroom (to provide feedback to students, to determine

what concepts the students are failing to understand, to determine what concepts the

students are understanding, and to determine whether instruction has been successful).

These results were consistent with findings of the pilot study (2008) and align with the

MENC guideline (MENC: The National Association for Music Education, 1998):

assessment should support, enhance, and reinforce learning.









The purposes of student assessment considered least important by the directors

centered on motivation and placement of students (to motivate students to practice their

instruments, to determine whether students are practicing at home, and to rank students

according to individual performance levels). Again, these results were consistent with

findings of the pilot study and indicate directors are not concerned with external factors

associated with student assessment.

It is important to note that the current study may indicate a shift in the directors'

purpose for student assessment from previous research. The sixteen purposes

presented to the directors can be divided into three basic categories: instructional

purposes, performance purposes, and external purposes. Results of the current study

clearly rank these categories: 1 = instructional purposes, 2 = performance purposes, and 3

= external purposes (Table 5-1).

Earlier findings by Kancianic (2006), Hanzlik (2001), and Hill (1999) found a much greater emphasis on performance purposes (i.e., to help students prepare for public performance, and to determine the level of musical preparedness for public performances), ranking instructional purposes second. Consistent findings in the

research show external purposes (i.e., to establish or maintain credibility for the band

program, and to provide feedback to parents) least important to directors.

Criteria

The ten criteria directors rated are divided into four categories which can be

related to Bloom's taxonomy of learning domains (Bloom, 1971): performance criteria

(P); critical thinking criteria (CT); knowledge criteria (K); and creative criteria (C).

Performance criteria (P) align with Bloom's psychomotor domain, relating to the manual

or physical skills associated with musical performance (i.e. ability to play an instrument









in an ensemble, and individual playing ability on an instrument). Critical thinking criteria

(CT) relate most with Bloom's cognitive domain, and to a lesser extent the affective

domain, and are associated with the evaluation, analysis, description, and

understanding of music in relation to other areas (i.e. ability to evaluate music and

music performances, ability to listen to, analyze, and describe music, ability to

understand the relationships between music, the other arts, and disciplines outside the

arts, and ability to understand music in relation to history and culture). Knowledge

criteria (K) are directly associated with Bloom's knowledge domain and relate to mental skills or recall in relation to music (i.e. knowledge of music theory, and knowledge of

music history). Creative criteria (C) are associated with Bloom's knowledge and

psychomotor domains and encompass the mental and physical compositional and

improvisational skills associated with music (i.e. ability to improvise melodies, variations,

and accompaniment, and ability to compose music).

The assessment criteria directors deemed most important in the evaluation of their

band students centered on performance skills (i.e., ability to play an instrument in an

ensemble, and individual playing ability on an instrument). This result was expected

with the understanding that the classes in question are performance-based ensembles

with the primary purpose of preparing music for performance. The next category

directors found important involved some type of critical thinking including evaluating and

describing music and musical performances. Included with this category were

understanding music and its relationship with other arts disciplines, outside disciplines,

and music in relation to history and culture. The assessment criteria the directors

deemed least important revolved around music knowledge and external performance









skills not directly associated with the performance of traditional concert band literature

(i.e., knowledge of music theory and history, and the ability to improvise melodies,

variations, and accompaniments, and to compose music).

The original survey question (#20) and criteria responses were designed to

investigate whether high school band directors were assessing their students based on the

national standards for music education. These nine standards were offered to our

profession by the Music Task Force (MENC, 2008) on March 11, 1994, in association

with the Goals 2000: Educate America Act.

1. Singing, alone and with others, a varied repertoire of music.

2. Performing on instruments, alone and with others, a varied repertoire of music.

3. Improvising melodies, variations, and accompaniments.

4. Composing and arranging music within specified guidelines.

5. Reading and notating music.

6. Listening to, analyzing, and describing music.

7. Evaluating music and music performances.

8. Understanding relationships between music, the other arts, and disciplines outside
the arts.

9. Understanding music in relation to history and culture.

The directors most valued criteria associated with standards 1 and 2, with mean

response levels of M = 4.26 and M = 4.16 respectively. Standards 7 and 6 followed,

with mean response levels of M = 3.92 and M = 3.54. While the next standards ranked

are 8 and 9, their corresponding mean levels of M = 3.15 and M = 3.13 fall just above

the moderately important response option. Standards 3 and 4, along with music theory

and music history knowledge, all fall around or below the moderately important

response option, with standard 4 (M = 2.02) approaching the not important response









option. These findings are consistent with current research by Zitek (2008), Schopp

(2008), Diehl (2007), and Antmann (2007) who found that band directors' curricular

activities and assessment are centered on the actual playing of music versus the

creation of new music, either through composition or improvisation (Table 5-2).

Category

The assessment category considered most important by directors was summative

assessment (i.e., concerts, festivals, recitals) followed by formative assessment (i.e.,

playing tests). As these two categories of assessments align with the objectives of

performance-based ensembles, this result was expected, with both categories reporting

strong responses (M = 4.27 and M = 4.03). Diagnostic assessment (i.e., error

detection) and placement assessment (i.e., auditions, challenges) were considered less

important by the directors, but still received above-average mean responses of M = 3.89

and M = 3.27. These results indicate that the directors are assessing their students

using assessments from all four categories which align with the National Board for

Professional Teaching Standards (Linn, 2005) suggestion: create a variety of

assessment tasks and materials for assessing student learning.

Influence

The sixteen factors directors rated are divided into five categories: personal

philosophy (P); class time (CT); logistics (L); training (T); and external factors (E).

Personal philosophy (P) relates to the director's opinion or view on assessment (i.e.

your personal philosophy of education, and the objectives or goals of your class). Class

time (CT) includes any factors relating to perceived time constraints (i.e. the amount of

available class time, the demands of the ensemble's performance schedule, and the

number of students enrolled in the class). Logistics (L) include available resources (i.e.









available equipment, computers, and recording devices). Training (T) relates to any

education or training the director has experienced (i.e. professional development you

have participated in, your undergraduate coursework, your graduate coursework, and

the assessment method implemented in the high school band program you attended).

External factors (E) include external expectations or influences (i.e. the expectation of

your students, the expectations of your students' parents, influence from your music

colleagues, influence from professional organizations, and requirements set by the

school district).

The factors that most influence the assessment methods used by directors

centered on personal philosophy, available class time, and logistics (i.e., the objectives

or goals of your class, the demands of the ensemble's performance schedule, and

available equipment). Training (i.e., undergraduate and graduate course work and

professional development) and external factors (i.e., the expectations of students,

students' parents, and school principal, and influence from music colleagues and

professional organizations) least influenced assessment methods directors use (Table

5-3). These results are consistent with the pilot study responses and other research by

Kancianic (2006) and Kotora (2001), who found that band directors are influenced more

by internal goals and objectives related to musical performance than by external

requirements or expectations set by others.

Preparation

The eight preparation methods directors rated are divided into three categories:

colleagues (C); training (T); and external methods (E). Colleagues (C) relate to

preparation gained from other music educators (i.e. discussions with colleagues, and

clinics at professional conference). Training (T) relates to any education or training the









director has experienced (i.e. graduate coursework, undergraduate coursework, and

teacher in-service sessions). External methods (E) refer to outside organizations or

published standards (i.e. professional organizations, national standards, and state or

district standards).

Directors considered their colleagues the best source of preparation for assessing

students in their band program. Directors responded strongly for discussions with

colleagues (M = 3.98) and clinics at professional conference (M = 3.64) as the methods

that prepared them best for the task of student assessment. Directors considered their

training (i.e., graduate and undergraduate course work) and external methods (i.e.,

professional organization, national or state standards) to have moderately well prepared

or not well prepared them, with a range of responses (mean values between 3.07 and

2.27) (Table 5-4).

Assessment Information

Results from the Assessment Information section of the questionnaire provided

specific information about how the directors are currently assessing their students. This

section also asked the directors to reflect on the importance of specific assessment-

model characteristics and to reflect on the effectiveness of their current assessment

method.

Components

The sixteen assessment components the directors commented on are divided into

two categories: musical (M) and non musical (N). The musical components (M) relate to any and all aspects of music (i.e. performances, performance-based tests, written tests/worksheets, practice log/journal, sight-reading tests, SmartMusic™, requirement checklists, computer-assisted programs, and portfolios). Non musical components (N) are external to the music itself (i.e. participation, attendance, conduct/discipline, attitude, student

self-assessment, and peer assessment).

Assessment components used by most of the directors were participation,

performances, and performance-based tests. Attendance and conduct/discipline were

also used by many directors. Assessment components used by fewer than ten percent

of the directors include computer-assisted programs, peer assessments, and portfolios.

The four most frequently selected components were also assigned the most

weight in the directors' overall assessment method: performances (26.63%);

participation (22.27%); performance-based tests (20.54%); and attendance (18.67%).

The least-weighted components were sight-reading tests (7.34%); portfolios (6.67%);

and peer assessment (5.72%).

These findings were virtually identical to the pilot study and mirrored component

usage results from research conducted by Antmann (2007), Sears (2002), and Sherman

(2006). Directors place clear emphasis on components that reinforce the preparation of

performance materials and the performances themselves. Directors also stress the

importance of "team" related concepts such as participation, attendance, and

conduct/discipline in assessing their students. These concepts become extremely

important in the setting of performance-based ensembles, where the success of the

group relies on each member fulfilling individual responsibilities. Directors indicate

the use of both musical and non musical assessment components in their overall

assessment method (Table 5-5).

Data collection

The twelve data collection procedures that directors commented on are divided

into three categories: classroom method (C); outside of the classroom method (O); and









test or assignment (T). Classroom method (C) relates to all data collection procedures occurring in the classroom (i.e. teacher observation, students play individually in class, and students play in a group in class). Outside of the classroom method (O) relates to

data collection procedures not occurring in the classroom (i.e. students record

themselves playing individually, student self-assessment, practice log or record,

students record themselves playing in a group, SmartMusic™, and computer-assisted

programs). Test or assignment (T) includes any written evaluation (i.e. short answer

test or assignment, multiple choice test or assignment, and essay question test or

assignment).

The most-used methods for data collection occur in the classroom and involve

performance-based activities. Over 80% of the directors use teacher observations, and

students playing individually or in a group, when assessing their students. Other data-

collection methods used by far fewer directors include outside-the-classroom methods

(i.e., students recording themselves playing individually or in a group, practice log or

record, and SmartMusic™) and written and knowledge-based methods (i.e., short

answer, multiple choice, or essay question tests or assignments) (Table 5-6).

Characteristics

Characteristics of student assessment considered most important by the directors

align with the guidelines provided by the MENC for music classroom assessment.

Assessing the most important music skills and knowledge and using assessment to

support, enhance, and reinforce learning were most valued by directors, as well as

having assessments that are reliable and valid. Other characteristics considered

important and aligning with MENC's guidelines include having assessments that are

understood by all parties involved, and having assessments that require a student to









demonstrate a music behavior in an authentic or realistic situation. Characteristics

considered less important by directors include using appropriate grading rubrics,

including regularly scheduled assessment opportunities; and including a variety of

assessment components.

The directors also rated time-efficient, and relatively easy to administer and

maintain, as important characteristics of an assessment model. This supports research

by Kancianic (2006), Chiodo (2001), and Sherman (2006) who found that the main

problem directors perceive with student assessment is time constraints in dealing with

large numbers of students.

Reflection

When reflecting on eight statements concerning their assessment practices,

directors agreed most strongly with the statement I would be interested in finding other ways to assess my students (M = 4.29) and only moderately agreed with the statement I am satisfied with my current band assessment practices (M = 3.54). These results

create a sense of optimism for the future of student assessment in high school band

programs. Not only have the directors identified assessment as an area of concern, but

they have also indicated a willingness to explore new assessment methods.

Directors rated school administrators' satisfaction with the current band assessment practices highest (M = 4.09); with students' parents (M = 3.82); the

students themselves (M = 3.68); and themselves (M = 3.54) following. The groups less

directly involved in the assessment process (school administrators and students'

parents) are more satisfied. The individuals most directly involved (i.e. directors and

students) are less satisfied with the assessment process. Directors also responded

at slightly above the moderately agree level that their assessment practices foster the









individual musical development of their students (M = 3.89), and that their assessments

are good enough to ensure quality instruction (M = 3.87).

Assessment Model

In assigning weight values to the three assessment components in this study, in

an effort to create a balanced student assessment model for bands, directors assigned

similar weight to the two performance-oriented components (rehearsal attendance and

contribution [M = 34.95%] and performance attendance and contribution [M = 34.70%]);

and only slightly less weight to the third component (individual testing and

evaluation [M = 30.57%]). These weight distribution results were very similar to pilot

study findings, reinforcing the idea that directors suggest an equal distribution among

the three assessment components (Table 5-7).

Conclusions

The purpose of this study was to investigate current student assessment

practices of high school band directors.

Research Question 1: In What Specific Ways are Current High School Band
Directors Assessing Students in Their Ensemble Classes?

Results show that participation, performances, and performance-based tests are

the primary components high school band directors are using to assess the students

who participate in their programs. These individual assessment components are

typically responsible for 20 to 25% of the student's grade; and when combined with

other components, comprise a total assessment plan that includes both musical and

non musical assessment components. Directors primarily rely on in-class data

collection methods (that include teacher observation) and playing tests (that require









students to play their instruments individually and in a group setting). These playing

tests typically consist of scales/rudiments and band music.

Research Question 2: What are High School Band Directors' Attitudes toward the
Assessment Process?

Results show that the main purpose of student assessment for high school band

directors centered on providing their students and themselves with feedback concerning

the instructional process in the classroom. Directors reported that performance skills

were the most important criteria to assess in their students and the main influences of

the assessment methods they use are their personal philosophy of assessment and

available class time. Directors reported the best source of preparation for assessing

their students came from their colleagues. Directors are interested in finding new ways

to assess their students.

Research Question 3: How Can the Results of this Research Contribute to the
Development of a Student Assessment Model for Bands?

The five examples of current grading policies included in the introduction

(Wisconsin, California, New York, Washington, and Texas) demonstrated an

inconsistency in a) the selection of assessment components; b) the weight assigned to

the component in the overall grading plan; and c) the explanation of the component. A

proposed assessment model should address these inconsistencies by incorporating the

ideas and attitudes of current band directors, the experts in operating instrumental

music classrooms in our schools.

Results show that directors assign a balance among the three suggested

components of a student assessment model for bands. These three components

comprise the most frequently used assessment components reported by the directors

and include musical and non musical traits (Table 5-8).









The following explanations and definitions for the individual components stem from

results of the study (specifically, the assessment philosophy and assessment

information sections) in regard to purpose, criteria, category, influence, data collection,

and characteristics. During the construction of this model, effort has been made to

effectively blend the music criteria (i.e., musical preparation, musical execution) with the

non music criteria (i.e., attendance, conduct, attitude, materials).

Rehearsal attendance and contribution

The attendance of each member of the ensemble at all rehearsals is critical to the success of the ensemble. Attendance will be graded in terms of present, excused absence, or unexcused absence (the latter also covering tardiness and early dismissals). Contribution (as graded through teacher observation) reflects how a student fulfills individual responsibilities to the ensemble. Contribution includes the following areas: conduct, attitude, musical preparation, and materials (instrument, music, accessories, etc.). Students will receive a weekly grade for rehearsal attendance and contribution.

Performance attendance and contribution

The attendance of each member of the ensemble at all performances is critical to the success of the ensemble. Attendance will be graded in terms of present, excused absence, or unexcused absence (the latter also covering tardiness and early dismissals). Contribution (as graded through teacher observation) reflects how a student fulfills individual responsibilities to the ensemble. Contribution includes the following areas: conduct, attitude, musical preparation, musical execution, and materials (instrument, music, accessories, etc.). Students will receive a grade per performance.

Individual testing and evaluation

Students will participate in a performance-based assessment (playing test)
each week. The material for the assessments will include scales/rudiments
and band music. Students will also have four written tests (one per grading
period) addressing basic music knowledge (including applicable music
theory and history) (Table 5-9).

The following represent a synthesis of the ideas and concepts directors have

indicated as important to an assessment model. The specific grading scale and grade-









type assignment are flexible and can be altered to meet the requirements at each director's school (Figure 5-1).
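To make the arithmetic of the proposed model concrete, the following hypothetical Python sketch (not part of the study) combines the three component averages using rounded weights of 35%, 35%, and 30%, approximating the directors' suggested near-equal balance; the function, weights, and scores are invented for illustration:

# Hypothetical weights, rounded from the directors' suggested balance.
WEIGHTS = {
    "rehearsal attendance and contribution": 0.35,
    "performance attendance and contribution": 0.35,
    "individual testing and evaluation": 0.30,
}

def period_grade(component_averages):
    """Return the weighted average of component scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * score
               for name, score in component_averages.items())

# Example student: weekly rehearsal grades averaging 95, performance grades
# averaging 100, and playing/written test scores averaging 88.
student = {
    "rehearsal attendance and contribution": 95.0,
    "performance attendance and contribution": 100.0,
    "individual testing and evaluation": 88.0,
}
print(f"Grading-period grade: {period_grade(student):.1f}%")  # ~94.7%

Because the weights sum to 1.0, the result stays on the same 0-100 scale as the component grades, and the grade type (letter, number, or combination) can then be assigned according to each school's grading system.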

Implications for Music Education

This study provides a broad view of secondary band directors' current

assessment practices. Many components of the assessment process were investigated

and conclusions were discussed in view of results. These results provide a better

understanding of what is happening, and to a certain degree, why directors make their

assessment choices. The following discussion revolves around what could (or in some

cases should) be taking place concerning student assessment in band classes.

Directors responding to the survey stated that they are interested in finding other ways

to assess their students. The following discussion provides alternatives concerning

student assessment that band directors may have not yet explored.

Purpose

The purposes of student assessment considered most important by directors were

to provide their students and themselves with feedback concerning the instructional

process. This is encouraging and may indicate a shift in emphasis away from

performance-based purposes found in previous research. A logical outcome of this shift

might include increased emphasis on the individual testing and evaluation component of

a student assessment model and decreased emphasis on the performance-based

components (performance attendance and contribution, and rehearsal attendance and

contribution).

Criteria

Directors continue to emphasize criteria centered on performance in their

assessment decisions (i.e. ability to play an instrument in an ensemble, and individual









playing ability on an instrument). Performance remains an important part of the high

school band program. However, in an effort to develop well-rounded musicians, other

criteria should be emphasized in both assessment and curricular decisions. Band

directors should be striving to produce musicians, not just music. Assessment criteria

rated lowest by directors were knowledge of music theory; knowledge of music history;

ability to improvise melodies, variations, and accompaniment; and ability to compose

music. Having musicians with better understandings of these music fundamentals

would only serve to enhance future performances, not detract from them. Directors

should make efforts to find creative ways to incorporate this important musical

knowledge into their rehearsals. Their assessment decisions should reinforce

acquisition of this knowledge.

An example of incorporating music theory, history, improvisation, and

composition into the preparation of a piece of music for performance is the use of

Variations on a Korean Folk Song by John Barnes Chance. The use of the pentatonic

scale in each of the folk song melodies could be the basis of lessons in music theory.

Regarding music history, the use of Korean folk songs could initiate a discussion on the

musical nationalism that flourished in the mid-nineteenth century. Students could

experiment with improvisation by creating melodies using only the notes of

the A-flat pentatonic scale heard at the opening of the piece. Music composition could

be addressed by having students compose their own "folk song" using the pentatonic

scale as the basic material for the melodic and harmonic elements of the piece.
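
For directors who would like a ready-made starting point for this improvisation activity, the short sketch below generates melodies restricted to the A-flat pentatonic collection. It is a minimal illustration under stated assumptions: the one-octave spelling of the collection (A-flat, B-flat, C, E-flat, F) and the rhythm values are choices made for the example, not material drawn from this study.

import random

# One common spelling of the A-flat pentatonic collection (one octave),
# the scale heard at the opening of Variations on a Korean Folk Song.
AB_PENTATONIC = ["Ab", "Bb", "C", "Eb", "F"]

# Illustrative rhythmic values in beats (eighth, quarter, half).
RHYTHMS = [0.5, 1.0, 2.0]

def random_pentatonic_melody(measures=4, beats_per_measure=4):
    """Return (pitch, duration) pairs that exactly fill the measures."""
    melody, remaining = [], measures * beats_per_measure
    while remaining > 0:
        # Only pick durations that still fit in the remaining beats.
        duration = random.choice([r for r in RHYTHMS if r <= remaining])
        melody.append((random.choice(AB_PENTATONIC), duration))
        remaining -= duration
    return melody

for pitch, duration in random_pentatonic_melody():
    print(pitch, duration, "beat(s)")

A student could play the printed melody and then vary it by ear, folding the theory lesson (the pentatonic collection itself) directly into the improvisation exercise.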

The next step in this process would be for the director to assess the content

taught in the various areas. The music theory and history components could easily be











assessed in written form. The improvisation would most likely be assessed through

teacher observation in class while the students complete the activity. The composition

could either be assessed in written form or through a performance of the students'

works by student groups. The most important step in this process is to assess the

information taught to students. This sends a clear message to all involved in the

learning process that this information is important and valued in addition to preparing

the piece for performance. Incorporating the comprehensive musical education of

students serves to enhance and support musical performances.

Preparation

Directors responded that their colleagues were the best source of preparation for

assessing students in their band programs (discussions with colleagues, and clinics at

professional conferences). Directors also reported that their undergraduate and

graduate coursework only moderately prepared them for student assessment, and that

professional organizations and standards (national, state and district) were preparing

them to an even lesser degree. These results indicate a major disconnect among the

current music education curriculum, our professional organizations, and practicing

music educators.

Efforts should be made to better address the topic of student assessment

(especially in ensemble situations) during the training of our future music educators.

Assessment methods should be discussed and models suggested to students before

they are sent into the classroom. In addition, our professional organizations should

continue to develop programs and support research addressing student assessment.

These organizations can play an important role in the direction of student assessment in

the future.









Data Collection

Most directors use in-class performance-based activities when collecting

assessment data on their students. More than 80% of the directors assess their

students while they play individually in class, and play in a group in class. Materials

most used by directors on performance-based tests are scales/rudiments (93.3%) and

band music (92.4%). The survey did not investigate how these performance-based

tests (playing tests) were administered by the directors. How tests are administered

determines their effectiveness. In many instances, playing tests are used as a threat

and are initiated after a certain level of frustration is experienced by the director

because of a lack of student preparation. The following is a suggested method of

incorporating playing tests into a student assessment plan.

Playing tests can be an excellent assessment method for directors. Playing tests

align with the performance objectives of the ensemble, foster individual preparation and

practice, emphasize the individual's responsibility to the ensemble, and hold students

accountable for musical development on their instrument. When used properly, playing

tests save rehearsal time and improve the overall quality of performances. As with

other assessment methods, playing tests should be given on a regular basis (weekly)

and should align with in-class content. The schedule of the specific content of playing

tests should be logical and should emphasize important musical fundamentals as well

as the preparation of major performances.
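
To make the idea of a logical playing-test schedule concrete, here is a small sketch that lays out one nine-week grading period, alternating fundamentals with excerpts from the concert program and reserving the final weeks for performance preparation. The week count, the alternation pattern, and every content label (scale names, excerpt titles) are hypothetical placeholders, not recommendations taken from the survey data.

from datetime import date, timedelta

# Hypothetical materials; substitute the ensemble's actual scales and repertoire.
FUNDAMENTALS = ["Concert B-flat scale", "Chromatic scale", "Concert E-flat scale"]
REPERTOIRE = ["Opener, mm. 1-16", "Overture, mm. 32-48", "Ballad, mm. 1-24"]

def build_schedule(first_test_date, weeks=9):
    """Alternate fundamentals and repertoire; end with performance prep."""
    schedule = []
    for week in range(weeks):
        # Devote the last two weeks entirely to the concert program.
        if week >= weeks - 2 or week % 2 == 1:
            content = REPERTOIRE[week % len(REPERTOIRE)]
        else:
            content = FUNDAMENTALS[week % len(FUNDAMENTALS)]
        schedule.append((first_test_date + timedelta(weeks=week), content))
    return schedule

for test_date, content in build_schedule(date(2010, 8, 23)):
    print(test_date.isoformat(), content)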

In addition to in-class performance-based activities, other data collection

methods should be explored by band directors. Curriculum, instruction, and

assessment in secondary band rooms have remained largely static for the past 50 years. In

view of our current technology, a variety of alternatives are available to today's music









educator to assist in instruction and assessment. Results show that the least-used

procedures for collecting assessment data were Smart MusicTM (13.1%) and computer-

assisted programs (5.1%). Directors must be willing to explore these procedures and

find ways to incorporate them into their programs.

Suggested Model

A suggested model of student assessment should incorporate results of current

research, suggestions and recommendations from professional organizations, and

practical experience gained from the classroom. This model serves as a guide,

remaining flexible to allow band directors the freedom to modify according to their

specific teaching situation.

The three main assessment components incorporate both musical and non-musical

criteria: individual testing and evaluation, performance attendance and

contribution, and rehearsal attendance and contribution. Weight of the components is

adjusted to emphasize testing and evaluation of both performance-based skills and

music knowledge criteria (music theory and music history). Resulting percentages are

individual testing and evaluation = 40%; performance attendance and contribution =

30%; and rehearsal attendance and contribution = 30%.
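
As a worked example of this weighting, the sketch below combines the three component averages into a single course percentage. The sample student's scores are invented for illustration; only the 40/30/30 weights come from the model.

# Suggested model weights (individual testing 40%, performance 30%, rehearsal 30%).
WEIGHTS = {
    "individual_testing_and_evaluation": 0.40,
    "performance_attendance_and_contribution": 0.30,
    "rehearsal_attendance_and_contribution": 0.30,
}

def final_grade(component_averages):
    """Weighted average of the three component scores (each 0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[name] * component_averages[name] for name in WEIGHTS)

# Hypothetical student: weaker on individual tests, strong elsewhere.
print(round(final_grade({
    "individual_testing_and_evaluation": 82.0,
    "performance_attendance_and_contribution": 95.0,
    "rehearsal_attendance_and_contribution": 93.0,
}), 1))  # prints 89.2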

In both the performance attendance and contribution and rehearsal attendance

and contribution components, the category of attitude has been replaced with behavior.

Directors reported that most of their student assessment involves teacher observation.

Behavior can be accurately observed (empirical). Attitude is a hypothetical construct

that cannot be accurately measured through observation, and therefore becomes

difficult to quantify when assessing students and assigning grades.











The remainder of the model is supported by results of this study and aligns with

current student assessment practices of high school band directors (Figure 5-2).

Future Research

Future research in the area of student assessment should attempt to develop a

globally accepted assessment model for use by directors in high school band programs.

Such a model could be incorporated into both the undergraduate and graduate music

education curriculum, giving prospective music educators the knowledge and tools to

effectively assess their students. Additional avenues for future research include the following:

1. A similar study could be administered to exemplary high school band directors
(i.e., directors who have had bands perform at the Midwest Band Clinic) to offer
"expert" information on assessment practices. These results could further justify
the construction of an assessment model.

2. Student assessment studies in other school music genres (i.e., chorus, orchestra)
to determine commonalities and differences in strategies and practice.

3. Student assessment studies in other grade levels (i.e., middle school, college) to
determine commonalities and differences in strategies and practice.

4. Investigation of student assessment from the perspectives of students, parents,
and school administrators.

5. Longitudinal study of student assessment in relation to educational and
informational in-services designed for music educators.

6. Experimental study testing the effect of a specific student assessment model on
student learning and ensemble development.

The results of this study represent an important first step in improving student

assessment in high school bands in that they reveal current assessment methods and

the factors that underlie the reasons those methods are used. These findings stimulate

questions for investigation and discussion about how new student assessment methods

might encourage a more comprehensive curriculum while supporting the goals and

objectives of these performance-based ensembles. Few would dispute the proposition









that the music education profession would benefit from the development of a

comprehensive band assessment model. The findings of this study suggest that such a

model would strengthen band assessment practices, improve the reliability and validity

of student assessment data, and, as a result, positively influence band curriculum,

classroom instruction, and performance preparation. As a profession, we are obligated

to continue this work as we endeavor to attain one of our most important goals: the

improvement of student music learning.







Table 5-1. Assessment purposes including category (N = 454)
Purpose Category M
To provide feedback to students I 4.63
To determine what concepts students are failing to understand I 4.45
To determine what concepts students are understanding I 4.41
To determine whether instruction has been successful I 4.33
To demonstrate student accountability for learning I 4.27
To determine future instructional direction I 4.26
To identify individual student abilities P 4.23
To set or maintain class standards E 4.11
To provide feedback to parents E 4.05
To help students prepare for public performance P 4.00
To determine the level of musical preparedness for public performance P 3
To establish or maintain credibility for the band program E 3.85
To identify general class abilities P 3.81
To motivate students to practice their instruments E 3.75
To determine whether students are practicing at home E 3.37
To rank students according to individual performance levels P 2.81
I = Instructional purpose, P = Performance purpose, E = External purpose











Table 5-2. Assessment criteria including categories and national standard (N = 454)
Criteria Standard Category M
Ability to play an instrument in an ensemble 1 P 4.26
Individual playing ability on an instrument 2 P 4.16
Ability to evaluate music and music performances 7 CT 3.92
Ability to listen to, analyze, and describe music 6 CT 3.54
Ability to understand the relationships between music, the other arts, and disciplines outside the arts 8 CT 3.15
Ability to understand music in relation to history and culture 9 CT 3.13
Knowledge of music theory K 3.05
Knowledge of music history K 2.51
Ability to improvise melodies, variations, and accompaniment 3 C 2.32
Ability to compose music 4 C 2.02
P = Performance, CT = Critical Thinking, K = Knowledge, C = Creative




Table 5-3. Factors influencing assessment methods including categories (N = 454)
Factor Category M
Your personal philosophy of education P 4.38
The objectives or goals of your class P 4.31
The amount of available class time CT 4.20
The demands of the ensemble's performance schedule CT 3.90
The expectations of your students E 3.53
The number of students enrolled in the class CT 3.36
Available equipment (computers, recording) L 3.29
Professional development you have participated in T 3.11
Influence from your music colleagues E 3.11
Your undergraduate coursework T 3.07
The expectations of your students' parents E 3.04
The expectation of your school principal E 3.02
Your graduate coursework T 2.94
Influence from professional organizations E 2.80
Requirements set by the school district E 2.73
The assessment method implemented in the high school band program you attended T 2.37
P = Personal Philosophy, CT = Class Time, L = Logistics, T = Training, E = External











Table 5-4. Preparation methods including category (N = 454)
Preparation option Category M
Discussions with colleagues C 3.98
Clinics at professional conference C 3.64
Graduate coursework T 3.07
Professional organizations E 3.06
Undergraduate coursework T 3.04
National standards E 2.83
State or district standards E 2.72
Teacher in-service sessions T 2.27
C = Colleagues, T= Training, E = External methods


Table 5-5. Assessment components usage (N = 454)
Assessment component Response % (Count)
Participation 95.6 (433)
Performances 92.1 (417)
Performance-based tests 88.7 (402)
Attendance 77.7 (352)
Conduct/discipline 69.8 (316)
Written tests/worksheets 58.1 (263)
Attitude 54.1 (245)
Extra credit (lessons, concert attendance) 47.7 (216)
Practice log/journal 28.0 (127)
Sight-reading tests 24.5 (111)
Student self-assessment 22.5 (102)
Smart MusicTM 12.8 (58)
Requirement checklists 12.8 (58)
Computer-assisted programs 9.9 (45)
Peer assessment 9.5 (43)
Portfolios 7.5 (34)









Table 5-6. Data collection procedures including category (N = 454)
Procedure Category Response % (Count)
Teacher observation C 88.0 (397)
Students play individually in class C 85.4 (385)
Students play in a group in class C 80.0 (361)
Short answer test or assignment T 39.0 (176)
Students record themselves playing individually O 33.0 (149)
Student self-assessment O 31.5 (142)
Multiple choice test or assignment T 28.4 (128)
Practice log or record O 26.2 (118)
Essay question test or assignment T 25.1 (113)
Students record themselves playing in a group O 13.1 (59)
Smart MusicTM O 13.1 (59)
Computer-assisted program O 5.1 (23)
C = Classroom Method, O = Outside of the classroom method, T = Test or Assignment





Table 5-7. Weighted component results compared to pilot study results (N = 454)
Assessment component Current Study % Pilot Study %
Rehearsal attendance and contribution 35 33
Performance attendance and contribution 35 33
Individual testing and evaluation 30 34




Table 5-8. Student assessment model: Stage one
Assessment Component Weight %
Rehearsal attendance and contribution 35
Performance attendance and contribution 35
Individual testing and evaluation 30











Table 5-9. Student assessment model: Stage two

Rehearsal attendance and contribution (35%): The attendance of each member of an
ensemble at all rehearsals is critical to the success of the ensemble. Attendance will be
graded in terms of present, excused absence, or unexcused absence (the latter also
covering tardiness and early dismissals). Contribution (as graded through teacher
observation) reflects how a student fulfills individual responsibilities to the ensemble.
Contribution includes the following areas: conduct, attitude, musical preparation, and
materials (instrument, music, accessories, etc.). Students will receive a weekly grade
for rehearsal attendance and contribution.

Performance attendance and contribution (35%): The attendance of each member of
the ensemble at all performances is critical to the success of the ensemble. Attendance
will be graded in terms of present, excused absence, or unexcused absence (the latter
also covering tardiness and early dismissals). Contribution (as graded through teacher
observation) reflects how a student fulfills individual responsibilities to the ensemble.
Contribution includes the following areas: conduct, attitude, musical preparation,
musical execution, and materials (instrument, music, accessories, etc.). Students will
receive a grade per performance.

Individual testing and evaluation (30%): Students will participate in a performance-
based assessment (playing test) each week. The material for the assessments will
include scales/rudiments and band music. Students will also have four written tests
(one per grading period) addressing basic music knowledge (including applicable music
theory and history).
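
To illustrate how the attendance statuses and teacher-observed contribution areas in Table 5-9 might translate into the weekly rehearsal grade, the sketch below combines them. The point values and the equal split between attendance and contribution are assumptions made for the example, since the model leaves those details to the individual director.

# Attendance statuses from the model; tardiness and early dismissal
# count as unexcused. Point values are illustrative assumptions.
ATTENDANCE_POINTS = {"present": 100, "excused absence": 100, "unexcused absence": 0}

# Contribution areas graded through teacher observation.
CONTRIBUTION_AREAS = ("conduct", "attitude", "musical preparation", "materials")

def weekly_rehearsal_grade(status, contribution):
    """Average attendance points with the mean of the contribution scores (0-100 each)."""
    observed = sum(contribution[area] for area in CONTRIBUTION_AREAS) / len(CONTRIBUTION_AREAS)
    return (ATTENDANCE_POINTS[status] + observed) / 2

print(weekly_rehearsal_grade("present", {
    "conduct": 100, "attitude": 90, "musical preparation": 85, "materials": 100,
}))  # prints 96.875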


























Figure 5-1. Current student assessment practices
[Figure: diagram of the director-reported model showing rehearsal attendance and
contribution (35%) with the contribution areas conduct, attitude, musical preparation,
and materials; performance attendance and contribution; and individual testing and
evaluation.]

Figure 5-2. Student Assessment Model
[Figure: diagram of the suggested model showing individual testing and evaluation
(40%), with playing-test materials including band music.]








APPENDIX A
QUESTIONNAIRE

Student Assessment Practices of High School Band Directors

Background Information

Please provide the following background information concerning your school, band

program, and teaching experience.

1. Type of school:

Public Private Charter Other

2. Number of students (9th-12th grade) enrolled in your high school:

1-500   501-1000   1001-1500   1501-2000   2001 or more

3. In what type of community is the school located?

Urban/City (high population) Suburban (associated with a larger city)

Town (moderate population) Rural/Remote (low population)

4. What is the socio-economic status of the community?

Low Low/Middle Middle Middle/High High

5. Total number of students involved in the band program:

1-50   51-100   101-150   151-200   201 or more

6. Number of students involved in concert band(s):

1-50   51-100   101-150   151-200   201 or more

7. Number of concert bands at your school:

1   2   3   4   5 or more

8. Average number of students in each concert band:

1-15   16-30   31-45   46-60   61 or more









9. Total number of minutes each concert band meets PER WEEK:



10. Years of experience teaching high school band (including this year):

1-4   5-8   9-12   13-16   17 or more

11. Years teaching in your current position (including this year):

1-4   5-8   9-12   13-16   17 or more

12. Highest degree earned in music/music education:

Associate's   Bachelor's   Master's   Doctorate   Post-doctoral

13. Number of band directors employed at your school:

1 1.5 2 2.5 3 or more

Grading Information

Please provide the following information concerning your student grading process.

14. How many marking periods does your school have per year?



15. How many weeks long is a typical marking period?



16. What type of grade do you assign at the end of a grading period?

No grades assigned Pass/Fail Letter Grades Number Grades

Written Comments Combination of types (please explain)

17. Does the grade given in your band ensemble class affect the student's

overall GPA?

Yes No Other (please explain)











18. Is there a weighted grading system being used in your school (higher level

classes assigned more value in the student's overall grade point average)?

Yes No

19. Is there a weighted option in the grade given in your band ensemble class?

Yes No If yes, please explain

Assessment Philosophy

Please provide your opinion to the following philosophical questions concerning

student assessment.

20. How important are the following purposes of student assessment?

(1 = not important ... 5 = extremely important)

a) to provide feedback to students ............ ......... ..... ........ 1 2 3 4 5
b) to provide feedback to parents ...... .......... ............ ............. 1 2 3 4 5
c) to identify individual student abilities .................. ..... .............. 1 2 3 4 5
d) to identify general class abilities ............ ............. ................ 1 2 3 4 5
e) to determine whether instruction has been successful ................ 1 2 3 4 5
f) to determine what concepts students are understanding .............. 1 2 3 4 5
g) to determine what concepts students are failing to understand ...... 1 2 3 4 5
h) to determine future instructional direction .................. .......... 1 2 3 4 5
i) to demonstrate student accountability for learning .............. ..... 1 2 3 4 5
j) to establish or maintain credibility for the band program ............. 1 2 3 4 5
k) to determine the level of musical preparedness for performance ... 1 2 3 4 5
l) to help students prepare for performance ........... ... ........... 1 2 3 4 5
m) to determine whether students are practicing at home ............... 1 2 3 4 5
n) to motivate students to practice their instruments ............. ..... 1 2 3 4 5
o) to set or maintain class standards ...... ..... .. ......... ......... 1 2 3 4 5
p) to rank students according to individual performance levels ......... 1 2 3 4 5









21. What importance do you place on the following criteria in the evaluation of

your band students?

(1 = not important ... 5 = extremely important)

a) individual playing ability on an instrument ............. ........... 1 2 3 4 5
b) ability to play an instrument in an ensemble............... ......... 1 2 3 4 5
c) knowledge of music history ............... ............... ......... 1 2 3 4 5
d) knowledge of music theory...... ..... .... ........... ...................... 1 2 3 4 5
e) ability to improvise melodies, variations, and accompaniment...... 1 2 3 4 5
f) ability to compose m usic.................................................... 1 2 3 4 5
g) ability to listen to, analyze, and describe music ................... 1 2 3 4 5
h) ability to evaluate music and music performances ..................... 1 2 3 4 5
i) ability to understand the relationships between music, the other arts,
and disciplines outside the arts .............. ....... ............. 1 2 3 4 5
j) ability to understand music in relation to history and culture.......... 1 2 3 4 5


22. How important are the following assessment categories in a student

assessment model for bands?

(1 = not important ... 5 = extremely important)

a) Placement assessments (i.e., auditions, challenges) ................. 1 2 3 4 5
b) Summative assessments (i.e., concerts, festivals, recitals) ............ 1 2 3 4 5
c) Diagnostic assessment (i.e., error detection) ............... .......... 1 2 3 4 5
d) Formative assessment (i.e., playing tests) .......... ............... 1 2 3 4 5


23. How influential are the following factors on the assessment methods

you use?

(1 = not influential ... 5 = extremely influential)

a) your personal philosophy of education .......... ................ 1 2 3 4 5
b) the amount of available class time ........... ........... ........ 1 2 3 4 5
c) the objectives or goals of your class ....... ......... ....... ........ 1 2 3 4 5
d) the demands of your ensemble's performance schedule ............ 1 2 3 4 5
e) the number of students enrolled in the class ................ .......... 1 2 3 4 5
f) professional development you have participated in .................... 1 2 3 4 5











g) influence from music colleagues ............... ......................... 1 2 3 4 5
h) influence from professional organization ............. .... ......... 1 2 3 4 5
i) requirements set by the school district .... ...... ................... 1 2 3 4 5
j) the expectations of your students ............... ............. ............. 1 2 3 4 5
k) the expectations of your students' parents ........................... 1 2 3 4 5
l) the expectation of your school principal ....... ........ ............. 1 2 3 4 5
m) available equipment (computers, recording) ............... ......... 1 2 3 4 5
n) your undergraduate coursework ............. ............ ......... 1 2 3 4 5
o) your graduate coursework ................... ..... ................. 1 2 3 4 5
p) modeled after the high school program you attended ................. 1 2 3 4 5


24. How do you feel the following have prepared or are preparing you to assess

the students in your band program?

(1 = not well prepared ... 5 = very well prepared)

a) undergraduate coursework .......... ............................... 1 2 3 4 5
b) graduate coursework .... ... ....... ....... ........ 1 2 3 4 5
c) national standards .............................. ..................... 1 2 3 4 5
d) state or district standards ........... .. ......... ...... .... ........... 1 2 3 4 5
e) teacher in-service sessions .................. ...... ...... ......... 1 2 3 4 5
f) clinics at professional conference ..................................... 1 2 3 4 5
g) discussions with colleagues ............ ....... .... .... ......... .. 1 2 3 4 5
h) professional organizations ...... ..... ............ ......... 1 2 3 4 5


Assessment Information

Please provide information concerning assessing the students in your largest

concert band.

25. Which of the following assessment components do you use to determine a

student's grade (select all that apply)?

a) attitude
b) attendance
c) computer-assisted programs
d) conduct/discipline
e) extra credit (lessons, concert attendance)











f) participation
g) peer assessment
h) performance-based (playing) tests
i) performances
j) portfolios
k) practice log/journal
l) requirement checklists (scales, exercises)
m) sight-reading tests
n) Smart MusicTM
o) student self-assessment
p) written tests/worksheets


26. Please enter the percentage of each component you use to determine your

grades. Leave unused components blank. Be sure that the total adds up to

100%.

a) attitude grades %
b) attendance grades %
c) computer programs grades %
d) conduct/discipline %
e) extra credit %
f) participation %
g) peer assessment %
h) performance-based tests %
i) performances %
j) portfolios %
k) practice log/journal %
l) requirement checklists %
m) sight-reading tests %
n) Smart MusicTM %
o) student self-assessment %
p) written tests/worksheets %









27. Which of the following procedures for data collection do you use when

assessing your students (select all that apply)?

a) students play individually in class
b) students play in a group in class
c) students record themselves playing individually
d) students record themselves playing in a group
e) multiple choice test or assignment
f) short answer test or assignment
g) essay question test or assignment
h) computer-assisted program
i) Smart MusicTM
j) practice log or record
k) teacher observation
l) student self-assessment
m) other (fill-in)


28. If you use performance-based tests when assessing students, what materials

do you utilize (select all that apply)?

a) scales / rudiments
b) band music
c) sight-reading
d) all-state / district / county / or honor band audition music
e) method book exercises
f) etudes
g) solo literature
i) do not use performance-based tests


29. The following is a list of characteristics that have been traditionally used in

assessment models of band students. Rate the importance of these

characteristics in your assessment design.

(1 = not important ... 5 = extremely important)

a) assists in the preparation of music for performances .................. 1 2 3 4 5
b) includes a variety of assessment components ........................ 1 2 3 4 5


c) aligns with instruction .............................................. 1 2 3 4 5
d) is understood by all parties involved (i.e., students, parents) ...... 1 2 3 4 5
e) includes appropriate grading rubrics ................................. 1 2 3 4 5
f) assists in motivating students to learn and develop .................. 1 2 3 4 5
g) reflects the music skills and knowledge that are most important for
students to learn ....................................................... 1 2 3 4 5
h) supports, enhances, and reinforces learning .......................... 1 2 3 4 5
i) is reliable and valid ................................................ 1 2 3 4 5
j) requires a student to demonstrate a music behavior in an authentic or
realistic situation ..................................................... 1 2 3 4 5
k) is open to review by interested parties .............................. 1 2 3 4 5
l) includes regularly scheduled assessment opportunities ................ 1 2 3 4 5
m) includes both musical and non-musical components ..................... 1 2 3 4 5
n) is time efficient .................................................... 1 2 3 4 5
o) is relatively easy to administer and maintain ........................ 1 2 3 4 5


30. Rate your agreement level with the following statements concerning

assessment.


(1 = strongly disagree ... 5 = strongly agree)


a) I am satisfied with my current band assessment practices ............. 1 2 3 4 5
b) My students are satisfied with the current band assessment
practices ............................................................... 1 2 3 4 5
c) My students' parents are satisfied with the current band
assessment practices .................................................... 1 2 3 4 5
d) My school administrators are satisfied with the current band
assessment practices .................................................... 1 2 3 4 5
e) My assessment practices are good enough to ensure quality
instruction ............................................................. 1 2 3 4 5
f) My assessment practices foster the individual musical development
of my students .......................................................... 1 2 3 4 5
g) My assessment and grading practices are similar to those of most
of the band directors I know ............................................ 1 2 3 4 5
h) I would be interested in finding other ways to assess my students .... 1 2 3 4 5











Assessment Model


Please answer the following question concerning an assessment model.

31. Using the following three assessment components, (a) individual testing and

evaluation, (b) performance attendance and contribution, and (c) rehearsal

attendance and contribution, assign percentages (totaling 100%) to create

what you believe to be a balanced assessment tool for band students:


(a) individual testing and evaluation

(b) performance attendance and contribution

(c) rehearsal attendance and contribution


Thank you for your participation in this questionnaire.











APPENDIX B
PILOT STUDY RESULTS

Survey results are presented in three categories: 1) Demographic Information, 2)

Grading Information, and 3) Assessment Information.

Demographic Information

The school enrollment of the 45 schools showed representation in each of the

five population categories. The results were as follows: 1 to 500 students = 31.1% (14

schools); 501 to 1,000 students = 13.3% (6 schools); 1,001 to 1,500 students = 28.9%

(13 schools); 1,501 to 2,000 students = 20.0% (9 Schools); and 2,001 or more students

= 6.7% (3 schools). While the community-type results showed representation in each

category, over half of the schools reported Suburban = 53.3% (24 schools). The

remaining results were Small Town = 22.2% (10 schools); Rural/Remote = 15.6% (7

schools); and Urban/Inner City = 11.1% (5 schools). Finally, the socio-economic status

of the community also showed representation in each category: Low = 13.3% (6

schools); Low/Middle = 35.6% (16 schools); Middle = 15.6% (7 schools); Middle/High=

28.9% (13 schools); and High = 11.1% (5 schools).

The size of the band programs participating in the study indicated a wide variety

with almost half consisting of between 101 and 150 students. The results were as

follows: 1 to 50 students = 20.0% (9 schools); 51 to 100 students = 20% (9 schools);

101 to 150 students = 46.7% (21 schools); 151 to 200 students = 13.3% (6 schools);

and no schools reported 201 or more students enrolled in band ensemble classes.

The total number of ensemble classes being taught in each band program ranged from

1 to 5 (or more). Those results showed: 1 ensemble = 13.3% (6 schools); 2 ensembles

= 20.0% (9 schools); 3 ensembles = 31.1% (14 schools); 4 ensembles = 15.6% (7









schools); and 5 or more ensembles = 22.2% (10 schools). The number of concert

ensembles being taught showed similar results: 1 concert ensemble = 20.0% (9

schools); 2 concert ensembles = 26.7% (12 schools); 3 concert ensembles = 26.7% (12

schools); 4 concert ensembles = 4.4% (2 schools); and 5 or more concert ensembles =

22.2% (10 schools). Finally, the average number of students in each concert ensemble

results showed almost half ranging from 46 to 60 students (42.2%, 19 schools). The

remaining results showed: 1 to 15 students at 4.4% (2 schools); 16 to 30 students at

8.9% (4 schools); 31 to 45 students at 22.2% (10 schools); and 61 or more students at

22.2% (10 schools).

Participants revealed that almost half have been teaching 17 or more years

(42.2%, 19 directors). Remaining results: 1 to 4 years = 20.0% (9 directors); 5 to 8 years

= 15.6% (7 directors); 9 to 12 years = 8.9% (4 directors); and 13 to 16 years = 13.3% (6

directors). The years teaching in their current positions showed a very balanced result:

1 to 4 years = 28.9% (13 directors); 5 to 8 years = 22.2% (10 directors); 9 to 12 years =

20.0% (9 directors); 13 to 16 years = 8.9% (4 directors), and 17 or more years = 20.0%

(9 directors). The results showed that all directors have received at least a bachelor's

degree (48.9%) with more than half possessing a master's degree (66.7%). None of the

respondents reported associate's, doctorate, or post-doctoral degrees. Finally, the

number of directors employed at each school showed that almost half of the directors

manage the band program alone (44.4%, 20 schools). Other results: 1.5 directors =

4.4% (2 schools); 2 directors = 24.4% (11 schools); 2.5 directors = 2.2% (1 school); and

3 or more directors = 24.4% (11 schools).









Grading Information

The directors reported a variety of grading periods, from 2 semesters to 12

grading periods. However, the majority of the directors indicated 4 grading periods

(quarter system). The length of the grading period also varied, from 4.5 weeks to 18

weeks. Here the majority of the directors indicated that 9 weeks was the typical length

of a grading period. A large majority of the directors indicated that they assign a letter

grade (80.0%, 36 schools), with the remainder assigning number grades

(20.0%, 9 schools) at the end of a grading period. One hundred percent of the directors

responded that the grade given in the band ensemble class affects the student's overall

GPA (grade point average).

Finally, while 71.1% (32 schools) reported that their school uses a weighted

grading system, only 17.8% (8 schools) reported that the band ensemble class grade is

weighted. Responses of the 8 directors showed how their band ensemble class grade is

weighted in their grading system: (a) Students may enroll as juniors and seniors Band III

and IV Honors. These are weighted courses with respect to overall GPA; (b) Band taken

for honors credit during the student's junior and senior year carries more weight; (c)

Students 10th grade and above can contract to earn a weighted grade by fulfilling a

number of additional achievements beyond the class period; (d) An upperclassman,

under certain rare circumstances, can get honors credit for top ensemble participation.

This mainly hinges on meeting certain performance criteria (all-district or all-state band,

one rating at festival, etc.); (e) The grade is a combination of performance activities and

learning activities; (f) Entry level classes are 1.0. Performing ensembles are weighted

1.2 to 1.6. Students can get .2 higher for participation in all-district band in the fall term

and district solo and ensemble contest in the spring; (g) Band is automatically weighted











to equal any other college-prep class grade; (h) Honors Credit available for grades 10-

12. Students must perform a jury and write a research paper.

Assessment Information

Table B-1 shows the percentages and counts of the directors' use of selected

assessment components to determine a student's grade in their ensemble classes. The

two most prevalent components used by directors were participation (95.6%) and

performances (95.6%). The two least prevalent were portfolios (6.7%) and peer

assessment (6.7%). Additional assessment components offered by directors include

rhythm dictation, extra credit (i.e., lessons, outside groups), and credit for attending

concerts and recitals (ones in which the student is not performing).

Table B-1. Directors' use of assessment components (N = 45)
Assessment component Response % (Count)
Participation 95.6 (43)
Performances 95.6 (43)
Performance-based tests 91.1 (41)
Attendance 82.2 (37)
Conduct/discipline 66.7 (30)
Written tests/worksheets 57.8 (26)
Attitude 55.6 (25)
Practice log/journal 31.1 (14)
Sight-reading tests 15.6 (7)
Computer-assisted programs 13.3 (6)
Smart MusicTM 11.1 (5)
Student self-assessment 11.1 (5)
Requirement checklists 8.9 (4)
Peer assessment 6.7 (3)
Portfolios 6.7 (3)

Table B-2 shows how frequently each assessment component is used by the

director in determining the student's grade in their ensemble class. Participation

(84.4%), attendance (75.6%), and conduct/discipline (60.0%) are used weekly by many

directors. Sight-reading tests, computer-assisted programs, Smart MusicTM, student









self-assessment, requirement checklists, peer assessments, and portfolios are rarely

used by directors.

Table B-2. Frequency of use of assessment components (N = 45)
Assessment component Weekly % Monthly % Grading period % Semester % Never %
Participation 84.4 2.2 13.3 0.0 0.0
Performances 4.4 24.4 55.6 11.1 4.4
Performance-based Tests 35.6 37.8 11.1 8.9 6.7
Attendance 75.6 4.4 11.1 0.0 8.9
Conduct/Discipline 60.0 0.0 17.8 2.2 20.0
Written tests/Worksheets 8.9 20.0 17.8 13.3 40.0
Attitude 48.9 0.0 13.3 2.2 35.6
Practice Log/Journal 20.0 6.7 11.1 0.0 62.2
Sight-reading Tests 2.2 2.2 8.9 8.9 77.8
Computer-assisted Programs 0.0 4.4 6.7 6.7 82.2
Smart MusicTM 0.0 6.7 0.0 6.7 86.7
Student Self-assessment 0.0 4.4 6.7 8.9 80.0
Requirement Checklists 4.4 2.2 6.7 6.7 80.0
Peer Assessment 2.2 2.2 8.9 2.2 84.4
Portfolios 0.0 2.2 2.2 6.7 88.9

Respondents were presented with a list of sixteen factors that might influence their

choice of assessment methods. Participants rated the level of influence using a 5-point

Likert-type scale ranging from 1 (not at all influenced) to 5 (extremely influenced). The

directors' personal philosophy of education (M = 4.44) and the amount of available class

time (M = 4.04) had a high degree of influence on the choice of assessment method.

Requirements set by the school district (M = 2.40), influence from a professional

organization (M = 2.16), and modeling after the high school program they attended (M=

2.13) had a low degree of influence. Also influential in determining assessment methods

were the objective or goals of the class (M = 3.76), the demands of the ensemble's

performance schedule (M = 3.76), and the number of students enrolled in the class (M=

3.38) (Table B-3).











Table B-3. Factors that influence assessment methods (N = 45)
Factor Rating mean
Personal philosophy of education 4.44
Amount of available class time 4.04
Objectives or goals of the class 3.98
Demands of the ensemble's performance schedule 3.76
Number of students enrolled in the class 3.38
Influence from your music colleagues 3.00
Professional development 3.00
Expectation of the students 2.98
Modeled after a colleague's program 2.82
Graduate coursework 2.58
Expectation of your school principal 2.56
Undergraduate coursework 2.53
Expectation of the students' parents 2.49
Requirements set by the school district 2.40
Influence from a professional organization 2.16
Modeled after the high school program you attended 2.13


Participants were also presented with a list of sixteen possible purposes for

assessing students in band. The directors rated the importance of these purposes using

a 5-point Likert-type scale ranging from 1 (not at all important) to 5 (extremely

important). Directors responded that providing feedback to students (M = 4.49) and

identifying student needs (M = 4.38) were among the most important purposes. To

determine whether students were practicing at home (M = 3.47), to establish or maintain

credibility for the band program (M = 3.47), and to rank students according to individual

performing levels (M = 3.27) were the least important purposes of student assessment

(Table B-4).











Table B-4. Purposes of student assessment (N = 45)
Purpose Rating mean
To provide feedback to students 4.49
To identify individual student needs 4.38
To determine future instructional direction 4.29
To identify general class needs 4.20
To demonstrate student accountability for learning 4.18
To determine what concepts students are failing to understand 4.18
To determine whether instruction has been successful 4.16
To motivate students to practice their instruments 4.11
To set or maintain class standards 4.04
To determine the level of musical preparedness for public performance 4.00
To help students prepare for public performance 3.93
To provide feedback to parents 3.60
To establish or maintain credibility for the band program 3.47
To determine whether students are practicing at home 3.47
To rank students according to individual performance levels 3.27


Respondents were asked to provide an estimate of the weight that a set of 15

components had on their band grade calculations. For this question, the directors used

a 5-point Likert-type scale with the following assignments: 1 = 0%, 2 = 1 to 25%, 3 = 26

to 50%, 4 = 51 to 75%, and 5 = 76 to 100%. Performances (M = 3.09), performance-

based tests (M = 2.95), and participation (M = 2.91) ranked very high in the percentages

assigned by the directors. Peer assessment (M = 1.29), student self-assessment (M=

1.26), and portfolios (M = 1.25) were among the lowest ranked components (Table B-5).

Table B-6 shows results of the final survey question, which asked directors to

assign percentages of weight that overall components should have in a balanced

assessment protocol for band. The directors assigned the following mean percentages

to these components: (a) individual testing and evaluation 34.72%, (b) performance

attendance and contribution 32.79%, and (c) rehearsal attendance and contribution -

31.79%. The mean results suggest that the directors believe an optimal assessment

protocol should use a nearly equal percentage of the three components.









Table B-5. Percentage use of assessment components (N = 45)
Component Rating mean
Performances 3.09
Performance-based tests 2.95
Participation 2.91
Attendance grades 2.56
Conduct/discipline 2.21
Written tests/worksheets 2.00
Attitude grades 1.97
Practice log/journals 1.68
Sight-reading tests 1.45
Smart MusicTM 1.34
Requirement checklists 1.31
Computer program grades 1.30
Peer assessment 1.29
Student self-assessment 1.26
Portfolios 1.25


Table B-6. Model assessment components' percentages (N = 44)
Assessment Component Mean Percentage
Individual testing and evaluation 34.72
Performance attendance and contribution 32.79
Rehearsal attendance and contribution 31.79











LIST OF REFERENCES


Abeles, H.F. (1992). A guide to interpreting research in music education. In R. Colwell,
Handbook of research on music teaching and learning (pp. 227-243). New York,
New York: Schirmer Books.

Abeles H.F., Hoffer C.R., & Klotman R.H. (1995). Foundations of music education (2nd
ed.). Belmont, California: Thomson Schirmer.

Alreck, P.L., Settle R.B. (2004). The survey research handbook (3rd ed.). Boston:
McGraw-Hi Il/Irwin.

Antmann, M. D. (2007). Assessment and grading in the beginning band classroom,
PhD. Dissertation. Florida State University, Tallahassee, Florida, United States.
Retrieved February 12, 2008, from ProQuest Digital Dissertations database.

Asmus, E. P. (1999). Music assessment concepts. Music Educators Joural, 86 (2), 19-
24.

Bessom, M. E., Tatarunis A.M., & Forcucci S.L. (1980). Teaching music in today's
secondary schools: A creative approach to contemporary music education (2
ed.). New York, New York: Holt, Reinhart, and Winston.

Bloom, B.S., Hastings, J.T., and Madaus, G.F. (1971). Handbook of formative and
summative evaluation of student learning. New York, New York: McGraw-Hill.

Bowman, S.E., Calahan, G.L., Colwell, R., Drummond, R., Dubash, F., Formo, P.,
Pucciani, L., Srupak, R.T., & Hickey, W. (1984). Point of view: Grading
performance groups. Music Educators Joural, 70 (7), 59-62.

Boyle, J.D., Radocy, R.E. (1987). Measurement and evaluation of music experiences.
New York, New York: Schirmer Books.

Bradford, C.B. (2003). Sound assessment practices in the standards-based choral
curriculum. Choral Journal, 43 (9), 21-27.

Branum, K., Fusco, L., Haag, R., Richmond, F., & Russo, M.D. (1988). Idea bank:
Evaluating music students. Music Educators Journal, 75 (2), 38-41.

Brookhart, S. M. (1993). Teacher's grading practices: Meanings and values. Journal of
Educational Measurement, Vol 30 (No. 2), 123 & 139.

Brophy, T. S. (2000). Assessing the developing child musician. Chicago, Illinois: GIA
Publications.

Burrack, F. (2002). Enhanced assessment in instrumental programs. Music Educators
Journal, 88 (No. 6), 27-32.


129









Chapman, G. H. (2004). College admissions: The effect of application factors and the
quality of applicants. PhD. Dissertation. Syracuse University, New York, United
States. Retrieved November 12, 2007, from ProQuest Digital Dissertations
database. (Publication No. AAT 3149043).: Syracuse University.

Chiodo, P. (2001). Assessing a cast of thousands. Music Educators Journal, 87 (No. 6),
17-23.

Cole, D.J., Ryan, C.W., & Kick, F. (1995). Portfolios across the curriculum. Thousand
Oaks, California: Corwin Press.

Colwell, R. (1982). Evaluation in music education: Perspicacious or peregrine. A
Symposium in Music Education (p. 158). Urbana: The University of Illinois.

Cope, C. O. (1996). Steps toward effective assessment. Music Educators Journal, 83
(1), 39-42.

Council of Chief State School Officers. (2009). Arts education assessment consortium.
Retrieved December 8, 2009, from Council of Chief State School Officers:
http://www.ccsso.org/projects/SCASS/Projects/Arts_Education_Assessment_Co
nsortium/

Creswell, J. W. (2002). Educational research: Planning, conducting, and evaluating
quantitative and qualitative research. Upper Saddle River, New Jersey: Merrill
Prentice Hall.

Dewey, J. (1916). Democracy and education. New York: The Macmillan Company.

Dewey, J. (1910). How we think. Lexington: DC Heath and Company.

Diehl, D. (2007). Factors related to the integration of the national standards into the
secondary school wind band. Ph.D. dissertation. Ball State University, Muncie,
Indiana, United States. Retrieved November 17, 2009, from ProQuest Digital
Dissertations database. (Publication No. UMI 3255053).

Dirth, K. A. (2000). Implementing portfolio assessment in the music performance
classroom. Ed.D. dissertation. Columbia University Teachers College, New York,
New York, United States. Retrieved November 26, 2007, from ProQuest Digital
Dissertations database. (Publication No. AAT 9976711).

Drake, A. (1984). A survey of music performing group grading practices. Bulletin of the
Council for Research in Music Education, 78, 33-37.


130









Eyerman, G. C. (2002). Changes in high school curricular offerings before and after the
implementation of the Florida Comprehensive Assessment Test (FCAT). Ed.D.
dissertation. Florida Atlantic University, Boca Raton, Florida, United States.
Retrieved November 19, 2007, from ProQuest Digital Dissertations database.
(Publication No. AAT 3055359).

Farrell, S. R. (1997). Tools forpoverful student evaluation (2nd ed.). Milwaukee,
Wisconsin: Meredith Music Publications.

Goolsby, T. W. (1999). Assessment in instrumental music. Music Educator Journal, 86
(2), 31-35, 50.

Hanzlik, T. J. (2001). An examination of Iowa high school instrumental band directors'
assessment practices and attitudes toward assessment. Ed.D. dissertation. The
University of Nebraska Lincoln, Nebraska, United States. Retrieved October 25,
2007, from ProQuest Digital Dissertations database. (Publication No. AAT
3009721).

Hart, D. (1994). Authentic assessment: A handbook for educators. Menlo Park,
California: Addison-Wesley Publishing.

Harvard Graduate School of Education. (2010). Project zero. Retrieved January 13,
2010, from Arts PROPEL: http://pzweb.harvard.edu/research/propel.htm

Heritage School Band. (2008). Concert band grading policy. Retrieved November 30,
2009, from Heritage School Band:
http://www.heritagebands.org/concert_band_grading_policy.htm

Hill, K. W. (1999). A descriptive study of assessment procedures, assessment attitudes,
and grading policies in selected public high school band performance classrooms
in Mississippi. Mus.Ed.D. dissertation. The University of Southern Mississippi,
Hattiesburg, Mississippi, United States. Retrieved February 1, 2008, from
ProQuest Digital Dissertations database. (Publication No. AAT 9935693).

Hoffer, C. R. (2000). Teaching music in the secondary schools (5th ed.). Belmont,
California: Wadsworth Publishing Company, Inc.

Kancianic, P. M. (2006). Classroom assessment in U.S. high school band programs:
Methods, purposes, and influences. Ph.D. dissertation. University of Maryland,
Maryland, United States. Retrieved October 25, 2007, from ProQuest Digital
Dissertations database. (Publication No. AAT 3222315).









Kotora, E. J. (2001). Assessment practices in the choral music classroom: A survey of
Ohio high school choral music teachers and college choral methods teachers.
Ph.D. dissertation. Case Western Reserve University, Ohio, United States.
Retrieved December 5, 2009, from Dissertations & Theses (Publication No. AAT
3036343).

LaCognata, J.P. (2010). Student assessment in the high school band ensemble class.
In T. Brophy (Ed.), The practice of assessment in music education: Frame vorks,
models, and designs, Proceedings of the 2009 Florida Symposium on
Assessment in Music Education, Gainesville, Florida, April 15-17, 2009 (pp. 227-
236).

Lake Highlands Area Band Club. (2008). Grading in the Lake Highlands band program.
Retrieved January 18, 2010, from Lake Highlands Bands:
http://www. lakehighlandsbands.org/Default.aspx?tabid=161

Lehman, P. R. (1997). Assessment and grading. Teaching Music, 5 (3), 58-59.

Linn, R., Miller, M.D. (2005). Measurement and assessment in teaching (9th ed.). Upper
Saddle River, New Jersey: Pearson Merrill Prentice Hall.

Locke, L.F., Spirduso W.W., & Silverman S.J. (2000). Proposals that work: A guide for
planning dissertations and grant proposals. (4th ed.). Thousand Oaks: California.

Lopez, S. M. (2006). Effect of the Florida A+ plan on curriculum and instruction in Title I
public elementary schools. Ed.D. dissertation. Florida International University,
Miami, Florida, United States. Retrieved November 20, 2007, from ProQuest
Digital Dissertations database. (Publication No. AAT 3217574).

Mabry, L. (1999). Portfolios plus: A crticial guide to alternate assessment. Thousand
Oaks, California: Corwin Press.

Mark, M. L. (1996). Contemporary music education (3rd ed.). Belmont, California:
Schirmer.

Mark, M.L., Gary, C.L. (1999). A History of American music education (2nd ed.).
Reston, Virginia: Music Educators National Conference.

Mathison, C. (1971). A bibliography of research on the evaluation of music teacher
education programs. Journal of Research in Music Education, 19 (1), 106-114.

McCoy, C. W. (1988). An exploratory study of grading criteria among select Ohio
ensemble directors. Contributions to Music Education, 15, 15-19.









McCoy, C. W. (1991). Grading students in performing groups: a comparison of
principals' recommendations with directors' practices. Journal of Research in
Music Education, 39 (3), 181-190.

McCreary, T. J. (2001). Methods and perceptions of assessment in secondary
instrumental music. Ph.D. dissertation. University of Hawai'i, Manoa, Hawaii,
United States. Retrieved February3, 2008, from ProQuest Digital Dissertations
database. (Publication No. AAT 3030187).

McMillan, J. (2003). Understanding and improving teachers' classroom assessment
decision making: Implications for theory and practice. Educational Measurement:
Issues and Practicies, 22 (4), 34-43.

Meaux, R. J. (2004). A descriptive analysis of twenty-six undergraduate music
education programs at Texas four-year colleges and universities accredited by
the National Association of Schools of Music. D.M.A. dissertation. University of
Houston, Texas, United States. Retrieved October 17, 2007, from ProQuest
Digital Dissertations database. (Publication No. AAT 3123919).

MENC: The National Association for Music Education. (1998). Grading practices in
music. Music Educators Journal, 84 (5), 37-40.

MENC: The National Association for Music Education. (2008). National standards for
music education. Retrieved February 6, 2008, from MENC: The National
Association for Music Education: http://www.menc.org/resources/view/national-
standards-for-m usi c-education.

Miller, J. A. (2007). Direct and indirect effects of selected factors on school grades in
public high schools in the state of Florida. Ed.D. dissertation. University of
Central Florida, Florida, United States. Retrieved November 9, 2007, from
ProQuest Digital Dissertations database. (Publication No. AAT 3256933).

National Assessment of Education Progress. (2008). The Nation's Report Card Arts.
Retrieved January 12, 2010, from The Nation's Report Card:
http://nationsreportcard.gov/arts_2008/

National Association of Schools of Music. (2009). Handbook 2009-10 (December 2009
Edition). Reston, Virginia: National Association of Schools of Music

National Commission on Excellence in Education. (1983). A Nation at Risk: The
imperative for educational reform. Retrieved November 9, 2009, from
http://www.ed.gov/pubs/NatAtRisk/risk. htm I


133









Norrington, D. M. (2006). Instrumental music instruction, assessment, and the block
schedule. M.M. dissertation. Southern Illinois University, Carbondale, Illinois,
United States. Retrieved November 6, 2007, from ProQuest Digital Dissertations
database. (Publication No. AAT 1437506).

Northeastern Clinton Central School District. (2004-2005). Senior band grading policy.
Retrieved November 28, 2009, from NCCS Instrumental Music:
http://www.nccscougar.org/nwarner/

Paswaters, R. W. (2006). A study of Florida public elementary school principals' job
satisfaction following the implementation of Florida's A+ system for grading
schools. Ed.D. dissertation. University of Central Florida, Orlando, Florida, United
States. Retrieved November 8, 2007, from ProQuest Digital Dissertations
database. (Publication No. AAT 3242461).

Pizer, R. A. (1990). Evaluation programs for school bands and orchestras. West Nyack,
New York: Parker Publishing Company.

Pontious, M. (2008). Comprehensive musicianship through performance: A paradigm
for restructuring. Retrieved from Wisconsin Department of Public Instruction:
http://dpi.wi.gov/cal/mucmppap.html

Reid, M. (2005). Music assessment collaboration model for secondary music teachers.
Ed.D. dissertation. University of California Los Angeles, California, United States.
Retrieved October 29, 2007, from ProQuest Digital Dissertations database.
(Publication No. AAT 3202761).

Reimer, B. (1989). A philosophy of music education (2nd ed.). Englewood Cliffs, New
Jersey, United States: Prentice-Hall, Inc.

Reimer, B. (2002). A philosophy of music education: Advancing the vision (3rd ed.).
Upper Saddle River, New Jersey: Prentice Hall.

Schopp, S. (2008). A study of the effects of national standards for music education,
number 3 (improvisation) and number 4 (composition) on high school band
instruction in New York state. Ph.D. dissertation. Columbia University, New York,
New York, United States. Retrieved November 17, 2009, from ProQuest Digital
Dissertations database. (Publication No. UMI 3225193).

Sears, M. (2002). Assessment in the instrumental music classroom: Middle school
methods and materials. M.M. dissertation. Dissertation. University of
Massachusetts Lowell, Massachusetts, United States. Retrieved November 22,
2007, from ProQuest Digital Dissertations database. (Publication No. AAT
1409387).









Sherman, C. P. (2006). A study of current strategies and practices in the assessment of
individuals in high school bands. Ed.D. dissertation. Columbia University, New
York, New York, United States. Retrieved November 14, 2007, from ProQuest
Digital Dissertations database. (Publication No. AAT 3237098).

Simanton, E. G. (2000). Assessment and grading practices among high school band
teachers in the United States: A descriptive study. Ph.D. dissertation. The
University of North Dakota, Grand Forks, North Dakota, United States. Retrieved
October 30, 2007, from ProQuest Digital Dissertations database. (Publication No.
AAT 9986536).

Stauffer, S. L. (1999). Beginning assessment in elementary general music. Music
Educators Journal, 86 (No. 2), 25-30.

Sudman, S. (1976). Applied sampling. New York: Academic Press.

Thorndike, R. (2005). Measurement and evaluation in psychology and education (7th
ed.). Upper Saddle River, New Jersey: Pearson Merrill Prentice Hall.

Todd Beamer High School. (2009). Todd Beamer Campus Music Website. Retrieved
November 28, 2010, from Todd Beamer High School:
http://schools.fwps.org/tbhs/music/band/

Tracy, L. H. (2002). Assessing individual students in the high school chorale ensemble:
Issues and practices. Ph.D. dissertation. Florida State University, Tallahassee,
Florida, United States. Retrieved November 19, 2007, from ProQuest Digital
Dissertations database. (Publication No. AAT 3065486).

Trice, A. (2000). A handbook of classroom assessment. New York, New York:
Longman.

U.S. Department of Education. (1996). Creating better student assessments. Retrieved
January 17, 2010, from Improving America's schools: A newsletter on issues in
school reform: http://www.ed.gov/pubs/IASA/newsletters/assess/pt1.html

U.S. Department of Education. (2002). P.L. 107-110, No Child Left Behind Act of 2001.
Retrieved May 25, 2010, from ED.gov:
http://www2.ed.gov/policy/elsec/esea02/107-110.pdf

Wilbur, J. P. (1955). Training of secondary school music teachers in western colleges
and universities. Journal of Research in Music Education, 3 (2), 131-135.

Wisconsin Lutheran High School Band. (2009). 2009-10 Grading policy. Retrieved
November 21, 2009, from Wisconsin Lutheran High School Band:
http://wiscoband.tripod.com/sitebuildercontent/sitebuilderfiles/2009-
10GradingPolicy.pdf


Zitek, J. S. (2008). An examination of Nebraska high school band directors'
implementation of and attitudes toward the national standards in music. Ph.D.
dissertation. University of Nebraska, Lincoln, Nebraska, United States. Retrieved
November 17, 2009, from ProQuest Digital Dissertations database. (Publication
No. UMI 3331177).


BIOGRAPHICAL SKETCH

John P. LaCognata was appointed Assistant Professor of Music and Director of
Bands at the University of North Carolina Wilmington in 2010. His responsibilities
include supervising the band program; conducting the Wind Symphony, Chamber
Winds, and Pep Band; and teaching Basic Conducting and Applied Trumpet. In
addition, he will conduct the New Horizons Concert Band for the UNCW Osher
Lifelong Learning Institute (OLLI).

Mr. LaCognata received his Bachelor of Science in Music Education from the
University of Illinois (1986), his Master of Music in Trumpet Performance from Auburn
University (1989), and his Ph.D. in Music Education with an emphasis in Wind
Conducting from the University of Florida (2010), where he was awarded a Doctoral
Teaching Fellowship. Prior to his appointment at UNCW, LaCognata held a variety of
teaching positions throughout his twenty-four-year career as a music educator. He
served on the faculties of Southeastern Oklahoma State University, Louisiana State
University, and Iowa State University; at the secondary level, he held positions at
Hillcrest High School (Country Club Hills, Illinois), Tavares High School (Tavares,
Florida), Cypress Creek High School (Orlando, Florida), and Winter Park High School
(Winter Park, Florida).

At Winter Park, the band program received recognition within the state of Florida
and throughout the country under his leadership. The Sound of the Wildcats Marching
Band made appearances at the 2005 AutoZone Liberty Bowl in Memphis, Tennessee;
the 2002 Blue Cross Blue Shield Fiesta Bowl National Band Championship in Phoenix,
Arizona; and the 2000 Sylvania Alamo Bowl in San Antonio, Texas. The Wind
Ensemble at Winter Park performed at the 2002 Bands of America National Concert
Band Festival in Indianapolis, Indiana, and was a featured ensemble at the "President's
Concert" at the 2007 Florida Music Educators' Association Conference in Tampa,
Florida. The highlight of Mr. LaCognata's tenure at Winter Park was the Wind
Ensemble's performance at the 60th anniversary of the Midwest Clinic in Chicago,
Illinois, in 2006.

Mr. LaCognata is an active adjudicator, clinician, and performer. He has served as
a guest conductor and clinician for bands and honor bands throughout the United
States. He is a former member of the Cathedral Brass and a freelance trumpet player.
His professional affiliations include the College Band Directors National Association,
the Music Educators National Conference, and the International Trumpet Guild.


138





PAGE 1

1 CURRENT STUDENT ASSESSMENT PRACTICES OF HIGH SCHOOL BAND DIRECTORS By JOHN P. LACOGNATA A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2010

PAGE 2

2 2010 John P. LaCognata

PAGE 3

3 This work is dedicated t o my father, John J. LaCognata, my role model mentor and best friend

PAGE 4

4 A CKNOWLEDGMENTS First, I would like to thank the band directors who took time out of their busy schedules to complete the survey and share their expertise. Without you this research would not have been possible. I would also like to thank Sue Rarus, Director of Information Resources and Public ation and her colleague s at the National Association for Music Education ( MENC ) Your assistance in administering the survey was greatly appreciated. I sincerely thank my graduate committee members Dr. Timothy Brophy, Dr. David Waybright, Dr. Russell Robinson, and Dr. David Therriault for their guidance and support throughout this entire process. It was a privilege to learn from you an honor to teach with you, and a pleasure to work for you I want to thank my colleagues in the Band Department at the University of Florida. Y our camaraderie and friendship has made the last three years memorable and enjoyable. I also would like to thank t he wonderful people in the College of Fine Arts and the faculty and students in the Music Department. GO GATORS I express thanks to my present and former band colleagues across the country. The importance of your work with and for students cannot be measured. I thank those people that have had a positive impact on my musical life. Your willingness to share your time and talents with me will continue to guide me. I would also like to thank Dan Massoth and Ron Raup at MakeMusic Inc. for taking an interest in my research. Finally, I thank my family and friends for their constant encouragement, support, and love. It is through you that I find purpose for my life. Lastly, I am most grateful to my wife Leigh, a nd my children John and Alexa. I am blessed to have you Everything I do, I do for you.

PAGE 5

5 TABLE OF CONTENTS page ACKNOWLEDGMENTS .............................................................................................................. 4 LIST OF TABLES ......................................................................................................................... 8 LIST OF FIGURES ....................................................................................................................... 9 ABSTRACT ................................................................................................................................. 10 CHAPTER 1 INTRODUCTION................................................................................................................. 12 Examples .............................................................................................................................. 12 Wisconsin ...................................................................................................................... 12 California ....................................................................................................................... 14 New York ....................................................................................................................... 14 Washington ................................................................................................................... 15 Texas ............................................................................................................................. 16 Observations ........................................................................................................................ 17 Significance of the Problem ............................................................................................... 19 Purpose of the Study .......................................................................................................... 23 Delimitations ........................................................................................................................ 24 2 LITERATURE REVI EW ...................................................................................................... 26 Definition of Assessment ................................................................................................... 26 Philosophical Rationales of Assessment ........................................................................ 27 Rationalism ................................................................................................................... 27 Empiricism ..................................................................................................................... 28 Pragmatism ................................................................................................................... 29 Assessment History ............................................................................................................ 30 Assessment in Education .................................................................................................. 33 Assessment in Music Education ....................................................................................... 35 Student Assessment in Music Education ........................................................................ 38 Assigning Grades to Band Students ................................................................................ 44 Developing Music Assessment Models ........................................................................... 
45 Arts PROPEL ................................................................................................................ 46 Comprehensive Musicianship through Performance (CMP) ................................ 47 State Collaborative Assessment and Student Standards (SCASS) .................... 48 Current Research ................................................................................................................ 49 Summary of Resea rch: Study Implications ..................................................................... 54 3 METHODOLOGY AND PROCEDURES ......................................................................... 56

PAGE 6

6 Research Method ................................................................................................................ 56 Subjects ................................................................................................................................ 56 Procedures ........................................................................................................................... 57 Data Collection .................................................................................................................... 57 Statistical Procedures ......................................................................................................... 59 Pilot Study ............................................................................................................................ 60 4 RESULTS ............................................................................................................................. 64 Background Information ..................................................................................................... 64 Grading Information ............................................................................................................ 67 Assessment P hilosophy ..................................................................................................... 68 Assessment Information .................................................................................................... 70 Assessment Model .............................................................................................................. 72 5 DISCUSSION AN D CONCLUSIONS .............................................................................. 84 Discussion of the Results .................................................................................................. 84 Backgr ound Information.............................................................................................. 84 Grading Information ..................................................................................................... 85 Assessment Philosophy.............................................................................................. 85 Purpose .................................................................................................................. 85 Criteria .................................................................................................................... 87 Category ................................................................................................................. 90 Influence ................................................................................................................. 90 Preparation ............................................................................................................ 91 Assessment Information ............................................................................................. 92 Components .......................................................................................................... 92 Data collection ....................................................................................................... 93 Characteristics ....................................................................................................... 94 Reflection ............................................................................................................... 
95 Assessment Model ...................................................................................................... 96 Conclusions .......................................................................................................................... 96 Research Question 1: In What Specific Ways are Current High School Band Directors Assessing Students in Their Ensemble Classes? ............................. 96 Research Question 2: What are High School Band Directors Attitudes toward the Assessment Process? ......................................................................... 97 Research Question 3: How Can the Results of this R esearch Contribute to the Development of a Student Assessment Model for Bands? ........................ 97 Rehearsal attendance and contribution ............................................................ 98 Performance attendance and contribution........................................................ 98 Individual testing and evaluation ........................................................................ 98 Implications for Music Education ............................................................................... 99 Purpose .................................................................................................................. 99 Criteria .................................................................................................................... 99 Preparation .......................................................................................................... 101 Data Collection .................................................................................................... 102

PAGE 7

7 Suggested Model ................................................................................................ 103 Future Research ........................................................................................................ 104 APPENDIX A QUESTIONNAIRE ............................................................................................................ 112 B PILOT STUDY RESULTS ................................................................................................ 121 Demographic Information ................................................................................................ 121 Grading Information .......................................................................................................... 123 Assessment Information .................................................................................................. 124 LIST OF REFERENCES ......................................................................................................... 129 BIOGRAPHICAL SKETCH ..................................................................................................... 137

PAGE 8

8 LIST OF TABLES Table page 1 1 Summary of example assessment components and percentage assignment. .... 17 4 1 Importance of purposes of assessment ...................................................................... 79 4 2 Criteria importance in the evaluation of band students ............................................ 80 4 3 Importance of assessment categories ........................................................................ 80 4 4 Factors influencing assessment methods .................................................................. 80 4 5 Assessment preparation ............................................................................................... 81 4 6 Assessment components used with the assigned percentage assigned .............. 81 4 7 Data collection procedures ........................................................................................... 82 4 8 Materials used in performance based tests ............................................................... 82 4 9 Importance of characteristics of assessment models ............................................... 82 4 10 Agreement level with statements concerning assessment ...................................... 83 5 1 Assessment purposes including category ................................................................ 105 5 2 Assessment criteria including categories and national standard .......................... 106 5 3 Factors influencing assessment methods including categories ............................ 106 5 4 Preparation methods including category .................................................................. 107 5 5 Assessment components usage including category ............................................... 107 5 6 Data collection procedures including category ........................................................ 108 5 7 Weighted component results compared to pilot study results ............................... 108 5 8 Student a ssessment model Stage o ne ................................................................... 108 5 9 Student a ssessment model Stage t wo ................................................................... 109

PAGE 9

9 LIST OF FIGURES Figure page 4 1 School type ...................................................................................................................... 73 4 2 School enrollment ........................................................................................................... 73 4 3 Co mmunity type of school ............................................................................................ 74 4 4 Socio economic stat us of school community ............................................................. 74 4 5 Student enrollment in band program ........................................................................... 75 4 6 Student enrollm ent in concert band(s) ........................................................................ 75 4 7 Co ncert bands per school ............................................................................................. 76 4 8 Average number of stu dents per concert band ......................................................... 76 4 9 Directors years of teaching experience ..................................................................... 77 4 10 Directors years teaching at current s chool ................................................................ 77 4 11 Dire ctors education level .............................................................................................. 78 4 12 Grade types assigned .................................................................................................... 78 4 13 Create a balanced assessment tool ............................................................................ 79 5 1 Current student assessment practices ..................................................................... 110 5 2 Student a ssessment m odel ........................................................................................ 111

PAGE 10

10 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy CURRENT STUDENT ASSESSMENT PRACTICES OF HIGH SCHOOL BAND DIRECTORS B y John P. LaCognata August 2010 Chair: Timothy S. Brophy Major: Music Education Measurement and assessment are becoming increasingly important to all music educators. The purpose of this study was to investigate the following questions: 1) i n what spec ific ways are current high school band directors assessing stude nts in their ensemble classes; 2) w hat are high school band directors attitudes toward the assessment process; and 3) h ow can the results of this research contribute to the development of a s tu dent assessment model for bands ? The subjects for this study were 454 high school band directors from across the United States R esults show that the main purpose of student assessment for high school band directors centered on providing their students and themselves with feedback concerning the instructional process in the classroom. Directors reported that performance skills were the m ost important criteria to assess in their students and the main influences of the assessment methods they use are their personal philosophy of assessment and available class time. Directors reported the best source of preparation for assessing their stude nts came from their colleagues and that they are interested in finding new ways to assess their students. Directors suggest that an effective assessment model for b and would be weighted: rehear sal attendance and contribution 34. 95% ;

PAGE 11

11 performance attendance and contrib ution 3 4 .7 0 % ; and individual t esting and evaluation 30.57% .

PAGE 12

12 CHAPTER 1 INTRODUCTION While many high school band directors do a very good job of teaching and preparing ensembles for performanc es, very few have developed and implement ed an effecti ve assessment tool to use when assigning grades for their students. Having a way to assign student grades with proper rationale and supporting data is essential in this age of school accountability and gradepoint minded students and parents. The days of abst ract grading systems or teacher bias grade assignment are gone. Instrumental music educators can no longer simply grade their students on attendance and perceived effort or interest ( Abeles, 1995; Pizer, 1990) A Google search of the phrase high school band grading policy results in approximately 460,000 hits. Consider the following five examples regarding current grading policies listed in high school band handbooks found on various band web pages. Attention should focus on the selection of sp ecific assessment components, the wording or explanation of that component, and the percent the component is assigned in the overall grading plan. Examples Wisconsin Attendance: 30% Absences and tardiness disrupt the learning environment. Students need re gular day to day attendance and must be punctual to maintain a sense of continuity in their program. Even one absence can affect the success and educational outcome for the individual and the entire class on that day. Music rehearsals are particularly diff icult to make up since the process is so experiential. It is impossible to re create what the other students experienced the preceding day. Attendance at all performances is critical for every student in the ensemble. Every student in the band has a specif ic role in the group and an absence is

PAGE 13

13 very noticeable. If you will miss a performance or rehearsal (outside of the school day), please complete a Notice of Planned Absence form. If a students absence is because of involvement in another school activity, it is your responsibility to notify the director of the activity via the Notice of Planned Absence form. If an emergency situation arises, please notify the director of the situation as soon as is feasible. For each unexcused absence from a band performanc e, the final semester grade will be reduced by one letter grade. Assessments: 30% The number of assessments will vary each quarter. Each assessment will receive points for the quality of work. Assessments will be given a due date and it is the students responsibility to see that the assessment is completed on time. Work turned in late will be reduced by one letter grade for each day that it is late. It is the students responsibility to make up any missed assessments. Audio recordings may be submitted for playing tests. This can be done by cassette tape, CD, DVD, mp3, wav or any other electronic format that is compatible with the high school's technology capabilities. Difficulties with technology do not relieve you of the responsibility to turn in assessmen ts by the due date. Typical assessments might include, but are not limited to, any of the following: playing tests (scales, excerpts from literature, etc.) chair auditions, quizzes and/or other written assignments. Participation by Effort, Attitude and Pre paration: 30% A positive attitude is the underlying ingredient necessary to the success of each ensemble and in turn each member of that ensemble. The student is expected to leave ones ego at the door and become a team player approaching all new music and ideas with an open mind, seeing each as an opportunity to learn. We must work together and encourage one another to achieve success. Full and positive participation in every rehearsal and performance is expected of every band student. Lessons/Sectionals: 10% Many things are accomplished in individual or small group settings that cannot be accomplished in full ensemble rehearsals. Therefore, attendance is required at all scheduled lessons and sectionals (Wisconsin L utheran High School Band, 2009)

PAGE 14

14 California Participation: 25% The participation grade includes being prepared for class, having your instrument, music, and pencil. Students are expected to actively participate in class. Activities which don't allow students to participate daily will ca use a drop in the participation grade. If students are not allowed to participate due to behavior, this will negatively affect their participation grade for the day. Students who have regularly non working instruments, and delay in getting repairs, will have their participation grade lowered for the days involved. Performances: 25% Students are expected to participate in scheduled performances, during or outside of school hours. Very few reasons will be accepted for failing to participate in a scheduled concert. As our concerts are scheduled WELL in advance, ample planning is easy. If students are not in the required uniform for a performance, they will not be allowed to participate, and will receive a zero grade for that performance. Playing and Writte n Tests: 50% Students will be tested during band class on a regular basis. These grades will be averaged and will account for half of the band grade. Tests will be weighted evenly, whether announced or not announced ahead of time. Missed tests must be made up, just as in any other class (Heritage School Band, 2008) New York Rehearsals: 34% Preparedness have all necessary equipment needed for class ( i.e., Instrument, music, pencil, marching band lyre/music) and be able to play to the best of your ab ility. Behavior/Attitude pay attention, listen to directions, and do not disrupt class or rehearsals by talking or passing notes, etc. with others in the band. Be On Time Lessons: 34% Attendance The number of lessons attended will directly affect students grade. Each lesson is important to attend when scheduled as students may be assigned group assignments to be worked on together for an evaluation each quarter. Preparedness practice your assignments. I will be checking band lockers to see if instrum ents are going home for practice on

PAGE 15

15 weekends. Practice in school during study hall counts!! (6% of total grade) Responsibility If you have to miss a lesson ( i.e., Test/quiz in class, etc.) let me know in advance and schedule a make up lesson time. Make u p lessons can receive a maximum score of 8 pts out of the possible 10 pts for regularly scheduled lessons, unless you inform me of the missed lesson in advance. Events Attendance: 12% Attend performances the band plays in! Every member is important to the success of the performance. Wear proper attire. Written/Playing Assignments: 12% Complete them and hand them in on time. Your grade will be directly affected be whether or not you do these assignments. Final Project: 10% (Northeastern Clinton Central School District, 20042005) Washington Individual Practice/Practice Sheets: 20% In order for students to improve their playing skills they must practice on their own. Without individual practice a student will simply not improve, nor will they buildup the physical and mental stamina needed to make it through a concert. Students will receive Practice Sheets roughly every two weeks. Students log their practice time (which parent(s) verify with their initials), turn the sheets in to the band office by the du e date. Practice time will be posted in the grade book and calculated as follows: (A) Exceeding Standard = 3 hours per week (1/2 hr. daily) (B) Meeting Standard = 2 hours per week (C) Approaching Standard = 1 hour per week (F) Below Standard = 0 Practic e Time Performance Assessment: 20% Performance Assessments (playing tests) will be done on assigned material covered in class. Most playing tests involve a student being recorded in a practice room, alone, playing through the assigned music. The teache r will listen to and evaluate each student performance, and provide feedback regarding their skills. In the Concert Band, Jazz Ensemble, and Intermediate Band, some performance assessments may take place in the classroom during rehearsal. Grades for perf ormance assessments will be calculated and posted using the scale above (95=A, 85=B, etc.).

PAGE 16

16 Written Assessments: 10% Written assessments will be given on material covered in class, such as notation, rhythmic dictation, and music theory, terms/definitions, etc. Written assessments will be graded and posted using the same scale as above. Daily Activity Assessment: 50% Activity assessment will indicate a students understanding and application of proper rehearsal etiquette during daily rehearsals and related band activities (concerts, assemblies, etc.). Proper rehearsal etiquette, simply put, means proper rehearsal manners. This involves teamwork, consideration, respect, LISTENING, etc, and is vital to a positive learning environment in band. Activity Ass essment will be calculated and posted using the same scale as above. (Todd Beamer High School, 2009) Texas Participation: 25% The student will receive a grade for each before and after school sectional and rehearsal during a grading period. The student will be on task on focused during all rehearsals. The student will have instrument, music, pencil, and supplies. The student will mark music and take notes as needed. An unexcused absence from a before or after school rehearsal or sectional will lower a students participation average of the six weeks by 20 points. Tardies lower a students participation average by 5 points. Skills: 25% The student will be expected to improve individual music skills. The students individual skill development will be ev aluated through taped music tests, individual playing tests, scale tests, and written tests. The student will be evaluated on improvement of ensemble skills during daily rehearsals. Fundamentals: 25% The student will be expected to improve performance fundamentals. The student will be evaluated for improvement of music fundamentals through daily observation during the basics part of each rehearsal and during sectionals. The student will be expected to demonstrate correct posture, hand position, embo uchure, air production, articulation and attentiveness as monitored during rehearsals. The student will be expected to develop a historical knowledge of the literature relative to his/her respective instrument.

PAGE 17

17 Performance: 25% The student will receive a grade for each performance during a grading period. Performances will be counted as major exams. The number of performances will be determined by the performance calendar. If no public performance occurs during a grading period, the performance grade wi ll be based upon informal classroom performances determined by the director (Lake Highlands Area Band Club, 2008) Table 1 1. Summary of ex ample assessment components and percentage assignment. Example number Assessment component Assigned percentage 1 At tendance Assessments Participation Lessons/s ectionals 30 30 30 10 2 Participation Performances Playing/written t ests 25 25 50 3 Rehearsals Lessons Event a ttendance Written/playing a ssignments 34 34 12 10 4 Individual practice/practice s heets Performance a ssessments Written a ssessments Daily a ctivit y a ssessment 20 20 10 50 5 Participation Skills Fundamentals Performance 25 25 25 25 Observations The first observation concerning these excerpts involves the vari ety of components used in these grading policies. C omponents incor porated in the excerpts included attendance, events attendance, assessments, playing tests/assignments, written tests/assignments, participation, rehearsals, daily activity assessment, lessons, sectionals, performances, and indiv idual practice. We assume each director has selected individual assessment components in an effort to reinforce important aspects of their ensemble class and/or enforce policies they feel are essential to the efficient

PAGE 18

18 operation of their band program. Wh ile there appears to be certain commonalities in the design or purpose of the components selected by individual directors, there is not an agreement of what specific components should be incorporated in their grading policies. One basic characteristic of the individual assessments components does exist. Each of these assessment components can be divided into two distinct categories: musical, and non musical Musical components mentioned in the excerpts include playing and written tests/assignments, perf ormances, lessons, and sectionals. Examples of non musical components are participation, attendance, effort, and attitude. It is evident that each director has combined assessment components from each of these categories in their grading policies. The se cond observation concern ing the se excerpts involves the wording or explanation associated with each individual assessment component. There is no evidence from these excerpts of any standard definitions for the components, nor does there seem to be a commonly accepted way to incorporate these components into an overall assessment tool. Each director appears to be isolated when defining how and why individual components are being included in their grading policies. Directors are required to create their ow n explanation and rationale associated with each component they select. The final observation of these excerpts is the variety of emphasis the directors place on the different assessment components. Some directors place nearly equal weights to each assess ment component ( i.e., 30%, 30%, 30%, 10%), while others place much more weight on one component versus its counterparts ( i.e., 50%, 25%, 25%). There is great variety in the importance each individual assessment component is

PAGE 19

19 assigned in the included gradin g policies. Further, it becomes the job of each director to create a justification for assigning emphasis to the various components used. There is no evidence of a generally accepted model or plan for weighing the various assessment components in an asse ssment tool. In the process of grading their students, band directors must 1) select appropriate assessment components, 2) define those components to ensure they are accurately measuring what is intended, and 3) combine and weigh the various components to produce an effective assessment tool. Each of these decisions is inter related and affects the validity of their assessment tool. There is little evidence to suggest there is a widely accepted assessment model that secondary band directors can refer to wh en designing their grading policies. Significance of the Problem Assessment has become a n important and visible part of todays educational environment In many states, schools are assessed and assigned a letter grade based on how they score in certain ar eas (Miller, 2007). These grades are made public, reported through the media, and listed on department of education websites. Schools receiving high marks are praised and often receive additional funding from state and federal agencies Schools receivin g failing marks are placed on probation. Often a sense of crisis can be felt throughout that learning community. Administrators are often fired and teachers questioned in an effort to bring that grade up the next year (Paswaters, 2006). These school grades have changed the way teachers and administrators approach curriculum, class assignment and scheduling ( Eyerman, 2002; Lopez, 2006). The concept of school grades h as also changed the way students and parents view grading and assessment in each of their classes (Lehman, 1997).

PAGE 20

20 A problem for many band directors is the lack of training concerning grading and student assessment Grading systems are rarely i f ever discussed in an undergraduate music education curriculum and most discussion s on assessm ent only apply to general education courses, which have little or no application in a performance based music ensemble class. There is little guidance for music educators on how to properly design an assessment tool. Further, t here are no commonly accept ed assessment models that educators can copy and adapt to their specific situation (Tracy, 2002). An early study (MENC, 1953) divided the undergraduate music education curriculum into four categories: general culture, basic music, musical performance, and professional education. Since that study, there have been numerous articles and dissertations concern ed with the subject of educating music educators (Mathison, 1971; Wilbur, 1955). An examination of the current curriculum of undergraduate music educati on majors includes the following components: music theory, ear training, music history, instrument techniques courses, piano, conducting, applied music study, general education courses, general studies, ensembles, and a student teaching internship (Meaux, 2004). T he National Association of Schools of Music (NASM) emphasizes the importance of assessment training in the undergraduate music education curriculum. As stated in their handbook under the sub heading of Teaching Competencies the following statem ent addresses this topic. An understanding of evaluative techniques and ability to apply them in assessing both the musical progress of students and the objective and procedures of the curriculum (National Association of Schools of Music, 2009, p. 100)

PAGE 21

21 We continue to observe developments in the music education curriculum as collegiate teachers make an effort to address the needs of futu re music educators However there still exists the need to examine and refi ne what is offered to these students in an ef fort to set them up to be success ful teachers. One area currently overlooked in preparing these music educators is the subject of student grading and assessment (Reid, 2005). Another factor that has placed a great deal of attention on classroom assessment is college entrance requirements. It is becoming more competitive for high school students to gain acceptance into many colleges and universities (Chapman, 2004). Students begin concerning themselves with entrance exam sco res and their high school gradepoint average (GPA ) before they ever arrive at high school. Th is results in a great deal of attention given to each grade placed on their transcript. Teachers are held accountable for every mark they assign students with the understanding of how it may affect their future college plans (Cope, 1996). The problem of proper assessment and grading is further compounded by the pressures and time constraints placed on high school band programs and the perception by directors that the process of assessment is time consuming and tedious (Lehman, 1997). Most high school band directors begin their school year with a marching performance every Friday night, and rehearsals during and after school many nights of the week. Many directors would argue that they barely have enough time to prepare their ensembles for these public performances and do not have the luxury of dealing with assessment in any detail. Bennett Reimer further discusses this view in his book A Philosophy of Music Education:

PAGE 22

22 Performance directors are driven to perform fine concerts; that is how their success is judged. This is further intensified in the community of school music teachers, whose values, shaped by the surrounding f orces center strongly on producing the best possible players, singers, and groups. The emphasis in the music program is almost completely on performance, and that emphasis over the years has garnered strong support from both parents and school administrat ors (Reimer, 1989). In addition, many high school band programs enroll over a hundred students and the idea of tracking and accounting for each student in terms of assessment becomes daunting to most directors (Chiodo, 2001). There is a growing need for the development of an effective student assessment model for high school bands. An effective model should be easy to implement and should address the important musical, educational, and organizational issues common to most band programs. An effective mod el should also motivate student s to develop on their instrument s and have a positive experience in their high school music careers. There currently is not a widely accepted model for assessment in the high school instrumental ensemble class. While ther e are generally accepted models for warm ups, ensemble tuning, and even literature selection, nothing in the way of assessment models is typically discussed or practiced. Each band director is i ndividually responsible for development and implementation of an assessment tool for use when grading his or her students. While each assessment tool needs to have individual flexibility in relation to the specific situation (school, director, etc.), I believe certain assessment components and methods of implementation apply to all band programs. Despite the many differences among band program s a great many fundamental elements are common to all. These elements become the framework for the assessment tool implemented by those directors.

PAGE 23

23 With little research done in the area of student assessment, the question becomes how do we find or develop an accurate assessment model that can be used by secondary band directors when grading their students? Theories and assumptions can only go so far when we are dealing with this very real topic. In addition to the study of assessment mod el building, I believe a great deal can be learned from studying the individual assessment systems that current band directors have developed for their band programs. In the end, these are the experts in operating instrumental classrooms in our schools. These are the people running our high school band programs across the country. This is the one group that should understand the issues and challenges in assessing students and assigning grades to students participating in high school band programs. Purpose of the Study The purpose of this study is to investigate current student assessment practices of high school band directors in the United States The study will also investigate the dire ctors attitudes toward the student assessment process. The end goal of the study is to develop a valid and practical student assessment model that can be used by high school band directors. The development of this model will be guided by new research results in the form of a national survey. The following research questions guide d this study: 1. In what specific ways are current high school band directors assessing stude nts in their ensemble c lasses? 2. What are high school band directors attitudes toward the assessment process ? 3. How can the results of this research con tribute to the development of a student assessment model for bands?

PAGE 24

24 Delimitations 1. While all levels of music education would benefit from an effective student assessment model, this study specifically deal t with the high school (9th to 12th grade) level. M any similarities exist from level to level (especially middle school and high scho ol band ). H owever, the pressure placed on performance ensembles at the high school level from the school and surrounding community dictate many aspects of their program mak ing it unique. 2. Whi le all ensemble disciplines in a high school music program (band, chorus, and orchestra) share many of the same challenges in terms of student grading and assessment, many factors are unique to each discipline. In this study the scope of the investigation was limited to classes involving band performance based ensemb le s. 3. S a mple size for this study was limited to approximately 5,000 subjects and only include d responses from high school band programs in the United States whose directors were members of The National Association for Music Education ( MENC ) This was necessary because the largest available national database for high school band direc tors is maintained by MENC Further, the largest random sample the researcher could acquire from MENC was 5,000 subjects. The sample should be sufficient to generalize re sults of the questionnaire to all U.S. high school band programs whose directors are members of MENC. De finition of Terms ASSESSMENT: an observation of what a student knows and is able to do. Assessment is the process of collecting, describing, and analyz ing information about student performance or program effectiveness in order to make educational decisions. ALTERNATIVE ASSESSMEN T: a ny assessment technique other than traditional norm referenced or criterion referenced pencil and paper tests, that uses st rategies for collecting and analyzing information. A UTHENTIC A SSESSMENT: a ssessment techniques that gather information about students ability to perform tasks found in real world situations CRITERION-REFERENCED TEST: a measurement of achievement of specific criteria or skills in terms of absolute levels of mastery. The focus is on performance of an individual as measured against a standard or criteria, rather than against the performance of others who take the same test as with norm referenced tests. EVALUATION: t he collection and use of information (assessments) to make informed educational decisions. FORMATIVE ASSESSMEN T: an ongoing assessment made in an educational program for the purpose of improving the program as it progresses.

PAGE 25

25 MEASUREMENT: t he use of a systematic methodology to observe behaviors in order to represent the degree of performance ability, task completion, and concept attainment. PERFORMANCE ASSESSME NT: a n evaluation in which students are asked to engage in a complex task, often involving the creation of a product. Student performance is rated based on the process the student engages in and/or based on the product of his/her task. Many performance assessments emulate actual workplace activities or r eal life skill a pplications that require higher order processing skills. Performance assessments can be individual or group oriented. PERFORMANCE TASK: a demonstration in which a student is able to show his or her ability to use learned material in real world situations. PORTFOLIO: a file of student work centered in a particular topic or content area. PORTFOLIO ASSESSMENT: a n analysis of a collection of student work to show student achievement and attainment of standards in a specific content area. Student progress is decided by reviewing the collected works in reference to previously conceived criteria. RUBRIC: a set of guidelines for giving scores. A typical rubric states all the dimensions being assessed, contains a scale, and helps the rater place the given work properly on the scale. SELF-ASSESSMENT: analysis of ones own achievement or abilities. STUDENT ASSESSMENT: t he judgment of students capabilities in a subject, formed from information collected from performance tasks directly related to well defined, educationally sound performance criteria. SUMMATIVE ASSESSMENT: a n assessment, administered at the conclusion of an education program, used to determine the overall effectiveness of that program.

PAGE 26

26 CHAPTER 2 LITERATURE REVIEW This literature review presents a description and synthesis in the areas identified as central to the present study: definition of assessment, assessment history, assessment in education, assessment in music education, student assessment in music, assigning grades to band stude nts, and the development of an assessment model. The review will conclude with a discussion of recent research in the area of student assessment in the music classroom as they are related to the purposes of the present study. Definition of Assessment Asse ssment can be defined as the process of documenting knowledge, skills, attitudes, and beliefs. Assessment can focus on the individual learner, the institution, the learning community, or the educational system as a whole (Council of Chief State School O fficers, 2009) Whil e this definition appears adequate, our profession lacks agreement on the terminology used in assessment research. Other terms commonly associated with assessment include evaluation, measurement, and testing. These terms are often us ed interchangeably causing much confusion regarding the name s and labels associated with current assessment techniques and principles (Bessom, 1980) Hart (1994) attempts to clarify these discrepancies by providing separate definitions for each specific t erm. He defines assessment as gathering information about what students know and can do. Teachers collect this information in a variety of ways including observing students, examining their work, and testing their knowledge and skills. He defines evaluat ion as a process that involves interpreting the information

PAGE 27

27 gathered from assessment as a means of determining whether the students have learned what the teacher wanted them to learn. A test is defined as a type of assessment instrument or tool use d to d etermine whether a student has achieved the main goals of instruction. An additional term often associated with assessment is grading. Grading can be defined as the process of reporting and recording student progress (Council of Chief State School Officer s, 2009) Hoffer (2000) said the teacher ultimately needs to establish clear cut criteria for grading, consistent with the overall evalu ation procedures of the school The teachers role relating to grades can be complex. The common goal of this proce ss should be to make the assigned grades as fair, consistent, and objective as possible (Brookhart, 1993) Philosophical Rationales of Assessment Contemporary philosophical views on assessment are based on the earlier schools of thought on the subjects of learning and education. This literature review will begin with an examination of three basic philosophical schools and their views on learning an d assessment. Rationalism Rationalism (often referred to as idealism) maintains that a persons consciousness of what is perceived is an integral part of reality. The central thesis of rationalism is that knowledge is a fixed body of truth that applies in all times and places. It began with Socrates (470?300 B.C.) and Plato (427347 B.C.) in ancient Greece, an d proponents include Rene Descartes (15961650), Immanuel Kant (17241804), Georg Hegel (17701831), and a number of English and American philosophers.

PAGE 28

28 Probably the greatest strength of rationalism is its conscious intellectual approach to re ality, the way reality is known, and the values that should be held. Another strength of rationalism is its stability. It provides conclusions that are not going to be buffeted about by each novel breeze or whim. What is true is true, always was true, and always will be true Rationalists have a rather great interest in evaluating students learning. They see evaluation as an important part of education. Traditionally the rationalists, especially Socrates, followed the dialogue procedure in which a teacher and student probed and searched together to uncover truth. Over the ages the emphasis changed more to students learning what was believed to be valuable and lasting. Student learning is evaluated not just on factual knowledge or skill development; but rather fr om more subjective, more probing, and comprehensive evaluations of the students work (Abeles, 1995). Empiricism The roots of Empiricism (often called realism) reach back to Aristotle (384322 B.C.). The heart of realism is the acceptance of what is clear to everyone Th ings are what they appear to be; not representations of some greater but invisible reality. Empiricists believed the road to truth is through observation and scientific evidence. Some of the important names associated with Empiricism include Baruch Spinoza (16321677), John Locke (16321704), and the American philosopher psychologist William James (18421910). The main strength of empiricism lies in its practical quality. Empiricists take whatever information they have and work with it as best they can, even though they

PAGE 29

29 realize their knowledge is not perfect or complete. In short, this philosophical position deals with reality as it can best be known. Like rationalists, empiricists are interested in evaluating the results o f instruction. However, they are more interested in the acqui sition of s pecific information and skills ( the ones deemed necessary to function in society and an area of work ) Empiricists see teachers as central in the educational process. Teachers largely decide what will be taught and how it will be taught. If t hey are not the only source of information, teachers tell students where to locate it (Abeles, 1995). Pragmatism The roots of pragmatism go back to Heraclitus (sixth to fifth centuries B.C.) and the Sophists in ancient Greece. Heraclitus emphasized the idea that all things change; nothing is permanent. The logic of pragmatism is the scientific method. People associated with pragmatism in clude Francis Bacon (15611626); Auguste Comte (17981857) ; and American philosophers Charles Sanders Peirce (18391914), and John Dewey (18591952). The strength of pragmatism lies in its attention to the process of uncovering the truth. It does not depend on what one thinks is natural, or on mental cognition, or on perception of the world. Instead it proposes the scientific method as the best means for determining reality. Logically, pragmatists are more interested in evaluation than are the holders of other philosophical positions, since consideration of the results is a part of the scientific process. The evaluation, however, is not concerned solely with what content has been learned, but concentrates on the methods of learning us ed by the teacher. Pragmatists see teachers as agents who impart to the young the techniques for living and acquiring

PAGE 30

30 knowledge. Teachers also instruct students how to meet the new situations that will inevitably arise; in a sense, the students are educated for change (Abeles, 1995). Abeles, Hoffer, and Klotman (1995) suggest three reasons why all music educators should think about philosop hical matters as they relate to teaching (research). One reason for doing this is that all teachers make decision s as part of their work, and most of these decisions have philosophical implications. A second reason for consi dering such matters is that basic understandings and beliefs provide, or at least should provide, a sense of direction and perspective. A third reason for thinking about philosophical topics is that teachers should be consistent in the different actions they take A basic understandi ng of these three philosophical viewpoints (rationalism, empiricism, and pragmatism) pr ovides some background for making research decisions concerning assessment Examining the differences and similarities in how each viewpoint approach es education, evalu ation, and assessment can provide strength to future direction s in these fields. Assessment History While some view assessment as an outgrowth of educational reform, its history can be traced back more than 4,000 years. As early as 2,000 BC, there is e vidence that civil service testing in China was established in an effort to select employees based on merit rather than preference. In 200 BC, Socrates developed and used conversational methods of assessment to test his students ability to describe and r hetorically defend their stated views and opinions. In addition, early Olympic Games included the evaluation of poets and musicians as well as athletes (Cole, 1995)

More recently, when America entered World War I in 1917, tests were needed to determine who was fit for officers' training school at a time when the U.S. Army was drafting large numbers of soldiers. The Army Alpha test, the first widely distributed multiple-choice test, determined intelligence by measuring verbal ability. Upon discovering that many of the military recruits were functionally illiterate, the Army produced a second test. The Army Beta test used mazes and puzzles to measure intelligence and required no specific language skills (Thorndike, 2005).

Educational testing appeared in the United States during the 18th century in the form of oral exams given by university faculty to determine the quality of their students' academic performance. Edward Thorndike's 1904 publication, An Introduction to the Theory of Mental and Social Measurements, became the foundation for much of the testing effort in the early 1900s and earned him recognition as the father of educational measurement (Mabry, 1999).

The beginning of standardized testing can also be traced to the work of Alfred Binet, who developed the use of intelligence testing in Paris, France in the early 20th century. City and educational leaders asked Binet to develop a test to help determine which students would be more apt to succeed, and which might be more apt to fail. Binet's work led to the creation of the first intelligence tests, and the model for the intelligence quotient known as IQ. The resulting success of the 1913 Stanford-Binet test led to the creation of many new achievement and aptitude tests from the 1920s through the 1950s. Revised forms of some early 20th-century tests are still used today, such as the Iowa Test of Basic Skills, the Stanford 9, and the Scholastic Assessment Test (Trice, 2000).

In the 1830s, Horace Mann devised the first standardized written exams in Massachusetts and Connecticut. His 1846 Boston Survey was the first large-scale test printed for use in assessing student achievement in the areas of grammar, geography, history, philosophy, geometry, astronomy, writing, and math.

By the middle of the twentieth century, educational and psychological testing became a lucrative business. Many new standardized tests were published, including the American College Test and the General Aptitude Test Battery. In 1947, Henry Chauncey founded the Educational Testing Service, which continues to provide tests and other services to the education community (Kancianic, 2006).

America's 1941 entry into World War II required the creation of many new batteries of tests. Louis Leon Thurstone's refinement of factor analysis procedures enabled tests to categorize individuals across several dimensions. The success of the factor analysis method resulted in fewer military dropouts and the creation of several taxonomies of human behavior. In particular, psychologist Benjamin Bloom's (1971) Taxonomy of Educational Objectives dominated assessment and educational psychology textbook chapters (McMillan, 2003).

The current state of assessment has been greatly influenced by technology. With the widespread use of computers beginning in the 1960s, data could be gathered, stored, and analyzed with greater efficiency and less cost. Current technology makes it possible to assess large populations in various locations and have statistical results of data instantaneously. This type of technology can also be programmed to guide and adapt an assessment to address the specific responses of an individual, allowing countless options in assessment formats.

Assessment tests are currently used in a number of settings. In education, aptitude and achievement tests are used for a variety of purposes. Career assessment tests are used for job placement, and employment screening tests are used by companies to help determine the skills and knowledge of future employees. In addition, there are personality-type assessments, and assessment tests used by government agencies to determine need for admittance into specialized programs.

Assessment in Education

According to John Dewey (1916), the role of assessment should be to interact with instruction to help the child realize full growth through successive habit formations. Active habits involve thought, invention, and initiative in applying capacities to new aims. Dewey also said (1910) that concepts enable us to generalize, extend, and carry over our understanding from one thing to another. It would be impossible to overestimate the educational importance of arriving at concepts. They apply in a variety of situations, are in constant referral, and give standardized, known points of reference. Without this conceptualizing, nothing is gained that can be carried over to the better understanding of new experiences. The deposit is what counts, educationally speaking.

Dewey stressed the importance of assessment being applicable to course content and the logical outreach of the material being presented in the classroom. Assessment of material should reinforce important concepts related to the content being studied. The idea of applying these concepts to related material is important in Dewey's approach to assessment and education. Dewey emphasized means as being equal to ends; that is, the way one gains information is as important as the information itself (Abeles, 1995).

Assessment is needed to appraise student progress, to provide guidance and motivation for learning, and to identify areas where improvements are needed in either instruction or the program (Colwell, 1982). Assessment should also serve as a useful and essential tool in the classroom. It can be used to evaluate student progress, set standards, guide instruction, and communicate student progress to parents and administrators (Farrell, 1997).

In 2001, the National Board for Professional Teaching Standards (Linn, 2005) developed guidelines for teacher competencies in assessment. They recommended that successful teachers should be able to

1. Create a variety of assessment tasks and materials for assessing student learning
2. Plan assessments before planning instruction
3. Present assessments at appropriate times in the instructional sequence
4. Ensure that students understand what they are expected to know and be able to do
5. Ensure that students understand how they will be assessed, upon what criteria they will be judged, and how this information will help them to improve
6. Use a variety of meaningful student self-assessment techniques

In addition, the federal government has had a huge impact on assessment in education. The federal government invested $1.3 billion in public education through the Elementary and Secondary Education Act of 1965. With this significant financial investment came a heightened expectation of student performance and accountability (Mark, 1999). In 1983, A Nation at Risk: The Imperative for Educational Reform (National Commission on Excellence in Education, 1983) reported many shortcomings in the American education system, including a steady decline in standardized test scores. The most recent legislation influencing student assessment has been the No Child Left Behind Act
(NCLB) of 2001 (U.S. Department of Education, 2002). The NCLB act requires subject-specific accountability for student learning, which has resulted in large-scale testing at the state level.

Effective assessment measures reveal even more than what students know and understand. They must also indicate how those new understandings evolved. Assessment serves as evidence of the broadening and deepening of students' capacities to solve sophisticated problems, make sensitive judgments, and complete complex projects. It would seem that the development of a complete inventory of assessment types and how to implement these techniques would be of great assistance for educators and administrators (Farrell, 1997).

Assessment in Music Education

Reimer states the following in regard to music tests and testing: The profession needs much more experience in gathering and presenting evidence about the growth of essential musical behaviors, and we need, as well, good tests to help us gather and present this evidence. Tests in the future will be more holistic, more oriented to real-world problem solving and processing of musical information and the making of musical judgments and decisions; that is, to the measurement of musical intelligence in a variety of manifestations. Such tests and other modes of professional evaluation will add to our status as a bona fide curriculum and add to our professional expertise. Tests can be abusive, as we know all too well, but they can also be powerful aids in effective education (Reimer, 1989).

Reimer continues: We are already expert at assessing performance skills and must continue to refine this expertise. Especially important will be improvements in regard to giving students a variety of specific musical performance problems to solve, involving technique, notation, stylistic interpretation, ensemble, and so forth. We also need to evaluate how performers engage themselves intelligently in dealing with problems of process: how they structure a performing problem they are faced with, what imaginative ways they employ to solve it, how they use their musical understanding as an aid, the steps they go through, and their critical judgments about their solutions. We must continue to evaluate the growth of skills, but we must pay far more attention
to assessing the growth of musical intelligence and musical independence as demonstrated by problem solving as relevant to performance (Reimer, 1989).

Brophy (2000) defines one of the purposes of assessment in the music classroom as the opportunity to obtain evidence of the musical growth and progress of students. For the teacher, assessment can also be used to guide instruction and aid in the choice of teaching strategies. Another reason assessment is important is to further validate the music program with parents, students, and administrators. Finally, assessment can provide evidence of accountability for student learning.

Four national music assessments have been administered to students in the United States. In 1971, the National Assessment of Educational Progress (NAEP) administered the first national music assessment to students in three age groups: 9, 13, and 17 years. The purposes of the test were to determine what the music students knew, what they could do, and their attitudes toward music education. The NAEP brought together scholars, teachers, and curriculum specialists to develop the objectives for this assessment. In association with the Educational Testing Service (ETS) of Princeton, New Jersey, the following broad categories of objectives were used in the first of these national music assessments.

1. Perform a piece of music
2. Read standard musical notation
3. Listen to music with understanding
4. Be knowledgeable about some musical instruments, some of the terminology of music, methods of performance, some of the standard literature of music, and some aspects of the history of music
5. Know about musical resources of the community and seek experiences by performing music
6. Make judgments about music, and value the personal worth of music

Results of the assessment indicated that while students' attitudes toward music were positive, their performance on the exercises was largely quite low (Mark, 1996).

The NAEP administered a second national music assessment in 1978. The same age groups were measured. However, the objectives for this assessment changed from the first.

1. Value music as an important realm of human experience
2. Perform music
3. Create music
4. Identify the elements and expressive controls of music
5. Identify and classify music historically and culturally

Some criticisms of the second assessment were that it did not include performance assessment like the first (due to a lack of funding), and that the results were underreported. Overall, the information from the two national assessment reports was of great potential value to the music education profession, but actually had little influence on practices (Mark, 1996).

The next assessment was not administered until 1997 because of a lack of funding and concern for arts education. By means of funding from the National Endowment for the Arts, the assessment project was administered by the Council of Chief State School Officers. This assessment was largely based on the National Standards for Arts Education and measured all of the arts disciplines (music, visual arts, dance, and theater). Only eighth-grade students were administered the test, which measured students' knowledge and ability in creating, performing, and responding. Overall results indicated that while students who participated in music activities performed better than those who did not, a great deficit in students' music knowledge and skills existed (Mark, 1996).

The most recent national assessment took place in 2008 and was again administered by NAEP. Findings were published by NAEP in its series The Nation's Report Card: Arts 2008 (Music and Visual Arts). The assessment was given to a nationally representative sample of 7,900 eighth-grade public and private school students (half in music, half in visual arts). The music portion of the assessment measured students' ability to respond to music in various ways. Students were asked to analyze and describe aspects of music they heard, critique instrumental and vocal performances, and demonstrate their knowledge of standard musical notation and music's role in society.

The average responding score for music was reported on an NAEP scale of 0 to 300. Scores ranged from 105 (for the lowest-performing students) to 194 (for the highest-performing students). In both music and visual arts, scores were higher for White and Asian students compared to Black and Hispanic students. Scores were also higher for female students versus their male counterparts. Scores were significantly lower for lower-income students (eligible for free/reduced lunch) than for those not eligible. In the music assessment, scores were higher for private school versus public school students, and eighth graders attending city schools had a lower average responding score than students who attended suburban, town, or rural schools. Approximately one-third of the students participated in a musical activity such as band, choir, or orchestra (National Assessment of Educational Progress, 2008).

Student Assessment in Music Education

The music classroom is a unique environment in the school setting. The variety in activity and subject matter requires the music educator to approach classroom assessment very carefully. The National Association for Music Education (MENC)
provides the following guidelines for music classroom assessment (MENC: The National Association for Music Education):

Assessment should be standards-based and should reflect the music skills and knowledge that are most important for students to learn. Assessment of student achievement should not be based on the skills and knowledge that are easiest to assess, nor on those for which ready-made assessment devices are available. Instead, it should be based on the extent to which each student has met the standards established, and it should reflect the priorities of the instructional program. Assessment should not be based primarily on where the student ranks relative to a particular class or group. It should be based on whether the student has met specific criteria. In these performance standards, separate criteria have been established for basic, proficient, and advanced levels of achievement.

Assessment should support, enhance, and reinforce learning. Assessment should be viewed by both students and teachers as a continuing, integral part of instruction rather than as an intrusion into (or interruption of) the process of learning. The assessment process should itself be a learning experience, and it should not be conducted or viewed as separate from the learning process. Students should regard assessment as a useful tool rather than as a source of fear or anxiety. They should use it as a means of further learning and as a means of measuring their own progress. When assessment tasks are designed to provide information concerning the extent to which students meet standards that have been established for them, teachers can adjust their instructional programs so as to be more effective.

Assessment should be reliable. Reliability refers to consistency. If an assessment is reliable, then another assessment of the same skills or knowledge will produce essentially the same results. For assessment to be reliable, every student must be assessed by identical procedures, and the assessors must share the same levels of expectation so that a student's score does not depend on who is doing the scoring.

Assessment should be valid. Validity means that the assessment technique actually measures what it claims to measure. The mental processes represented by the scores correspond to the mental processes being assessed. No measurement instrument should be used to measure something it was not designed to measure. If there is a mismatch between assessment strategies and the objectives of the curriculum, the assessment strategies are not valid for that curriculum.

Assessment should be authentic. Authentic assessment means the assessment tasks reflect the essential nature of the skill or knowledge being assessed. The student should actually demonstrate a music behavior in an authentic or realistic situation rather than merely answer written questions about it. For example, the ability to play the recorder should be assessed by having the student play the recorder, not by having the student answer test questions concerning fingerings, hand position, phrasing, and note reading. Assessment does not need to be based on multiple-choice tests or even on paper-and-pencil tests, though those techniques have their uses. Portfolios, performance-based assessment, and other techniques of authentic assessment have been used successfully by music educators for many years; however,
these techniques cannot by themselves solve the assessment problems facing educators. A portfolio is simply a collection of samples of a student's work taken periodically for a specific purpose throughout the instructional process. Those samples must still be assessed, and the assessment requires careful thought about what should go into the portfolio and also great care in developing suitable assessment strategies and appropriate scoring procedures.

Assessment should take a holistic view of music learning. It should not concentrate on isolated facts and minutiae, but should deal with broad concepts, "whole" performances, and complete works of music. Authenticity, like reliability, is a prerequisite to validity.

The process of assessment should be open to review by interested parties. Although assessment of music learning can best be carried out by qualified music teachers, it is important that students, parents, and the public be given sufficient information and help so they too can make judgments about the extent to which music learning is taking place in their schools. If their evaluations are faulty, it should be because of their lack of professional qualifications and not because of lack of information concerning the assessment process. It is especially important that students know what they are to be assessed on, how they are to be assessed, and what criteria will be used to judge their achievement. When appropriate, they should be allowed to participate in developing the criteria by which their work will be assessed.

These guidelines can assist the music educator in making important decisions concerning assessment in the music classroom. However, the added requirements of performance-based ensembles and the larger number of students a teacher deals with at one time make assessment in this environment especially challenging. Four types of assessment (Goolsby, 1999) can be used for evaluation in the instrumental classroom
in a relatively straightforward manner: placement, summative, diagnostic, and formative assessments.

1. Placement assessments include auditions, challenges, and seating assignments, all aimed at determining a student's abilities in order to properly place the student in a program.
2. Summative assessments include concerts, festivals, recitals, and other events where the final product of the group's learning is publicly demonstrated and evaluated.
3. Diagnostic assessment is used to determine where learning difficulties exist. The most obvious and frequently used tool in instrumental music is error detection.
4. Formative assessment is concerned with the regular monitoring of students to make sure learning is taking place. One requirement for effective formative assessment is students' clear understanding of what they should learn.

A wide variety of assessment components have been discussed in the area of instrumental music education (Antmann, 2007; Asmus, 1999; Burrack, 2002; Chiodo, 2001; Cope, 1996; Dirth, 2000; Goolsby, 1999; Hanzlik, 2001; Kancianic, 2006; McCoy, 1991; Norrington, 2006; Pizer, 1990; Reid, 2005; Sears, 2002; Sherman, 2006; Simanton, 2000; Stauffer, 1999; Tracy, 2002). Each component should have a purpose in the broader assessment tool. Each component should also support the goals and instruction of the classroom (Asmus, 1999). The following are commonly used music assessment tools:

ATTENDANCE (concert, rehearsal): accounting for a student's participation in an event. Attendance may also incorporate penalties for students arriving late or leaving early.
COMPUTER MUSIC THEORY PROGRAMS: music theory programs that are computer generated.
CONDUCT/DISCIPLINE: assessing a student based on his or her behavior.
PARTICIPATION: taking part in an event or activity.
PRACTICE LOG/JOURNAL: a self-reported record of an individual's practice.
PLAYING TEST: a performance demonstration by the student on the student's instrument. The material for a playing test varies but may consist of band music, etudes, scales, rudiments, or audition music.
PORTFOLIO: a collection of supporting material.
REQUIREMENT CHECKLISTS: a list of accomplishments students progress through at their own pace.
SELF-ASSESSMENT: a student's assessment of his or her own work.
SIGHT-READING TESTS: a performance demonstration by the student (on the student's instrument) of music he or she is not previously familiar with.
TEACHER OBSERVATIONS: any assessment that relies on observable behavior of a student by the teacher.
WRITTEN TESTS: any test or quiz in written form.

A teacher should focus on assessment options that occur naturally in a music context, that are authentic to the classroom, and that are congruent with instructional goals (Stauffer, 1999). Given the importance of assessment, music teachers need a management system that is as efficient and effortless as possible, while still producing detailed information about individual students (Chiodo, 2001).

Current educational research suggests that teachers should develop and use authentic performance strategies designed to allow students to demonstrate what they have learned and, further, what they can do with their knowledge (Asmus, 1999). Authentic assessments involve the use of alternative strategies for collecting and analyzing information. Students are expected to demonstrate what they have learned by drawing on their knowledge, abilities, and past achievements to solve problems that require them to perform under real-world conditions (U.S. Department of Education, 1996).
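
In practice, several of the components above (playing tests, sight-reading tests, teacher observations) reduce to scoring a performance against a fixed set of criteria. The sketch below shows one way such a score might be recorded; it is a minimal illustration, and the criterion names, weights, and 1-5 scale are hypothetical examples rather than a rubric drawn from any source cited here.

```python
# A minimal sketch of recording a playing test against fixed criteria.
# The criteria, weights, and 1-5 scale are hypothetical examples, not a
# rubric prescribed by any of the sources cited in this study.

CRITERIA = {"tone": 0.30, "rhythm": 0.25, "note accuracy": 0.25, "expression": 0.20}

def score_playing_test(ratings: dict[str, int], scale_max: int = 5) -> float:
    """Convert per-criterion ratings (1 to scale_max) into a 0-100 score."""
    if set(ratings) != set(CRITERIA):
        raise ValueError("every criterion must be rated exactly once")
    weighted = sum(CRITERIA[name] * ratings[name] for name in CRITERIA)
    return round(100 * weighted / scale_max, 1)

# Example: one student's ratings on the 1-5 scale.
print(score_playing_test({"tone": 4, "rhythm": 5, "note accuracy": 4, "expression": 3}))  # 81.0
```

Because every student is scored by the same explicit procedure, a structure like this also supports the reliability guideline discussed above: the score does not depend on who is doing the scoring.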

Assigning Grades to Band Students

The assignment of grades is a complex topic in educational assessment. The grade serves as the primary way a teacher communicates a student's progress and achievement. Suggestions for establishing grading systems for performing arts ensembles have been offered by many of the leading experts on music education (Bowman, 1984; Boyle, 1987; Branum, 1988), but very little research has examined the actual grading practices of ensemble directors.

The National Association for Music Education (MENC) conducted a survey concerning grading practices in 1997. Results showed that music teachers were responsible for assigning grades to a large range of students (from twenty-five to one thousand) and the majority did this using traditional letter grades (A, B, C, D, F). Most teachers who responded assigned grades based on performance-based criteria, while others used criteria such as attendance, effort, behavior, and attitude. Finally, some teachers used precise criteria and point systems, while others used grading procedures that were imprecise (MENC, 1998).

Attendance, effort, behavior, and attitude have long been an important part of music classes. However, it is important to separate non-musical criteria from the grading process. Effort, behavior, and attitude are difficult (if not impossible) to grade objectively. Attendance can be graded objectively, but does not represent a student's musical understanding or achievement. There are many reasons that music teachers use these non-musical criteria when determining students' grades. With the large numbers of students music teachers have, it can be difficult to thoroughly and accurately assess all of them on musical criteria. Also, categories such as attendance,
effort, and behavior are important to productive music rehearsals, so many teachers may feel it is necessary to include them in grading practices (MENC, 1998).

In another study concerning grading practices, Drake (1984) found that attendance and participation were the principal criteria for assigning grades to students in performing groups. Attitude, preparation, and satisfactory performance were rarely mentioned as criteria for grades. McCoy (1988) found similar results when investigating the grading practices of high school band and choral directors. Ninety-five percent of the reported grading systems included some type of non-music criteria such as attendance and behavior. The study also found that seventy-five percent of these grading systems included criteria related to performance (psychomotor criteria), sixty-six percent included criteria related to attitude (affective criteria), and forty-two percent included criteria related to factual knowledge about music (cognitive criteria).

Bradford (2003) states that students, parents, and administrators benefit when curriculum-based assessment is used by teachers. Students see grades not as subjective rewards or punishments, but as accurate reflections of knowledge and achievement. They become more confident and independent. Parents find it easier to gauge their students' progress and understand the development of their student in regard to the performing ensemble. Administrators begin to see music as an academic class, rather than solely a venue for entertainment.

Developing Music Assessment Models

The next section is a review of existing assessment models developed in the last forty years in the area of art and, more specifically, music. These models advanced the study of student assessment and serve as a guide to educators, enhancing the options available in the classroom.

Arts PROPEL

Since 1967, a research group at the Harvard Graduate School of Education has investigated the development of learning processes in children and adults. The name Arts PROPEL is an acronym for Production (making music by singing, playing an instrument, or composing); Perception (listening to music); and Reflection (thinking about what one does, both in words and in the appropriate symbol system) (Mark, 1996).

Project Zero was founded by the philosopher Nelson Goodman to study and improve education in the arts. Goodman believed arts learning should be studied as a serious cognitive activity. Project Zero's mission is to understand and enhance learning, thinking, and creativity in the arts, as well as humanistic and scientific disciplines, at the individual and institutional levels. David Perkins and Howard Gardner served as co-directors of Project Zero from 1972 to 2000, when the current director, Dr. Steve Seidel, was named. Over the years, Project Zero has maintained a strong research commitment in the arts. Much of its work takes place in American public schools, particularly those that serve disadvantaged populations (Harvard Graduate School of Education, 2010).

In 1985, a joint project among Project Zero, the Pittsburgh (Pennsylvania) Public Schools, and the Educational Testing Service observed and documented music learning for two years. Their findings determined that students are the constructors of knowledge. Understanding occurs when students organize, manipulate, and apply concepts themselves. In addition, it was found that students learn best when they actively perform and create. The Arts PROPEL instructional and assessment strategies are based on helping the student develop independence and expertise in learning the procedural knowledge associated with music. This model encourages the use of
portfolios in order for students to examine their progress over time. Portfolios may include video and/or audiotapes of student performance, teacher evaluations, and student self-evaluations (Sears, 2002).

Comprehensive Musicianship through Performance (CMP)

The Comprehensive Musicianship through Performance (CMP) project was initiated in Wisconsin in 1977 as a reaction to performance-based music programs that were accused of producing outstanding performance groups without developing a depth of musical understanding. CMP is a process of instruction that promotes performance with understanding. The project began with a group of respected ensemble teachers selected from a diverse group of school districts. The group developed and tested a process for planning rehearsal instruction to include performance skills and also general knowledge about music. In the CMP model, the teacher serves as a facilitator who, in addition to rehearsing selected works, questions the students on a variety of subjects including musical style, composer background, form, keys, and other music knowledge.

Student involvement in the learning process is an important aspect of the CMP model. Teachers are instructed to involve students in a variety of hands-on activities including listening, analyzing, arranging, composing, discussing, and evaluating music. The students are encouraged to recognize these activities as real-life applications of musical knowledge, not isolated classroom exercises. Student involvement can also extend to concert performances in the form of students researching and writing program notes or demonstrating important musical traits of a given piece of music to the audience. Student involvement is also encouraged in the process of assessment. The CMP model promotes the use of self- and peer assessments through recordings of their
own performances. Student portfolios are also used in an effort to help students become independent learners possessing the ability to make decisions about their own work, including possible directions for future study (Pontious, 2008).

State Collaborative Assessment and Student Standards (SCASS)

In 1994, the Council of Chief State School Officers established the State Collaborative Assessment and Student Standards (SCASS) to assist states in developing standards and assessment tools. The SCASS ARTS is a nationwide group addressing the refinement of arts education assessment. Its objective is to develop assessment materials for large-scale, district-level, and classroom-based assessment in dance, music, theater, and visual art. According to the SCASS website, the group has developed and implemented a web-based item development process that uses professional development training at the state level and the submission of items to a website, where they are screened for content and assessment accuracy by a panel of experts according to criteria developed by the group and either sent back to the originator or advanced to the final pool.

The SCASS ARTS roster currently includes representatives from California, Louisiana, Minnesota, New Hampshire, and New Jersey. The group offers numerous aids to educators concerning arts assessment, including an Arts Handbook, an Arts Assessment Glossary, and an Arts Assessment Bibliography. In addition, the group (Council of Chief State School Officers, 2009) offers the following publications for purchase:

1. Guidelines for Video-Taping Performance Assessment
2. Presentation Materials from the National Arts Assessment Institute
3. Arts Education Assessment Consortium Year-End Report and Collection of Refined Exercises
4. Collection of Unrefined Arts Assessment Exercises Developed for the 1997 NAEP Arts Education Assessment

Current Research

Previous researchers designed studies to identify the current assessment practices music educators are implementing. Each study has unique purposes and results but is related to the current study and influenced how this study was designed. This review will conclude with a summary of the research and a discussion of the implications for the current study.

Antmann (2007) designed a survey to determine assessment methods and grading practices of middle school band directors. Subjects selected for his study were middle school band directors (N = 59) of successful middle school band programs throughout the state of Florida. Of the twenty-seven surveys returned, Antmann discovered that the most commonly used assessment tool in middle school band classes is the individual playing test. Other frequently used assessment components included practice journals, student self-assessments, and requirement checklists. The categories these directors found important to assigning grades included playing tests, participation, concert attendance, conduct/discipline, and attendance (rehearsals). The study revealed some common assessment and grading habits of successful teachers. These include regular assessment of a student's ability to perform on instruments and to read and notate music; assessment of musical skills and abilities during performance; musicianship requirements when determining student grading; and non-musical criteria such as attendance, participation, and conduct (Antmann, 2007).

In 2002, Sears developed a study whose purpose was to describe how instrumental music educators document and assess their students' progress. The study specifically targeted whether instrumental music is formally assessed and what types of assessment are currently in use by middle school instructors in southeastern Massachusetts. Forty-two instructors completed a survey; results showed 61% of teachers surveyed consider a student's attendance as criteria for assessment. Almost 90% of these instructors consider a student's effort as a part of the assessment. The most common assessment components identified were scale performance tests and practice logs. Teachers also used portfolios, method books, concerts, worksheets, and quizzes as assessment strategies. Sears recommended that all arts educators take time to customize teaching materials with appropriate assessments: We have a myriad of options available to us. There is no shortage of available rubrics for us to modify. There are endless ways to build a portfolio over the length of a student's instrumental study. None of this is accomplished with a check mark for attendance or a pat on the back for effort. The extra time we take will help provide our students with a meaningful experience in the arts (Sears, 2002).

Hanzlik (2001) examined the types and frequency of assessment methods used by Iowa high school band directors and their attitudes toward such assessment. He also examined the effects of selected teacher background variables on teachers' attitudes toward assessment. Of the 200 band directors randomly surveyed from the
400 high schools listed in the 1988-89 Iowa High School Music Association's membership list, 154 surveys (77%) were returned. Assessment practices used by band directors 80% of the time were playing band music/scales/rudiments, sight-reading music, teacher observations, and playing etudes. Assessment practices such as student journals, portfolios, reflective writing, teacher surveys, and student displays were never used by at least 80% of the band directors. Band directors in Iowa indicated the assessment practices they used most often were related to the psychomotor task of playing an instrument. The other five assessment practices identified by Iowa band directors as being used most often were contest ballots, concert attendance, teacher observation, student discussion, and sight-reading. The instructional process in Iowa band rooms emphasized performance learning and not cognitive or affective learning (Hanzlik, 2001).

In 2006, Kancianic investigated relationships among characteristics of high school band directors and their school settings, purposes and use of classroom assessment methods, and factors that influence the use of classroom assessments. The National Association for Music Education (MENC) provided a membership list from which 2,000 high school band directors were selected by simple random sampling. The overall survey return rate was 39.75% (N = 795); the usable response was 31.7% (N = 634). Classroom assessments used by high school band directors tend to focus on evaluating student performance skills. Students are not generally involved in the planning or execution of assessment. Those who teach more band classes use student self-assessment more often. High school band directors use practice logs less frequently. Three prevalent issues emerged from the results: teacher autonomy, the
role of assessment training, and teacher workload. Lack of time was viewed as a major impediment to assessment (Kancianic, 2006).

Sherman (2006) researched the following questions: What tools are currently used for assessment in band programs in public high schools? Who performs the assessments? Is there a distinction between assessment and grading? And is the process accepted by all constituencies? A survey was distributed to a random sample of 500 high school band directors from the Eastern Region of the Music Educators National Conference (National Association for Music Education). Participation was voluntary; the response rate was statistically significant, with 158 usable responses. There is some degree of consistency among high school band directors about the types of materials used for assessments, the way assessments were performed, and calculation and conversions in assigning grades. Most directors included some means for assessing their students' attendance and demeanor or behavior during contact hours. Terms used to describe these items were rehearsal technique, class participation, class preparation, and effort. Disturbing perspectives on the issues of assessment and grading were as follows: 1) too much time consumption; 2) assessments only served the purpose of grade justification; and 3) some directors tend to give up during the process and succumb to assigning As to all of their students simply to eliminate any backlash (Sherman, 2006).

Simanton (2000) included the following purposes for his study: (a) examine current assessment and grading practices in American high school bands; (b) gauge local satisfaction with current assessment and grading practices; and (c) investigate variations in practice satisfaction based on regional, school, and band director variables.

Data were collected (via surveys) from 202 high school band directors using a regionally stratified sample based on the six regions comprising the Music Educators National Conference (MENC). On average, 56% of band grades come from non-performing criteria (attendance, participation, and attitude). Performance of band music accounts for another 25.9% of band grades. The remainder of student grades comes from a combination of technique and other practices (mostly quizzes and practice logs). Within these criteria weights, grading appears to be rather generous. Band directors report giving As to 75.4% of their students and Bs to another 16.3% (Simanton, 2000).

In 1999, Hill investigated assessment procedures, assessment attitudes, and grading policies currently used in band classrooms in Mississippi public schools. Data were obtained from 327 student members of the Mississippi Bandmasters Association State Band Clinic, 93 members of the Mississippi Bandmasters Association, and 38 randomly selected public school administrators. Results indicated grades were an important part of the instrumental classroom and students were motivated to make good grades. All three survey groups indicated non-music criteria such as attendance, participation, and attitude were used in determining students' nine-week grades. While traditional forms of evaluation such as portfolios and paper-and-pencil tests were recognized as useful in the band classroom, these assessment types comprised 0-25% of the nine-week grade (Hill, 1999).

Finally, in 2001, McCreary examined methods and procedures currently used in evaluating secondary school instrumental (band and orchestra) students, and compared student perceptions and their teachers' perceptions of assessment. Survey respondents
comprised 467 secondary instrumental music students and their ten respective teachers on the island of Oahu, in the state of Hawaii. Findings indicated that instrumental music teachers predominantly used traditional forms of assessment. Paper-and-pencil tests, playing tests, practice time, and attendance and/or attitude were used to evaluate their students, with a preference for playing tests and practice time. Eighty percent of the teachers and 93% of the students surveyed responded that none or mostly none of the grade was based on journals and/or portfolios. A relatively equal balance was found between music and non-music assessment criteria. Results showed that most students and teachers perceived that the criterion of playing tests comprised roughly half of the grade and the non-music criteria of attendance and/or attitude comprised the other half of the grade (McCreary, 2001).

Summary of Research: Study Implications

First, there is great similarity in the way these studies were developed and carried out. All included a researcher-developed survey, with survey groups ranging from 27 to 634. Most of the studies were fairly regional in their makeup, dealing mostly with band programs from an individual state. In all cases researchers suggest future studies in this area should include a wider cross-section of teachers in various parts of the country.

Second, all studies were interested in finding commonalities among music educators in the use of assessment components in their current practice of grading students. Results of these studies indicate a widespread use of performance-based tests and non-music criteria to establish student grades. Portfolios, journals, and student self-assessment were rarely used by teachers in these studies.

Third, many studies separated the assessment components into either a musical or non-musical category. Some studies also examined what percent of the student's grade each of these categories comprised.

Farrell (1997) says: There is no one right way to assess students. Balancing assessment strategies to use a variety of formats is most likely to result in reliable and valid information. Expanding our assessment practices has enormous implications because assessment is tied to the content of the curriculum, to what teachers do in the classroom, and to the standards we set.

Farrell's statement is important to consider as we address the topic of current assessment practices. We observe many commonalities in assessment approaches but continue to witness a wide variety of assessment tools being implemented by directors across the country. Any effort to construct a general assessment model should keep in mind that each teaching situation is unique. A good model should have the ability to be applied to different teaching environments and still remain effective in its results.

CHAPTER 3
METHODOLOGY AND PROCEDURES

This chapter includes an explanation of the methodology and procedures used in this study. The chapter will begin with a definition of the research method used in the study, followed by a description of the participating subjects, procedures, and data collection method. The chapter will continue with a description of the statistical procedures and conclude with an examination of a pilot study administered by this researcher on the subject of student assessment in bands.

Research Method

The research method of this study was descriptive, administering a survey to collect data. Surveys represent one of the most common types of quantitative research. Survey research is the method of gathering data from respondents thought to be representative of some population, using an instrument composed of closed-structure or open-ended items (questions). Creswell (2002) states that surveys help describe the trends in a population (p. 421). Survey research is an efficient method for gathering data from a large population and is a common and valuable approach to determine status (Abeles, 1992, p. 231).

Subjects

Subjects for the study consisted of high school band directors, teaching in the United States, who are members of MENC: The National Association for Music Education. MENC maintains the complete population of directors on the NetForum 2.2 customized database. The sample size for the study was limited to 5,000, as this was the maximum random sample MENC could provide. Alreck and Settle (2004) said a sample
larger than 10% of the target population is rarely necessary, because as sample size increases, sampling error decreases. Creswell (2002) suggested using a sample size of 350 for survey research, and Sudman (1976) recommended using at least 1,000 participants for a national survey. Simple random sampling was used to select participants from the MENC list. A simple random sample is preferred to other sampling procedures because it represents the target population more accurately and gives each member of the sampling frame an equal probability of selection (Alreck, 2004). MENC provided a randomly selected list of 5,000 directors from a total population of approximately 15,000 using the reports function of the NetForum 2.2 database.

Procedures

The questionnaire (discussed in detail in the data collection section) was converted to a web document using SurveyMonkey and was administered electronically via the internet. Participants were notified of the study through an email generated by MENC which included (a) the topic of the study, (b) approximately how long the questionnaire would take to complete, (c) the deadline for completion of the questionnaire, and (d) a link to the questionnaire. MENC also sent out a reminder one week before the questionnaire deadline.

Data Collection

A questionnaire was designed to collect data on the assessment practices of high school band directors. The questionnaire was based on questionnaires and surveys used in earlier research (Hanzlik, 2001; Hill, 1999; Kancianic, 2006; McCreary, 2001; Sherman, 2006; Simanton, 2000). The questionnaire used in this study was constructed using a combination of open- and closed-ended questions. Closed-ended question
designs included (a) mutually exclusive answers, (b) exhaustive response categories, (c) numerical rating scales, and (d) semantic differential scaling systems. The questionnaire was designed in five sections: 1) Background Information, 2) Grading Information, 3) Assessment Philosophy, 4) Assessment Information, and 5) Assessment Model (Appendix A).

The Background Information section is divided into three areas: (a) high school/community information, (b) band program information, and (c) director information. The high school/community section provided data on (a) type of school, (b) school enrollment, and (c) the type and socioeconomic status of the community in which the school is located. The band program section provided information on (a) band program enrollment, (b) the number of students involved in concert band(s), (c) the number of concert bands, (d) the average number of students in each concert ensemble, and (e) the total number of minutes each concert band meets per week. The director information section provided data on the director's (a) years of teaching experience, (b) number of years spent in the current position, (c) educational background, and (d) the number of band directors employed at the school.

In the Grading Information section, directors were asked to provide (a) specifics concerning the type of grades they assign and (b) how those grades are incorporated into the school grading system. Specific data requested included (a) the number and duration of marking periods; (b) the type of grade assigned and its effect on the student's overall GPA; and (c) if a weighted grading system is used at the school, how the band grade is weighted.

The Assessment Philosophy section provided questions associated with why directors assess their students and asked them to rate the importance of a variety of criteria related to assessment. Questions included (a) how important are the following purposes of student assessment, (b) what importance do you place on the following criteria in the evaluation of your band students, and (c) how important are the following assessment categories in a student assessment model for bands. This section also required directors to address what factors influence their decisions concerning student assessment and what has best prepared them to make decisions concerning student assessment.

The Assessment Information section presented specific questions about the way directors assess the students in their concert bands. The questions in this section asked directors to provide (a) the specific assessment components they use and the percentage these components are assigned in the band grade, (b) the procedure for data collection they use, (c) the materials they use in performance-based tests, and (d) the importance of varying characteristics in their assessment design.

The final section, Assessment Model, includes one question asking the directors to create what they believe to be a balanced assessment tool by assigning percentages (totaling 100%) to the following three assessment components: (a) individual testing and evaluation, (b) performance attendance and contribution, and (c) rehearsal attendance and contribution.

Statistical Procedures

Results of the questionnaire produced a data set that was quantitatively analyzed. The researcher used a variety of descriptive statistics to summarize and explain the results of the information collected.
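
As a rough illustration of the descriptive summaries described here (and detailed in the paragraphs that follow), the sketch below computes a mean and standard deviation for each rating item and lists items in descending mean order. The item labels and responses are invented for the example and are not data from this study.

```python
import statistics

# Hypothetical 1-5 importance ratings for three survey items.
# Both the item labels and the responses are invented for illustration.
responses = {
    "provide feedback to students": [5, 4, 5, 4, 5],
    "justify grades to parents": [3, 4, 3, 5, 4],
    "guide future instruction": [4, 4, 5, 5, 4],
}

# Report each item's mean and standard deviation, highest mean first,
# mirroring the descending-mean reporting described in this section.
ranked = sorted(responses, key=lambda item: statistics.mean(responses[item]), reverse=True)
for item in ranked:
    scores = responses[item]
    print(f"{item}: M = {statistics.mean(scores):.2f}, SD = {statistics.stdev(scores):.2f}")
```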

In the Background Information and Grading Information sections, the researcher used the categorical information gathered to describe the subjects who responded to the questionnaire and their teaching situations. The results of these questions were analyzed for measures of central tendency (mean) in an effort to better understand the range of categories the questionnaire data came from. The researcher used appropriate charts and graphs to illustrate this information.

Statistical analysis for the Assessment Philosophy and Assessment Information sections differed for each question. Based on the design of the question, different statistical analyses were done. For rating questions, a mean score analysis determined the most common response. For questions using rankings, mean responses were reported in descending mean order to accurately show the highest-ranking response to the question. Measures of variability (in the form of standard deviation) were used on all questions to indicate what commonalities are present among directors' responses. Statistical analysis for the final section (Assessment Model) showed the mean responses to the question about the importance placed on the assessment components by participating directors.

Pilot Study

A pilot study (LaCognata, 2010) was administered in 2008. Locke, Spirduso, and Silverman (2000) said a pilot study has the potential to provide useful information relative to the design of the main study. The pilot included 158 high school band directors, 61 from North Carolina and 97 from Missouri. Participants were invited via email to complete a survey through the website SurveyMonkey.com addressing student assessment in the high school band ensemble class. A designated band director in each state distributed email invitations. The designated band directors were sent an
electronic cover letter of invitation, and they forwarded it through their respective listservs to high school directors in their states. Participants were asked to complete the survey within a two-week period. Of the 158 directors contacted, a total of 45 completed the survey, resulting in a response rate of 28%.

The purpose of this study was to gather information about the assessment practices used in secondary band programs from a sample of in-service band directors. The 45 directors who responded provided valuable data, discussed here in the context of the research questions that guided this inquiry.

Research Question 1: In what specific ways are high school band directors assessing students in their ensemble classes? Findings of this study indicate that participation, performances, and performance-based tests are the primary ways directors are assessing students in their band ensemble classrooms. Attendance and conduct/discipline also play a vital role in this process. These results were consistent with those of McCreary (2001), who found that playing tests were among the most popular assessment methods used by directors. In addition, Sherman (2006) found an emphasis on performance in regard to student assessment. Sherman also found that attendance and participation were important assessment components used by the directors. Finally, Kancianic (2006) also found that the classroom assessments used by high school band directors tend to focus on the evaluation of student performance skills.

Results of the study indicated that portfolios, peer assessments, and requirement checklists were assessment components rarely used by directors. Hanzlik (2001) had similar findings, stating that assessment practices such as student journals, portfolios,
62 reflective writing, teacher surveys and student displays were never used by at least 80% of band directors. Research Question 2: What frequency are assessments components being implemented by high school band directors? The directors indicated a wide variety in frequency of use concerning individual assessment components by directors. On a weekly basis, results indicated that participation, attendance, conduct/discipline, and attitude are the most frequently used components. Performance based tests and written tests/worksheets are most often used monthly, and performances are used per grading period by directors to assess their students. Kancianic (2006) found similar results in frequency of use of performance based tests by teachers. Students playing in small ensembles, playing alone for the teacher, playing alone in front of the class, an d playing with others in a concert all rank among the top 10 most frequently used assessment components. The least frequently used components are portfolios, Smart Music peer assessment, and computer assisted programs. Again, Kancianic (2006) echoed thes e results finding students creating portfolios and students using computers to assess their learning were among the least frequent assessment components directors used to assess their students. Research Question 3: What degree of importance do the high s chool band directors give to (a) individual testing and evaluation, (b) performance attendance and contribution, and (c) rehearsal attendance and contribution in an assessment model? R esults of the current study indicate directors assign the following percentages to the three listed assessment components: 34.72%, individual testing and evaluation; 32.79%, performa nce attendance and contribution; and 31.79%, rehearsal attendance and contribution. The importance placed on these components by the directors is also


The importance placed on these components by the directors is also evident in the directors' use of assessment components: performances, performance-based tests, participation, and attendance grades ranked as the top four responses, receiving the highest percentage allocation by directors. The next two ranked assessment components were written tests/worksheets and attitude grades. The fact that directors cited an almost perfect balance among the three suggested assessment components may indicate a starting place in the development of an assessment model for high school band ensemble classes. Certainly, incorporating these assessment components seems necessary in the band classroom for successful student assessment. The pilot study provided valuable feedback that was used to guide the present study (Appendix B).


CHAPTER 4
RESULTS

The MENC sent a random sample of 5,000 high school band directors from across the United States an emailed invitation to participate in the survey. Because of incorrect or changed addresses, 500 emails were immediately returned as undeliverable. Of the 4,500 directors who received the email, a total of 607 directors followed the survey link and opened the survey. From that group, 454 directors completed the survey, resulting in a total response rate of 10% of the original sample; of the 607 responding directors, 75% completed the survey. Description and analysis of the survey results are presented in conjunction with the questions as they appear on the questionnaire.
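The response-rate figures above follow directly from the reported counts. A minimal sketch of the arithmetic in Python (the counts come from the text; computing the response rate against delivered invitations and the completion rate against opened surveys is an assumption based on the wording above):

    # Survey response arithmetic from the counts reported in this chapter.
    invited = 5000          # random sample emailed by MENC
    undeliverable = 500     # emails immediately returned
    delivered = invited - undeliverable   # 4,500 directors received the email
    opened = 607            # directors who followed the link and opened the survey
    completed = 454         # directors who completed the survey

    response_rate = completed / delivered   # ~0.101, the reported 10%
    completion_rate = completed / opened    # ~0.748, the reported 75%

    print(f"Response rate: {response_rate:.1%}")      # Response rate: 10.1%
    print(f"Completion rate: {completion_rate:.1%}")  # Completion rate: 74.8%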


Background Information

Question 1: Type of school. The school type of the directors participating in the questionnaire showed that a large percentage teach in public schools. Results were as follows: Public = 89.6% (407 schools), Private = 8.6% (39 schools), Charter = 0.9% (4 schools), and Other = 0.9% (Figure 4-1).

Question 2: Number of students (9th to 12th grade) enrolled at your high school. School enrollment of the 454 schools showed a more balanced representation in each of the five population categories, with smaller schools having higher percentages. Results were as follows: 1 to 500 students = 32.4% (147 schools); 501 to 1,000 students = 22.2% (101 schools); 1,001 to 1,500 students = 20.5% (93 schools); 1,501 to 2,000 students = 12.8% (58 schools); and 2,001 or more students = 12.1% (55 schools) (Figure 4-2).

Question 3: In what type of community is the school located? The community type results again showed representation in each category, with Urban/Inner City schools being the least represented at 11.7% (53 schools). The remaining results were Suburban = 31.9% (145 schools), Town = 26.9% (122 schools), and Rural/Remote = 29.5% (134 schools) (Figure 4-3).

Question 4: What is the socio-economic status of the community? While the socio-economic status of the community showed representation in each category, almost half of the schools reported Low/Middle at 42.3% (192 schools), and a small percentage of schools reported High at 4.4% (20 schools). The remaining results were Low = 11.2% (51 schools), Middle = 22.9% (104 schools), and Middle/High = 19.2% (87 schools) (Figure 4-4).

Question 5: Total number of students involved in the band program. The size of the band programs the participating directors taught had representation in each of the five categories. Results were as follows: 1 to 50 students = 26.7% (121 schools); 51 to 100 students = 34.4% (156 schools); 101 to 150 students = 22.2% (101 schools); 151 to 200 students = 9.5% (43 schools); and 201 or more students = 7.3% (33 schools) (Figure 4-5).

Question 6: Number of students involved in concert band(s). The number of students involved in concert bands also had representation in each category, with 1 to 50 students = 38.1% (173 schools); 51 to 100 students = 33.0% (150 schools); 101 to 150 students = 17.4% (79 schools); 151 to 200 students = 6.6% (30 schools); and 201 or more students = 4.8% (22 schools) (Figure 4-6).


Question 7: Number of concert bands at your school. The number of concert bands being taught showed the following results: 1 concert band = 53.1% (241 schools), 2 concert bands = 27.8% (126 schools), 3 concert bands = 14.3% (65 schools), 4 concert bands = 3.1% (14 schools), and 5 or more concert bands = 1.8% (8 schools) (Figure 4-7).

Question 8: Average number of students in each concert band. Finally, the average number of students in each concert band showed representation in each of the five categories: 1 to 15 students = 4.0% (18 schools); 16 to 30 students = 19.8% (90 schools); 31 to 45 students = 30.4% (138 schools); 46 to 60 students = 29.7% (135 schools); and 61 or more students = 16.1% (73 schools) (Figure 4-8).

Question 9: Years of experience teaching high school band (including this year). Most participants have been teaching 17 or more years (36.8%, 167 directors). Remaining results: 1 to 4 years = 19.8% (90 directors); 5 to 8 years = 18.3% (83 directors); 9 to 12 years = 15.6% (71 directors); and 13 to 16 years = 9.5% (43 directors) (Figure 4-9).

Question 10: Years teaching in your current position (including this year). Almost half of the directors had taught 1 to 4 years in their current position (40.5%, 184 directors). Remaining results were as follows: 5 to 8 years = 21.4% (97 directors); 9 to 12 years = 12.8% (58 directors); 13 to 16 years = 7.7% (35 directors); and 17 or more years = 17.6% (80 directors) (Figure 4-10).

Question 11: Highest degree earned in music/music education. Most directors reported having a master's degree (55.5%, 252 directors). Many had a bachelor's degree (40.3%, 183 directors). One director is teaching with an associate's degree, sixteen directors earned doctoral degrees, and two directors reported post-doctoral study (Figure 4-11).


Question 12: Number of band directors employed at your school. More than half of the directors manage the band program alone (69.2%, 314 schools). Other results: 1.5 directors = 8.1% (37 schools); 2 directors = 16.3% (74 schools); 2.5 directors = 2.4% (11 schools); and 3 or more directors = 4.0% (18 schools).

Grading Information

Question 13: How many marking periods does your school have per year? Directors reported a variety of grading periods, ranging from 2 (a semester system) to 12 grading periods per year. Most directors indicated 4 grading periods (quarter system) (mean = 4.48, median = 4, mode = 4).

Question 14: How many weeks long is a typical marking period? The length of the grading period also varied, from 4.5 weeks to 18 weeks. Most directors indicated that 9 weeks was the typical length of a grading period (mean = 9.49, median = 9, mode = 9).

Question 15: What type of grade do you assign at the end of a marking period? Most directors indicated that they assign a letter grade (54.2%, 246 schools). The next largest group said they assign number grades (31.5%, 143 schools) at the end of a grading period. Other results: No grades assigned = 0.2% (1 school), Pass/Fail = 0.2% (1 school), Written Comments = 0.2% (1 school), and Combination of types = 13.7% (62 schools) (Figure 4-12).


Question 16: Does the grade given in your band ensemble class affect the students' overall grade point average (GPA)? Most directors (95.2%, 432 schools) reported that the grade given in the band ensemble class affects the students' overall grade point average. Only 4% (18 schools) indicated that the band grade does not affect students' GPA, and 0.9% (4 schools) cited other circumstances.

Question 17: Is there a weighted grading system being used in your school? Most directors reported that a weighted grading system (one in which higher-level classes are assigned more value in the students' overall GPA) is used in their schools (64.5%, 293 schools).

Question 18: Is there a weighted option in the grade given in your band ensemble class? Of the 293 schools that indicated they have a weighted grading system, only 19.8% (90 schools) of directors said there was a weighted option in the grade given in their band ensemble class. Directors reporting yes indicated a weighted grade was assigned in the following instances: top ensemble, upper-level students (11th and 12th graders), honors credit option, or students choosing to do extra work or participate in extra assessments.

Assessment Philosophy

Question 19: How important are the following purposes of student assessment? Participants were presented with a list of sixteen purposes for assessing students in band. Directors rated the importance of these purposes using a 5-point Likert-type scale ranging from 1 (not important) to 5 (extremely important). To provide feedback to students (M = 4.63) and to determine what concepts students are failing to understand (M = 4.45) were among the most important purposes. To determine whether students were practicing at home (M = 3.37) and to rank students according to individual performance levels (M = 2.81) were the least important purposes of student assessment (Table 4-1).


Question 20: How important are the following criteria in the evaluation of your band students? Participants were asked to rate the importance of ten different criteria in the evaluation of their band students. The rating scale was a 5-point Likert-type scale ranging from 1 (not important) to 5 (extremely important). The ability to play an instrument in an ensemble (M = 4.26) and individual playing ability on an instrument (M = 4.16) were rated among the most important. The ability to improvise melodies, variations, and accompaniment (M = 2.32) and the ability to compose music (M = 2.02) were rated as the least important (Table 4-2).

Question 21: How important are the following assessment categories in a student assessment model for band? Participants were asked to rate the importance of four assessment categories in a student assessment model for bands. Summative assessment (i.e., concerts, festivals, recitals) was ranked the most important (M = 4.27); formative assessment (i.e., playing tests) was ranked second (M = 4.03). Diagnostic assessment (i.e., error detection) was ranked next (M = 3.89), and placement assessment (i.e., auditions, challenges) was ranked the least important (M = 3.27) (Table 4-3).

Question 22: How influential are the following factors on the assessment methods you use? Respondents were presented with a list of sixteen factors that might influence their choice of assessment methods. Participants rated the level of influence using a 5-point Likert-type scale ranging from 1 (not influential) to 5 (extremely influential).
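Throughout this chapter, each Likert-type item is summarized by its mean (M) and standard deviation (SD) across respondents. A minimal sketch of that summary computation, using hypothetical ratings rather than the study's raw responses, and assuming the sample standard deviation (the study does not specify which form was used):

    from statistics import mean, stdev

    # Hypothetical 1-5 Likert ratings for a single survey item
    # (illustrative only; the study's raw response data are not reproduced here).
    ratings = [5, 4, 5, 3, 4, 5, 4, 2, 5, 4]

    item_mean = mean(ratings)   # reported as M in Tables 4-1 through 4-10
    item_sd = stdev(ratings)    # reported as SD

    print(f"M = {item_mean:.2f}, SD = {item_sd:.2f}")   # M = 4.10, SD = 0.99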


The directors' personal philosophy of education (M = 4.38) and the objectives or goals of your class (M = 4.31) had a high degree of influence on the choice of assessment method. Requirements set by the school district (M = 2.73) and the assessment method implemented in the high school band program you attended (M = 2.37) had a low degree of influence, falling below the moderately influential response. Also influential in determining assessment methods were the amount of available class time (M = 4.20), the demands of the ensemble's performance schedule (M = 3.90), and the expectations of your students (M = 3.53) (Table 4-4).

Question 23: How do you feel the following have prepared or are preparing you to assess the students in your band program? Participants were asked to rate how well eight different factors have prepared (or are preparing) them to assess the students in their band program. Discussions with colleagues (M = 3.98) and clinics at professional conferences (M = 3.64) received the highest mean responses, while state or district standards (M = 2.72) and teacher in-service sessions (M = 2.27) received the lowest (Table 4-5).

Assessment Information

Question 24: Which of the following assessment components do you use to determine a student's grade? Participants were presented with a list of sixteen assessment components and asked to select which they use to determine a student's grade. The two most prevalent components used by directors were participation (95.6%) and performances (92.1%). The two least prevalent were peer assessment (9.5%) and portfolios (7.5%) (Table 4-6).


Question 25: Please enter the percentage of each component you use to determine your grades. Using the same list of assessment components, participants were asked to enter the percentage of each component used in determining their grades. Again, performances (26.63%) and participation (22.27%) were assigned the highest percentages, and portfolios (6.67%) and peer assessments (5.72%) were assigned the lowest percentages (Table 4-6).

Question 26: Which of the following procedures for data collection do you use when assessing your students? From a list of twelve options, participants were asked which procedures for data collection they use when assessing their students. Teacher observation (88.0%), students play individually in class (85.4%), and students play in a group in class (80.0%) were the most used procedures. Students record themselves playing in a group (13.1%), Smart Music (13.1%), and computer-assisted program (5.1%) were the least used procedures for collecting assessment data (Table 4-7).

Question 27: If you use performance-based tests when assessing students, what materials do you utilize? Participants who use performance-based tests in the assessment of their students were asked to select what materials they use from a list of seven options. The two most prevalent responses were scales/rudiments (93.3%) and band music (92.4%). Method book exercises, sight-reading, etudes, and audition music were also selected by about half of the participants. Other materials directors use include chamber ensemble music, chorales, and rhythm sheets (Table 4-8).


Question 28: The following is a list of characteristics that have been traditionally used in assessment models of band students. Participants were asked to rate the importance of fifteen characteristics traditionally used in assessment models of band students. Mean responses placed each of the characteristics between the moderately important and extremely important response options. Reflects the music skills and knowledge that are most important for students to learn (M = 4.32); supports, enhances, and reinforces learning (M = 4.29); and is reliable and valid (M = 4.25) were rated as the most important. The least important characteristics were includes a variety of assessment components (M = 3.69); is open to review by interested parties (M = 3.49); and includes both musical and non-musical components (M = 3.15). No characteristic was rated below the moderately important response option by directors (Table 4-9).

Question 29: Rate your agreement level with the following statements concerning assessment. Participants were asked to rate their agreement level with eight statements concerning assessment. Statements ranged from the satisfaction of directors, students, parents, and administrators with current band assessment practices to the level of interest in finding other ways to assess students (Table 4-10).

Assessment Model

Question 30: Using the following three assessment components: (a) individual testing and evaluation, (b) performance attendance and contribution, and (c) rehearsal attendance and contribution, assign percentages (totaling 100%) to create what you believe to be a balanced assessment tool for band students. The final survey question asked participants to assign the percentages of weight that the components should have in a balanced assessment tool for band. Directors assigned the following mean percentages to these components: (a) individual testing and evaluation = 30.57% (SD = 13.78); (b) performance attendance and contribution = 34.70% (SD = 13.38); and (c) rehearsal attendance and contribution = 34.95% (SD = 12.86) (Figure 4-13).


Figure 4-1. School type (N = 454)

Figure 4-2. School enrollment (N = 454)


Figure 4-3. Community type of school (N = 454)

Figure 4-4. Socio-economic status of school community (N = 454)


Figure 4-5. Student enrollment in band program (N = 454)

Figure 4-6. Student enrollment in concert band(s) (N = 454)


Figure 4-7. Concert bands per school (N = 454)

Figure 4-8. Average number of students per concert band (N = 454)


Figure 4-9. Directors' years of teaching experience (N = 454)

Figure 4-10. Directors' years teaching at current school (N = 454)


Figure 4-11. Directors' education level (N = 454)

Figure 4-12. Grade types assigned (N = 454)


Figure 4-13. Create a balanced assessment tool (N = 454)

Table 4-1. Importance of purposes of assessment (N = 454)
Purpose  M  SD
To provide feedback to students  4.63  0.64
To determine what concepts students are failing to understand  4.45  0.78
To determine what concepts students are understanding  4.41  0.76
To determine whether instruction has been successful  4.33  0.81
To demonstrate student accountability for learning  4.27  0.80
To determine future instructional direction  4.26  0.82
To identify individual student abilities  4.23  0.87
To set or maintain class standards  4.11  0.90
To provide feedback to parents  4.05  0.84
To help students prepare for public performance  4.00  0.97
To determine the level of musical preparedness for public performance  3.97  1.01
To establish or maintain credibility for the band program  3.85  1.15
To identify general class abilities  3.81  1.02
To motivate students to practice their instruments  3.75  1.05
To determine whether students are practicing at home  3.37  1.17
To rank students according to individual performance levels  2.81  1.27


Table 4-2. Criteria importance in the evaluation of band students (N = 454)
Criteria  M  SD
Ability to play an instrument in an ensemble  4.26  0.80
Individual playing ability on an instrument  4.16  0.89
Ability to evaluate music and music performances  3.92  0.93
Ability to listen to, analyze, and describe music  3.54  1.06
Ability to understand the relationships between music, the other arts, and disciplines outside the arts  3.15  1.07
Ability to understand music in relation to history and culture  3.13  1.04
Knowledge of music theory  3.05  0.90
Knowledge of music history  2.51  0.88
Ability to improvise melodies, variations, and accompaniment  2.32  0.92
Ability to compose music  2.02  0.89

Table 4-3. Importance of assessment categories (N = 454)
Category  M  SD
Summative assessment  4.27  0.84
Formative assessment  4.03  0.89
Diagnostic assessment  3.89  0.94
Placement assessment  3.27  1.20

Table 4-4. Factors influencing assessment methods (N = 454)
Factor  M  SD
Your personal philosophy of education  4.38  0.76
The objectives or goals of your class  4.31  0.73
The amount of available class time  4.20  0.89
The demands of the ensemble's performance schedule  3.90  1.05
The expectations of your students  3.53  1.15
The number of students enrolled in the class  3.36  1.28
Available equipment (computers, recording)  3.29  1.21
Professional development you have participated in  3.11  1.16
Influence from your music colleagues  3.11  1.14
Your undergraduate coursework  3.07  1.17
The expectations of your students' parents  3.04  1.15
The expectations of your school principal  3.02  1.20
Your graduate coursework  2.94  1.38
Influence from professional organizations  2.80  1.11
Requirements set by the school district  2.73  1.22
The assessment method implemented in the high school band program you attended  2.37  1.27


Table 4-5. Assessment preparation (N = 454)
Preparation option  M  SD
Discussions with colleagues  3.98  0.95
Clinics at professional conferences  3.64  1.05
Graduate coursework  3.07  1.37
Professional organizations  3.06  1.08
Undergraduate coursework  3.04  1.22
National standards  2.83  1.03
State or district standards  2.72  1.12
Teacher in-service sessions  2.27  1.18

Table 4-6. Assessment components used and the mean percentage assigned (N = 454)
Assessment component  Response % (Count)  Weighted % (Count)
Participation  95.6 (433)  22.27 (394)
Performances  92.1 (417)  26.63 (375)
Performance-based tests  88.7 (402)  20.54 (342)
Attendance  77.7 (352)  18.67 (287)
Conduct/discipline  69.8 (316)  12.23 (251)
Written tests/worksheets  58.1 (263)  11.11 (228)
Attitude  54.1 (245)  12.51 (219)
Extra credit (lessons, concert attendance)  47.7 (216)  10.25 (126)
Practice log/journal  28.0 (127)  12.19 (118)
Sight-reading tests  24.5 (111)  7.34 (92)
Student self-assessment  22.5 (102)  8.10 (89)
Smart Music  12.8 (58)  11.79 (56)
Requirement checklists  12.8 (58)  7.47 (45)
Computer-assisted programs  9.9 (45)  7.68 (37)
Peer assessment  9.5 (43)  5.72 (43)
Portfolios  7.5 (34)  6.67 (39)


Table 4-7. Data collection procedures (N = 454)
Procedure  Response % (Count)
Teacher observation  88.0 (397)
Students play individually in class  85.4 (385)
Students play in a group in class  80.0 (361)
Short answer test or assignment  39.0 (176)
Students record themselves playing individually  33.0 (149)
Student self-assessment  31.5 (142)
Multiple choice test or assignment  28.4 (128)
Practice log or record  26.2 (118)
Essay question test or assignment  25.1 (113)
Students record themselves playing in a group  13.1 (59)
Smart Music  13.1 (59)
Computer-assisted program  5.1 (23)

Table 4-8. Materials used in performance-based tests (N = 454)
Material  Response % (Count)
Scales/rudiments  93.3 (418)
Band music  92.4 (414)
Method book exercises  56.5 (253)
Sight-reading  48.2 (216)
Etudes  42.2 (190)
All-state/district/county/honor band audition music  40.6 (182)
Solo literature  25.9 (116)

Table 4-9. Importance of characteristics of assessment models (N = 454)
Characteristic  M  SD
Reflects the music skills and knowledge that are most important for students to learn  4.32  0.76
Supports, enhances, and reinforces learning  4.29  0.78
Is reliable and valid  4.25  0.82
Assists in motivating students to learn and develop  4.21  0.82
Aligns with instruction  4.12  0.86
Is understood by all parties involved (i.e., students, parents)  4.06  0.90
Is time efficient  4.06  0.88
Is relatively easy to administer and maintain  4.03  0.90
Requires a student to demonstrate a music behavior in an authentic or realistic situation  4.02  0.89
Assists in the preparation of music for performances  3.96  0.95
Includes appropriate grading rubrics  3.88  1.02
Includes regularly scheduled assessment opportunities  3.74  0.98
Includes a variety of assessment components  3.69  0.95
Is open to review by interested parties  3.49  1.12
Includes both musical and non-musical components  3.15  1.14


Table 4-10. Agreement level with statements concerning assessment (N = 454)
Statement  M  SD
I would be interested in finding other ways to assess my students  4.29  0.86
My school administrators are satisfied with the current band assessment practices  4.09  0.77
My assessment practices foster the individual musical development of my students  3.89  0.87
My assessment practices are good enough to ensure quality instruction  3.87  0.83
My students' parents are satisfied with the current band assessment practices  3.82  0.76
My students are satisfied with the current band assessment practices  3.68  0.81
I am satisfied with my current band assessment practices  3.54  0.93
My assessment and grading practices are similar to those of most of the band directors I know  3.51  0.97


CHAPTER 5
DISCUSSION AND CONCLUSIONS

This chapter presents a discussion of the results of the current study with reference to past research in this area as well as the previously discussed suggestions for teachers concerning student assessment by various professional organizations. The discussion is presented in the five questionnaire categories: 1) Background Information, 2) Grading Information, 3) Assessment Philosophy, 4) Assessment Information, and 5) Assessment Model. Conclusions are presented within the context of the research questions that guided this study, followed by implications for music education and suggestions for future research in the area of band student assessment.

Discussion of the Results

Background Information

Participants included a representative sample of band directors from across the United States. While only 10% of the sample (N = 4,500) completed the survey, the total of 454 completed questionnaires makes this one of the largest completed studies in this area of research. The limited response rate was likely the result of numerous variables, including the band directors' busy schedules, their interest in and comfort with the topic, and the method of invitation and follow-up administered. Members may not give their full attention to all emails distributed by MENC, and the study was restricted to one follow-up email to encourage directors to participate. The 454 directors completing the survey teach at schools that are representative of high schools in the United States. The school type, school size, community type, and socio-economic categories were all represented. These directors also represent a balance of all categories of band program size and of the administration of the concert band component of their programs in relation to ensemble enrollment and size.


Finally, the sample includes directors who have a variety of years of teaching experience and years teaching in their current positions. Most directors had at least a master's degree in music/music education.

Grading Information

A wide variety of grading systems are used by school systems across the country. The number of grading periods and their duration varied greatly in the sample, as did the type of grade the directors assign at the end of a marking period. This variation in systems would have to be accounted for in any projected assessment model and may explain the uniqueness of each director's assessment system. Band directors reported that 95.2% of the grades they assign to students in band ensemble classes affect the students' grade point average. This encouraging result supports the decision to include the arts as a core class in the No Child Left Behind Act (NCLB) of 2001 (U.S. Department of Education, 2002). A disappointing and somewhat contradictory result was that only 19.8% of those directors teaching in a school offering a weighted grading system had the option of issuing a weighted grade to their band students.

Assessment Philosophy

The assessment philosophy section of the questionnaire provided valuable feedback from directors, including the motivation behind their assessment choices, their views on assessment issues, and the factors that influence their assessment decisions.

Purpose

The sixteen purposes directors rated are divided into three categories: instructional purposes (I); performance purposes (P); and external purposes (E).


Instructional purposes (I) relate to the process of teaching and learning and the feedback associated with that process (i.e., to provide feedback to students, to determine what concepts students are failing to understand, to determine what concepts students are understanding, to determine whether instruction has been successful, to demonstrate student accountability for learning, and to determine future instructional direction). Performance purposes (P) are associated with any aspect of individual or group performance ability or levels (i.e., to identify individual student abilities, to help students prepare for public performance, to determine the level of musical preparedness for public performance, to identify general class abilities, and to rank students according to individual performance levels). External purposes (E) include factors that do not directly relate to the instructional or performance aspects of the classroom (i.e., to set or maintain class standards, to provide feedback to parents, to establish or maintain credibility for the band program, to motivate students to practice their instruments, and to determine whether students are practicing at home).

The purposes of student assessment considered most important by the directors were to provide their students and themselves with feedback concerning the instructional process in the classroom (to provide feedback to students, to determine what concepts the students are failing to understand, to determine what concepts the students are understanding, and to determine whether instruction has been successful). These results were consistent with findings of the pilot study (2008) and align with the MENC guideline (MENC: The National Association for Music Education, 1998) that assessment should support, enhance, and reinforce learning.


The purposes of student assessment considered least important by the directors centered on motivation and placement of students (to motivate students to practice their instruments, to determine whether students are practicing at home, and to rank students according to individual performance levels). Again, these results were consistent with findings of the pilot study and indicate directors are not concerned with external factors associated with student assessment.

It is important to note that the current study may indicate a shift in the directors' purpose for student assessment from previous research. The sixteen purposes presented to the directors can be divided into three basic categories: instructional purposes, performance purposes, and external purposes. Results of the current study clearly rank these categories: 1 = instructional purposes, 2 = performance purposes, and 3 = external purposes (Table 5-1). Earlier findings by Kancianic (2006), Hanzlik (2001), and Hill (1999) found a much greater emphasis on performance purposes (i.e., to help students prepare for public performance and to determine the level of musical preparedness for public performances), ranking instructional purposes second. Consistent findings in the research show external purposes (i.e., to establish or maintain credibility for the band program and to provide feedback to parents) to be least important to directors.

Criteria

The ten criteria directors rated are divided into four categories, which can be related to Bloom's taxonomy of learning domains (Bloom, 1971): performance criteria (P); critical thinking criteria (CT); knowledge criteria (K); and creative criteria (C). Performance criteria (P) align with Bloom's psychomotor domain, relating to the manual or physical skills associated with musical performance (i.e., ability to play an instrument in an ensemble, and individual playing ability on an instrument).


Critical thinking criteria (CT) relate most to Bloom's cognitive domain, and to a lesser extent the affective domain, and are associated with the evaluation, analysis, description, and understanding of music in relation to other areas (i.e., ability to evaluate music and music performances; ability to listen to, analyze, and describe music; ability to understand the relationships between music, the other arts, and disciplines outside the arts; and ability to understand music in relation to history and culture). Knowledge criteria (K) are directly associated with Bloom's knowledge domain and relate to mental skills or recall in relation to music (i.e., knowledge of music theory and knowledge of music history). Creative criteria (C) are associated with Bloom's knowledge and psychomotor domains and encompass the mental and physical compositional and improvisational skills associated with music (i.e., ability to improvise melodies, variations, and accompaniment, and ability to compose music).

The assessment criteria directors deemed most important in the evaluation of their band students centered on performance skills (i.e., ability to play an instrument in an ensemble, and individual playing ability on an instrument). This result was expected with the understanding that the classes in question are performance-based ensembles with the primary purpose of preparing music for performance. The next category directors found important involved some type of critical thinking, including evaluating and describing music and musical performances. Included in this category were understanding music and its relationship with other arts disciplines and outside disciplines, and understanding music in relation to history and culture. The assessment criteria the directors deemed least important revolved around music knowledge and external performance skills not directly associated with the performance of traditional concert band literature (i.e., knowledge of music theory and history, and the ability to improvise melodies, variations, and accompaniments, and to compose music).


The original survey question (#20) and criteria responses were designed to investigate whether high school band directors were assessing their students based on the national standards for music education. These nine standards were offered to the profession by the Music Task Force (MENC, 2008) on March 11, 1994, in association with the Goals 2000: Educate America Act:

1. Singing, alone and with others, a varied repertoire of music.
2. Performing on instruments, alone and with others, a varied repertoire of music.
3. Improvising melodies, variations, and accompaniments.
4. Composing and arranging music within specified guidelines.
5. Reading and notating music.
6. Listening to, analyzing, and describing music.
7. Evaluating music and music performances.
8. Understanding relationships between music, the other arts, and disciplines outside the arts.
9. Understanding music in relation to history and culture.

The directors most valued criteria associated with standards 1 and 2, with mean response levels of M = 4.26 and M = 4.16, respectively. Standards 7 and 6 followed, with mean response levels of M = 3.92 and M = 3.54. While the next standards ranked are 8 and 9, their corresponding mean levels of M = 3.15 and M = 3.13 fall just above the moderately important response option. Standards 3 and 4, along with music theory and music history knowledge, all fall around or below the moderately important response option, with standard 4 (M = 2.02) approaching the not important response option.


These findings are consistent with current research by Zitek (2008), Schopp (2008), Diehl (2007), and Antmann (2007), who found that band directors' curricular activities and assessments are centered on the actual playing of music versus the creation of new music, either through composition or improvisation (Table 5-2).

Category

The assessment category considered most important by directors was summative assessment (i.e., concerts, festivals, recitals), followed by formative assessment (i.e., playing tests). As these two categories of assessment align with the objectives of performance-based ensembles, this result was expected, with both categories receiving strong responses (M = 4.27 and M = 4.03). Diagnostic assessment (i.e., error detection) and placement assessment (i.e., auditions, challenges) were considered less important by the directors but still received above-average mean responses of M = 3.89 and M = 3.27. These results indicate that the directors are assessing their students using assessments from all four categories, which aligns with the National Board for Professional Teaching Standards (Linn, 2005) suggestion to create a variety of assessment tasks and materials for assessing student learning.

Influence

The sixteen factors directors rated are divided into five categories: personal philosophy (P); class time (CT); logistics (L); training (T); and external factors (E). Personal philosophy (P) relates to the director's opinion or view on assessment (i.e., your personal philosophy of education, and the objectives or goals of your class). Class time (CT) includes any factors relating to perceived time constraints (i.e., the amount of available class time, the demands of the ensemble's performance schedule, and the number of students enrolled in the class). Logistics (L) include available resources (i.e., available equipment such as computers and recording devices).


Training (T) relates to any education or training the director has experienced (i.e., professional development you have participated in, your undergraduate coursework, your graduate coursework, and the assessment method implemented in the high school band program you attended). External factors (E) include external expectations or influences (i.e., the expectations of your students, the expectations of your students' parents, influence from your music colleagues, influence from professional organizations, and requirements set by the school district).

The factors that most influence the assessment methods used by directors centered on personal philosophy, available class time, and logistics (i.e., the objectives or goals of your class, the demands of the ensemble's performance schedule, and available equipment). Training (i.e., undergraduate and graduate coursework and professional development) and external factors (i.e., the expectations of students, students' parents, and the school principal, and influence from music colleagues and professional organizations) least influenced the assessment methods directors use (Table 5-3). These results are consistent with the pilot study responses and other research by Kancianic (2006) and Kotora (2001), who found that band directors are influenced more by internal goals and objectives related to musical performance than by external requirements or expectations set by others.

Preparation

The eight preparation methods directors rated are divided into three categories: colleagues (C); training (T); and external methods (E). Colleagues (C) relate to preparation gained from other music educators (i.e., discussions with colleagues and clinics at professional conferences). Training (T) relates to any education or training the director has experienced (i.e., graduate coursework, undergraduate coursework, and teacher in-service sessions).


External methods (E) refer to outside organizations or published standards (i.e., professional organizations, national standards, and state or district standards).

Directors considered their colleagues the best source of preparation for assessing students in their band programs. Directors responded strongly for discussions with colleagues (M = 3.98) and clinics at professional conferences (M = 3.64) as the methods that prepared them best for the task of student assessment. Directors considered their training (i.e., graduate and undergraduate coursework) and external methods (i.e., professional organizations and national or state standards) to have prepared them only moderately well or not well, with mean responses ranging between 3.07 and 2.27 (Table 5-4).

Assessment Information

Results from the Assessment Information section of the questionnaire provided specific information about how the directors are currently assessing their students. This section also asked the directors to reflect on the importance of specific assessment model characteristics and on the effectiveness of their current assessment method.

Components

The sixteen assessment components the directors commented on are divided into two categories: musical (M) and non-musical (N). The musical components (M) relate to any and all aspects of music (i.e., performances, performance-based tests, written tests/worksheets, practice log/journal, sight-reading tests, Smart Music, requirement checklists, computer-assisted programs, and portfolios). Non-musical components (N) are external to music (i.e., participation, attendance, conduct/discipline, attitude, student self-assessment, and peer assessment).


Assessment components used by most of the directors were participation, performances, and performance-based tests. Attendance and conduct/discipline were also used by many directors. Assessment components used by fewer than ten percent of the directors include computer-assisted programs, peer assessments, and portfolios. The four most frequently selected components were also assigned the most weight in the directors' overall assessment method: performances (26.63%); participation (22.27%); performance-based tests (20.54%); and attendance (18.67%). The least weighted components were sight-reading tests (7.34%); portfolios (6.67%); and peer assessment (5.72%). These findings were virtually identical to the pilot study and mirrored component usage results from research conducted by Antmann (2007), Sears (2002), and Sherman (2006).

Directors place clear emphasis on components that reinforce the preparation of performance materials and the performances themselves. Directors also stress the importance of team-related concepts such as participation, attendance, and conduct/discipline in assessing their students. These concepts become extremely important in the setting of performance-based ensembles, where the success of the group relies on each member fulfilling individual responsibilities. Directors indicate the use of both musical and non-musical assessment components in their overall assessment method (Table 5-5).

Data Collection

The twelve data collection procedures that directors commented on are divided into three categories: classroom method (C), outside-of-the-classroom method (O), and test or assignment (T).


Classroom method (C) relates to all data collection procedures occurring in the classroom (i.e., teacher observation, students play individually in class, and students play in a group in class). Outside-of-the-classroom method (O) relates to data collection procedures not occurring in the classroom (i.e., students record themselves playing individually, student self-assessment, practice log or record, students record themselves playing in a group, Smart Music, and computer-assisted programs). Test or assignment (T) includes any written evaluation (i.e., short answer test or assignment, multiple choice test or assignment, and essay question test or assignment).

The most used methods for data collection occur in the classroom and involve performance-based activities. Over 80% of the directors use teacher observation and have students play individually or in a group when assessing their students. Other data collection methods used by far fewer directors include outside-the-classroom methods (i.e., students recording themselves playing individually or in a group, practice log or record, and Smart Music) and written, knowledge-based methods (i.e., short answer, multiple choice, or essay question tests or assignments) (Table 5-6).

Characteristics

The characteristics of student assessment considered most important by the directors align with the guidelines provided by MENC for music classroom assessment. Assessing the most important music skills and knowledge and using assessment to support, enhance, and reinforce learning were most valued by directors, as well as having assessments that are reliable and valid. Other characteristics considered important and aligning with MENC's guidelines include having assessments that are understood by all parties involved, and having assessments that require a student to demonstrate a music behavior in an authentic or realistic situation.


Characteristics considered less important by directors include using appropriate grading rubrics, including regularly scheduled assessment opportunities, and including a variety of assessment components. The directors also rated time efficient and relatively easy to administer and maintain as important characteristics of an assessment model. This supports research by Kancianic (2006), Chiodo (2001), and Sherman (2006), who found that the main problem directors perceive with student assessment is time constraints in dealing with large numbers of students.

Reflection

When reflecting on eight statements concerning their assessment practices, directors agreed most strongly with the statement "I would be interested in finding other ways to assess my students" (M = 4.29) and only moderately agreed with the statement "I am satisfied with my current band assessment practices" (M = 3.54). These results create a sense of optimism for the future of student assessment in high school band programs. Not only have the directors identified assessment as an area of concern, but they have also indicated a willingness to explore new assessment methods. Directors rated school administrators' satisfaction with current band assessment practices highest (M = 4.09), followed by students' parents (M = 3.82), the students themselves (M = 3.68), and the directors themselves (M = 3.54). The groups less directly involved in the assessment process (school administrators and students' parents) are more satisfied; the individuals most directly involved (i.e., directors and students) are less satisfied with the assessment process. Directors also responded at slightly above the moderately agree level that their assessment practices foster the individual musical development of their students (M = 3.89) and that their assessments are good enough to ensure quality instruction (M = 3.87).


Assessment Model

In assigning weight values to the three assessment components in this study in an effort to create a balanced student assessment model for bands, directors assigned similar weight to the two performance-oriented components (rehearsal attendance and contribution [M = 34.95%] and performance attendance and contribution [M = 34.70%]) and only slightly less weight to the testing component (individual testing and evaluation [M = 30.57%]). These weight distribution results were very similar to the pilot study findings, reinforcing the idea that directors suggest an equal distribution among the three assessment components (Table 5-7).

Conclusions

The purpose of this study was to investigate current student assessment practices of high school band directors.

Research Question 1: In What Specific Ways are Current High School Band Directors Assessing Students in Their Ensemble Classes?

Results show that participation, performances, and performance-based tests are the primary components high school band directors are using to assess the students who participate in their programs. These individual assessment components are typically responsible for 20 to 25% of the student's grade and, when combined with other components, comprise a total assessment plan that includes both musical and non-musical assessment components. Directors primarily rely on in-class data collection methods (including teacher observation) and playing tests (which require students to play their instruments individually and in a group setting). These playing tests typically consist of scales/rudiments and band music.


Research Question 2: What are High School Band Directors' Attitudes toward the Assessment Process?

Results show that the main purpose of student assessment for high school band directors centered on providing their students and themselves with feedback concerning the instructional process in the classroom. Directors reported that performance skills were the most important criteria to assess in their students and that the main influences on the assessment methods they use are their personal philosophy of assessment and the available class time. Directors reported that the best source of preparation for assessing their students came from their colleagues, and they are interested in finding new ways to assess their students.

Research Question 3: How Can the Results of this Research Contribute to the Development of a Student Assessment Model for Bands?

The five examples of current grading policies included in the introduction (Wisconsin, California, New York, Washington, and Texas) demonstrated an inconsistency in a) the selection of assessment components; b) the weight assigned to each component in the overall grading plan; and c) the explanation of the component. A proposed assessment model should address these inconsistencies by incorporating the ideas and attitudes of current band directors, the experts in operating instrumental music classrooms in our schools. Results show that directors assign a balance among the three suggested components of a student assessment model for bands. These three components comprise the most frequently used assessment components reported by the directors and include both musical and non-musical traits (Table 5-8).


The following explanation and definition of the individual components stem from the results of the study (specifically, the assessment philosophy and assessment information sections) in regard to purpose, criteria, category, influence, data collection, and characteristics. During the construction of this model, effort has been made to effectively blend the music criteria (i.e., musical preparation, musical execution) with the non-music criteria (i.e., attendance, conduct, attitude, materials).

Rehearsal attendance and contribution. The attendance of each member of the ensemble at all rehearsals is critical to the success of the ensemble. Attendance will be graded in terms of present, excused absence, or unexcused absence (which also covers tardiness and early dismissals). Contribution (as graded through teacher observation) reflects how a student fulfills individual responsibilities to the ensemble. Contribution includes the following areas: conduct, attitude, musical preparation, and materials (instrument, music, accessories, etc.). Students will receive a weekly grade for rehearsal attendance and contribution.

Performance attendance and contribution. The attendance of each member of the ensemble at all performances is critical to the success of the ensemble. Attendance will be graded in terms of present, excused absence, or unexcused absence (which also covers tardiness and early dismissals). Contribution (as graded through teacher observation) reflects how a student fulfills individual responsibilities to the ensemble. Contribution includes the following areas: conduct, attitude, musical preparation, musical execution, and materials (instrument, music, accessories, etc.). Students will receive a grade per performance.

Individual testing and evaluation. Students will participate in a performance-based assessment (playing test) each week. The material for the assessments will include scales/rudiments and band music. Students will also have four written tests (one per grading period) addressing basic music knowledge, including applicable music theory and history (Table 5-9).

The following represents a synthesis of the ideas and concepts directors have indicated as important to an assessment model. The specific grading scale and grade-type assignment are flexible and can be altered to fit the specific requirements at each director's school (Figure 5-1).


Implications for Music Education

This study provides a broad view of secondary band directors' current assessment practices. Many components of the assessment process were investigated, and conclusions were discussed in view of the results. These results provide a better understanding of what is happening and, to a certain degree, why directors make their assessment choices. The following discussion revolves around what could (or in some cases should) be taking place concerning student assessment in band classes. Directors responding to the survey stated that they are interested in finding other ways to assess their students. The following discussion provides alternatives concerning student assessment that band directors may not yet have explored.

Purpose

The purposes of student assessment considered most important by directors were to provide their students and themselves with feedback concerning the instructional process. This is encouraging and may indicate a shift in emphasis away from the performance-based purposes found in previous research. A logical outcome of this shift might include increased emphasis on the individual testing and evaluation component of a student assessment model and decreased emphasis on the performance-based components (performance attendance and contribution, and rehearsal attendance and contribution).

Criteria

Directors continue to emphasize criteria centered on performance in their assessment decisions (i.e., ability to play an instrument in an ensemble, and individual playing ability on an instrument).


Performance remains an important part of the high school band program. However, in an effort to develop well-rounded musicians, other criteria should be emphasized in both assessment and curricular decisions. Band directors should be striving to produce musicians, not just music. The assessment criteria rated lowest by directors were knowledge of music theory; knowledge of music history; ability to improvise melodies, variations, and accompaniment; and ability to compose music. Having musicians with better understandings of these music fundamentals would only serve to enhance future performances, not detract from them. Directors should make efforts to find creative ways to incorporate this important musical knowledge into their rehearsals, and their assessment decisions should reinforce acquisition of this knowledge.

An example of incorporating music theory, history, improvisation, and composition into the preparation of a piece of music for performance is the use of Variations on a Korean Folk Song by John Barnes Chance. The use of the pentatonic scale in each of the folk song melodies could be the basis of lessons in music theory. Regarding music history, the use of Korean folk songs could initiate a discussion of the musical nationalism that flourished in the mid-nineteenth century. Music improvisation could be experimented with by having students create melodies using only the notes of the A-flat pentatonic scale heard at the opening of the piece. Music composition could be addressed by having students compose their own folk song using the pentatonic scale as the basic material for the melodic and harmonic elements of the piece.

The next step in this process would be for the director to assess the content taught in the various areas. The music theory and history components could easily be assessed in written form.


The improvisation would most likely be assessed through teacher observation in class while the students complete the activity. The composition could be assessed either in written form or through a performance of the students' works by student groups. The most important step in this process is to assess the information taught to students. This sends a clear message to all involved in the learning process that this information is important and valued in addition to preparing the piece for performance. Incorporating the comprehensive musical education of students serves to enhance and support musical performances.

Preparation

Directors responded that their colleagues were the best source of preparation for assessing students in their band programs (discussions with colleagues and clinics at professional conferences). Directors also reported that their undergraduate and graduate coursework only moderately prepared them for student assessment, and that professional organizations and standards (national, state, and district) were preparing them to an even lesser degree. These results indicate a major disconnect among the current music education curriculum, our professional organizations, and practicing music educators. Efforts should be made to better address the topic of student assessment (especially in ensemble situations) during the training of our future music educators. Assessment methods should be discussed and models suggested to students before they are sent into the classroom. In addition, our professional organizations should continue to develop programs and support research addressing student assessment. These organizations can play an important role in the direction of student assessment in the future.

Preparation

Directors responded that their colleagues were the best source of preparation for assessing students in their band programs (discussions with colleagues and clinics at professional conferences). Directors also reported that their undergraduate and graduate coursework only moderately prepared them for student assessment, and that professional organizations and standards (national, state, and district) prepared them to an even lesser degree. These results indicate a major disconnect among the current music education curriculum, our professional organizations, and practicing music educators. Efforts should be made to better address the topic of student assessment (especially in ensemble settings) during the training of our future music educators. Assessment methods should be discussed, and models suggested, before students are sent into the classroom. In addition, our professional organizations should continue to develop programs and support research addressing student assessment. These organizations can play an important role in the direction of student assessment in the future.

Data Collection

Most directors use in-class, performance-based activities when collecting assessment data on their students. More than 80% of the directors assess their students while they play individually in class and while they play in a group in class. The materials most used by directors on performance-based tests are scales/rudiments (93.3%) and band music (92.4%). The survey did not investigate how these performance-based tests (playing tests) were administered by the directors, yet how tests are administered determines their effectiveness. In many instances, playing tests are used as a threat and are initiated only after the director experiences a certain level of frustration with a lack of student preparation. The following is a suggested method of incorporating playing tests into a student assessment plan.

Playing tests can be an excellent assessment method for directors. They align with the performance objectives of the ensemble, foster individual preparation and practice, emphasize the individual's responsibility to the ensemble, and hold students accountable for musical development on their instruments. When used properly, playing tests save rehearsal time and improve the overall quality of performances. As with other assessment methods, playing tests should be given on a regular basis (weekly) and should align with in-class content. The schedule of specific playing test content should be logical and should emphasize important musical fundamentals as well as the preparation of major performances.
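As an illustration of such a schedule (the specific content here is hypothetical and is not drawn from the survey): over a nine-week grading period, weeks 1 through 3 might test fundamentals (for example, two major scales per week plus the chromatic scale), weeks 4 through 7 might rotate through short excerpts from the music programmed for the end-of-period concert, and weeks 8 and 9 might pair a brief sight-reading check with a final pass at the most demanding concert excerpt. Keeping each test under a minute per student allows an entire section to be heard within a single rehearsal.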

In addition to in-class, performance-based activities, other data collection methods should be explored by band directors. Curriculum, instruction, and assessment in secondary band rooms have remained static for the past 50 years. In view of current technology, a variety of alternatives are available to today's music educator to assist in instruction and assessment. Results show that the least used procedures for collecting assessment data were Smart Music (13.1%) and computer-assisted programs (5.1%). Directors must be willing to explore these procedures and find ways to incorporate them into their programs.

Suggested Model

A suggested model of student assessment should incorporate the results of current research, suggestions and recommendations from professional organizations, and practical experience gained in the classroom. This model serves as a guide, remaining flexible enough to allow band directors the freedom to modify it according to their specific teaching situations. The three main assessment components incorporate both musical and non-musical criteria: individual testing and evaluation, performance attendance and contribution, and rehearsal attendance and contribution. The weight of the components is adjusted to emphasize testing and evaluation of both performance-based skills and music knowledge criteria (music theory and music history). The resulting percentages are: individual testing and evaluation = 40%; performance attendance and contribution = 30%; and rehearsal attendance and contribution = 30%.

In both the performance attendance and contribution component and the rehearsal attendance and contribution component, the category of attitude has been replaced with behavior. Directors reported that most of their student assessment involves teacher observation. Behavior can be accurately observed; it is empirical. Attitude is a hypothetical construct that cannot be accurately measured through observation, and it therefore becomes difficult to quantify when assessing students and assigning grades.
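To show how the suggested weights combine into a grade, consider a worked example with hypothetical scores (the scores are chosen only to illustrate the arithmetic and are not drawn from the study): a student who averages 92% on individual testing and evaluation, 100% on performance attendance and contribution, and 95% on rehearsal attendance and contribution would earn a final grade of (0.40 × 92) + (0.30 × 100) + (0.30 × 95) = 36.8 + 30.0 + 28.5 = 95.3%.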

The remainder of the model is supported by the results of this study and aligns with current student assessment practices of high school band directors (Figure 5-2).

Future Research

Future research in the area of student assessment should attempt to develop a globally accepted assessment model for use by directors in high school band programs. Such a model could be incorporated into both the undergraduate and graduate music education curriculum, giving prospective music educators the knowledge and tools to assess their students effectively. Specific directions include:

1. A similar study administered to exemplary high school band directors (i.e., directors whose bands have performed at the Midwest Band Clinic) to offer expert information on assessment practices. These results could further justify the construction of an assessment model.

2. Student assessment studies in other school music genres (i.e., chorus, orchestra) to determine commonalities and differences in strategies and practices.

3. Student assessment studies at other grade levels (i.e., middle school, college) to determine commonalities and differences in strategies and practices.

4. Investigation of student assessment from the perspectives of students, parents, and school administrators.

5. A longitudinal study of student assessment in relation to educational and informational in-services designed for music educators.

6. An experimental study testing the effect of a specific student assessment model on student learning and ensemble development.

The results of this study represent an important first step in improving student assessment in high school bands in that they reveal current assessment methods and the factors that underlie the reasons those methods are used. These findings stimulate questions for investigation and discussion about how new student assessment methods might encourage a more comprehensive curriculum while supporting the goals and objectives of these performance-based ensembles. Few would dispute the proposition

that the music education profession would benefit from the development of a comprehensive band assessment model. The findings of this study suggest that such a model would strengthen band assessment practices, improve the reliability and validity of student assessment data, and, as a result, positively influence band curriculum, classroom instruction, and performance preparation. As a profession, we are obligated to continue this work as we endeavor to attain one of our most important goals: the improvement of student music learning.

Table 5-1. Assessment purposes including category (N = 454)
Purpose | Category | M
To provide feedback to students | I | 4.63
To determine what concepts students are failing to understand | I | 4.45
To determine what concepts students are understanding | I | 4.41
To determine whether instruction has been successful | I | 4.33
To demonstrate student accountability for learning | I | 4.27
To determine future instructional direction | I | 4.26
To identify individual student abilities | P | 4.23
To set or maintain class standards | E | 4.11
To provide feedback to parents | E | 4.05
To help students prepare for public performance | P | 4.00
To determine the level of musical preparedness for public performance | P | 3.97
To establish or maintain credibility for the band program | E | 3.85
To identify general class abilities | P | 3.81
To motivate students to practice their instruments | E | 3.75
To determine whether students are practicing at home | E | 3.37
To rank students according to individual performance levels | P | 2.81
I = Instructional purpose, P = Performance purpose, E = External purpose

Table 5-2. Assessment criteria including categories and national standard (N = 454)
Criteria | Standard | Category | M
Ability to play an instrument in an ensemble | 1 | P | 4.26
Individual playing ability on an instrument | 2 | P | 4.16
Ability to evaluate music and music performances | 7 | CT | 3.92
Ability to listen to, analyze, and describe music | 6 | CT | 3.54
Ability to understand the relationships between music, the other arts, and disciplines outside the arts | 8 | CT | 3.15
Ability to understand music in relation to history and culture | 9 | CT | 3.13
Knowledge of music theory | - | K | 3.05
Knowledge of music history | - | K | 2.51
Ability to improvise melodies, variations, and accompaniment | 3 | C | 2.32
Ability to compose music | 4 | C | 2.02
P = Performance, CT = Critical Thinking, K = Knowledge, C = Creative

Table 5-3. Factors influencing assessment methods including categories (N = 454)
Factor | Category | M
Your personal philosophy of education | P | 4.38
The objectives or goals of your class | P | 4.31
The amount of available class time | CT | 4.20
The demands of the ensemble's performance schedule | CT | 3.90
The expectations of your students | E | 3.53
The number of students enrolled in the class | CT | 3.36
Available equipment (computers, recording) | L | 3.29
Professional development you have participated in | T | 3.11
Influence from your music colleagues | E | 3.11
Your undergraduate coursework | T | 3.07
The expectations of your students' parents | E | 3.04
The expectations of your school principal | E | 3.02
Your graduate coursework | T | 2.94
Influence from professional organizations | E | 2.80
Requirements set by the school district | E | 2.73
The assessment method implemented in the high school band program you attended | T | 2.37
P = Personal Philosophy, CT = Class Time, L = Logistics, T = Training, E = External

Table 5-4. Preparation methods including category (N = 454)
Preparation option | Category | M
Discussions with colleagues | C | 3.98
Clinics at professional conferences | C | 3.64
Graduate coursework | T | 3.07
Professional organizations | E | 3.06
Undergraduate coursework | T | 3.04
National standards | E | 2.83
State or district standards | E | 2.72
Teacher in-service sessions | T | 2.27
C = Colleagues, T = Training, E = External methods

Table 5-5. Assessment component usage including category (N = 454)
Assessment component | Category | Response % (Count)
Participation | N | 95.6 (433)
Performances | M | 92.1 (417)
Performance-based tests | M | 88.7 (402)
Attendance | N | 77.7 (352)
Conduct/discipline | N | 69.8 (316)
Written tests/worksheets | M | 58.1 (263)
Attitude | N | 54.1 (245)
Extra credit (lessons, concert attendance) | M/N | 47.7 (216)
Practice log/journal | M | 28.0 (127)
Sight-reading tests | M | 24.5 (111)
Student self-assessment | N | 22.5 (102)
Smart Music | M | 12.8 (58)
Requirement checklists | M | 12.8 (58)
Computer-assisted programs | M | 9.9 (45)
Peer assessment | N | 9.5 (43)
Portfolios | M | 7.5 (34)
M = Musical, N = Non-musical

Table 5-6. Data collection procedures including category (N = 454)
Procedure | Category | Response % (Count)
Teacher observation | C | 88.0 (397)
Students play individually in class | C | 85.4 (385)
Students play in a group in class | C | 80.0 (361)
Short answer test or assignment | T | 39.0 (176)
Students record themselves playing individually | O | 33.0 (149)
Student self-assessment | O | 31.5 (142)
Multiple choice test or assignment | T | 28.4 (128)
Practice log or record | O | 26.2 (118)
Essay question test or assignment | T | 25.1 (113)
Students record themselves playing in a group | O | 13.1 (59)
Smart Music | O | 13.1 (59)
Computer-assisted program | O | 5.1 (23)
C = Classroom method, O = Outside-of-the-classroom method, T = Test or assignment

Table 5-7. Weighted component results compared to pilot study results (N = 454)
Assessment component | Current study % | Pilot study %
Rehearsal attendance and contribution | 35 | 33
Performance attendance and contribution | 35 | 33
Individual testing and evaluation | 30 | 34

Table 5-8. Student assessment model: Stage one
Assessment component | Weight %
Rehearsal attendance and contribution | 35
Performance attendance and contribution | 35
Individual testing and evaluation | 30

Table 5-9. Student assessment model: Stage two

Rehearsal attendance and contribution (weight: 35%). The attendance of each member of an ensemble at all rehearsals is critical to the success of the ensemble. Attendance will be graded in terms of present, excused absence, or unexcused absence (which also covers tardiness and early dismissal). Contribution (as graded through teacher observation) reflects how a student fulfills individual responsibilities to the ensemble. Contribution includes the following areas: conduct, attitude, musical preparation, and materials (instrument, music, accessories, etc.). Students will receive a weekly grade for rehearsal attendance and contribution.

Performance attendance and contribution (weight: 35%). The attendance of each member of the ensemble at all performances is critical to the success of the ensemble. Attendance will be graded in terms of present, excused absence, or unexcused absence (which also covers tardiness and early dismissal). Contribution (as graded through teacher observation) reflects how a student fulfills individual responsibilities to the ensemble. Contribution includes the following areas: conduct, attitude, musical preparation, musical execution, and materials (instrument, music, accessories, etc.). Students will receive a grade per performance.

Individual testing and evaluation (weight: 30%). Students will participate in a performance-based assessment (playing test) each week. The material for the assessments will include scales/rudiments and band music. Students will also have four written tests (one per grading period) addressing basic music knowledge (including applicable music theory and history).

Figure 5-1. Current student assessment practices. [Chart: Student Assessment in Band = 100%. Rehearsal Attendance and Contribution = 35% (conduct, attitude, musical preparation, materials); Performance Attendance and Contribution = 35% (conduct, attitude, musical preparation, musical execution, materials); Individual Testing and Evaluation = 30% (band music, scales/rudiments, written tests).]

Figure 5-2. Student assessment model. [Chart: Student Assessment in Band = 100%. Rehearsal Attendance and Contribution = 30% (conduct/behavior, musical preparation, materials); Performance Attendance and Contribution = 30% (conduct/behavior, musical preparation, musical execution, materials); Individual Testing and Evaluation = 40% (band music, scales/rudiments, written tests covering music theory and music history).]

APPENDIX A
QUESTIONNAIRE

Student Assessment Practices of High School Band Directors

Background Information
Please provide the following background information concerning your school, band program, and teaching experience.

1. Type of school: Public; Private; Charter; Other
2. Number of students (9th-12th grade) enrolled in your high school: 1-500; 501-1000; 1001-1500; 1501-2000; 2001 or more
3. In what type of community is the school located? Urban/City (high population); Suburban (associated with a larger city); Town (moderate population); Rural/Remote (low population)
4. What is the socioeconomic status of the community? Low; Low/Middle; Middle; Middle/High; High
5. Total number of students involved in the band program: 1-50; 51-100; 101-150; 151-200; 201 or more
6. Number of students involved in concert band(s): 1-50; 51-100; 101-150; 151-200; 201 or more
7. Number of concert bands at your school: 1; 2; 3; 4; 5 or more
8. Average number of students in each concert band: 1-15; 16-30; 31-45; 46-60; 61 or more

9. Total number of minutes each concert band meets PER WEEK: __________
10. Years of experience teaching high school band (including this year): 1-4; 5-8; 9-12; 13-16; 17 or more
11. Years teaching in your current position (including this year): 1-4; 5-8; 9-12; 13-16; 17 or more
12. Highest degree earned in music/music education: Associate's; Bachelor's; Master's; Doctorate; Post-doctoral
13. Number of band directors employed at your school: 1; 1.5; 2; 2.5; 3 or more

Grading Information
Please provide the following information concerning your student grading process.

14. How many marking periods does your school have per year? __________
15. How many weeks long is a typical marking period? __________
16. What type of grade do you assign at the end of a grading period? No grades assigned; Pass/Fail; Letter grades; Number grades; Written comments; Combination of types (please explain) __________
17. Does the grade given in your band ensemble class affect the student's overall GPA? Yes; No; Other (please explain) __________

18. Is there a weighted grading system being used in your school (higher-level classes assigned more value in the student's overall grade point average)? Yes; No
19. Is there a weighted option in the grade given in your band ensemble class? Yes; No. If yes, please explain __________

Assessment Philosophy
Please provide your opinion on the following philosophical questions concerning student assessment.

20. How important are the following purposes of student assessment? (1 = not important, 5 = extremely important)
a) to provide feedback to students
b) to provide feedback to parents
c) to identify individual student abilities
d) to identify general class abilities
e) to determine whether instruction has been successful
f) to determine what concepts students are understanding
g) to determine what concepts students are failing to understand
h) to determine future instructional direction
i) to demonstrate student accountability for learning
j) to establish or maintain credibility for the band program
k) to determine the level of musical preparedness for performance
l) to help students prepare for performance
m) to determine whether students are practicing at home
n) to motivate students to practice their instruments
o) to set or maintain class standards
p) to rank students according to individual performance levels

21. What importance do you place on the following criteria in the evaluation of your band students? (1 = not important, 5 = extremely important)
a) individual playing ability on an instrument
b) ability to play an instrument in an ensemble
c) knowledge of music history
d) knowledge of music theory
e) ability to improvise melodies, variations, and accompaniment
f) ability to compose music
g) ability to listen to, analyze, and describe music
h) ability to evaluate music and music performances
i) ability to understand the relationships between music, the other arts, and disciplines outside the arts
j) ability to understand music in relation to history and culture

22. How important are the following assessment categories in a student assessment model for bands? (1 = not important, 5 = extremely important)
a) Placement assessments (i.e., auditions, challenges)
b) Summative assessments (i.e., concerts, festivals, recitals)
c) Diagnostic assessments (i.e., error detection)
d) Formative assessments (i.e., playing tests)

23. How much influence do the following factors have on the assessment methods you use? (1 = not influential, 5 = extremely influential)
a) your personal philosophy of education
b) the amount of available class time
c) the objectives or goals of your class
d) the demands of your ensemble's performance schedule
e) the number of students enrolled in the class
f) professional development you have participated in

g) influence from music colleagues
h) influence from professional organizations
i) requirements set by the school district
j) the expectations of your students
k) the expectations of your students' parents
l) the expectations of your school principal
m) available equipment (computers, recording)
n) your undergraduate coursework
o) your graduate coursework
p) modeled after the high school program you attended

24. How well do you feel the following have prepared or are preparing you to assess the students in your band program? (1 = not well prepared, 5 = very well prepared)
a) undergraduate coursework
b) graduate coursework
c) national standards
d) state or district standards
e) teacher in-service sessions
f) clinics at professional conferences
g) discussions with colleagues
h) professional organizations

Assessment Information
Please provide information concerning assessing the students in your largest concert band.

25. Which of the following assessment components do you use to determine a student's grade (select all that apply)?
a) attitude
b) attendance
c) computer-assisted programs
d) conduct/discipline
e) extra credit (lessons, concert attendance)

f) participation
g) peer assessment
h) performance-based (playing) tests
i) performances
j) portfolios
k) practice log/journal
l) requirement checklists (scales, exercises)
m) sight-reading tests
n) Smart Music
o) student self-assessment
p) written tests/worksheets

26. Please enter the percentage of each component you use to determine your grades. Leave unused components blank. Be sure that the total adds up to 100%.
a) attitude grades _____%
b) attendance grades _____%
c) computer program grades _____%
d) conduct/discipline _____%
e) extra credit _____%
f) participation _____%
g) peer assessment _____%
h) performance-based tests _____%
i) performances _____%
j) portfolios _____%
k) practice log/journal _____%
l) requirement checklists _____%
m) sight-reading tests _____%
n) Smart Music _____%
o) student self-assessment _____%
p) written tests/worksheets _____%

27. Which of the following procedures for data collection do you use when assessing your students (select all that apply)?
a) students play individually in class
b) students play in a group in class
c) students record themselves playing individually
d) students record themselves playing in a group
e) multiple choice test or assignment
f) short answer test or assignment
g) essay question test or assignment
h) computer-assisted program
i) Smart Music
j) practice log or record
k) teacher observation
l) student self-assessment
m) other (fill in)

28. If you use performance-based tests when assessing students, what materials do you utilize (select all that apply)?
a) scales/rudiments
b) band music
c) sight reading
d) all-state/district/county or honor band audition music
e) method book exercises
f) etudes
g) solo literature
h) do not use performance-based tests

29. The following is a list of characteristics that have traditionally been used in assessment models for band students. Rate the importance of these characteristics in your assessment design. (1 = not important, 5 = extremely important)
a) assists in the preparation of music for performances
b) includes a variety of assessment components

c) aligns with instruction
d) is understood by all parties involved (i.e., students, parents)
e) includes appropriate grading rubrics
f) assists in motivating students to learn and develop
g) reflects the music skills and knowledge that are most important for students to learn
h) supports, enhances, and reinforces learning
i) is reliable and valid
j) requires a student to demonstrate a music behavior in an authentic or realistic situation
k) is open to review by interested parties
l) includes regularly scheduled assessment opportunities
m) includes both musical and non-musical components
n) is time efficient
o) is relatively easy to administer and maintain

30. Rate your level of agreement with the following statements concerning assessment. (1 = strongly disagree, 5 = strongly agree)
a) I am satisfied with my current band assessment practices.
b) My students are satisfied with the current band assessment practices.
c) My students' parents are satisfied with the current band assessment practices.
d) My school administrators are satisfied with the current band assessment practices.
e) My assessment practices are good enough to ensure quality instruction.
f) My assessment practices foster the individual musical development of my students.
g) My assessment and grading practices are similar to those of most of the band directors I know.
h) I would be interested in finding other ways to assess my students.

Assessment Model
Please answer the following question concerning an assessment model.

31. Using the following three assessment components, (a) individual testing and evaluation, (b) performance attendance and contribution, and (c) rehearsal attendance and contribution, assign percentages (totaling 100%) to create what you believe to be a balanced assessment tool for band students:
(a) individual testing and evaluation ______%
(b) performance attendance and contribution ______%
(c) rehearsal attendance and contribution ______%

Thank you for your participation in this questionnaire.

APPENDIX B
PILOT STUDY RESULTS

Survey results are presented in three categories: 1) demographic information, 2) grading information, and 3) assessment information.

Demographic Information

The school enrollment of the 45 schools showed representation in each of the five population categories. The results were as follows: 1 to 500 students = 31.1% (14 schools); 501 to 1,000 students = 13.3% (6 schools); 1,001 to 1,500 students = 28.9% (13 schools); 1,501 to 2,000 students = 20.0% (9 schools); and 2,001 or more students = 6.7% (3 schools). While the community type results showed representation in each category, over half of the schools reported Suburban = 53.3% (24 schools). The remaining results were Small Town = 22.2% (10 schools); Rural/Remote = 15.6% (7 schools); and Urban/Inner City = 11.1% (5 schools). Finally, the socioeconomic status of the community also showed representation in each category: Low = 13.3% (6 schools); Low/Middle = 35.6% (16 schools); Middle = 15.6% (7 schools); Middle/High = 28.9% (13 schools); and High = 11.1% (5 schools).

The size of the band programs participating in the study varied widely, with almost half consisting of between 101 and 150 students. The results were as follows: 1 to 50 students = 20.0% (9 schools); 51 to 100 students = 20.0% (9 schools); 101 to 150 students = 46.7% (21 schools); 151 to 200 students = 13.3% (6 schools); and no schools reported 201 or more students enrolled in band ensemble classes. The total number of ensemble classes being taught in each band program ranged from 1 to 5 (or more). Those results showed: 1 ensemble = 13.3% (6 schools); 2 ensembles = 20.0% (9 schools); 3 ensembles = 31.1% (14 schools); 4 ensembles = 15.6% (7

schools); and 5 or more ensembles = 22.2% (10 schools). The number of concert ensembles being taught showed similar results: 1 concert ensemble = 20.0% (9 schools); 2 concert ensembles = 26.7% (12 schools); 3 concert ensembles = 26.7% (12 schools); 4 concert ensembles = 4.4% (2 schools); and 5 or more concert ensembles = 22.2% (10 schools). Finally, the results for the average number of students in each concert ensemble showed almost half ranging from 46 to 60 students (42.2%, 19 schools). The remaining results were: 1 to 15 students = 4.4% (2 schools); 16 to 30 students = 8.9% (4 schools); 31 to 45 students = 22.2% (10 schools); and 61 or more students = 22.2% (10 schools).

Participants revealed that almost half have been teaching 17 or more years (42.2%, 19 directors). The remaining results were: 1 to 4 years = 20.0% (9 directors); 5 to 8 years = 15.6% (7 directors); 9 to 12 years = 8.9% (4 directors); and 13 to 16 years = 13.3% (6 directors). The years teaching in their current positions showed a very balanced result: 1 to 4 years = 28.9% (13 directors); 5 to 8 years = 22.2% (10 directors); 9 to 12 years = 20.0% (9 directors); 13 to 16 years = 8.9% (4 directors); and 17 or more years = 20.0% (9 directors).

The results showed that all directors have received at least a bachelor's degree (48.9%), with more than half possessing a master's degree (66.7%). None of the respondents reported associate's, doctorate, or post-doctoral degrees. Finally, the results for the number of directors employed at each school showed that almost half of the directors manage the band program alone (44.4%, 20 schools). The other results were: 1.5 directors = 4.4% (2 schools); 2 directors = 24.4% (11 schools); 2.5 directors = 2.2% (1 school); and 3 or more directors = 24.4% (11 schools).

Grading Information

The directors reported a variety of grading periods, from 2 semesters to 12 grading periods; however, the majority of the directors indicated 4 grading periods (quarter system). The length of the grading period also varied, from 4.5 weeks to 18 weeks, with the majority of directors indicating that 9 weeks was the typical length of a grading period. A large majority of the directors indicated that they assign a letter grade (80.0%, 36 schools) at the end of a grading period, with the remainder reporting that they assign number grades (20.0%, 9 schools). One hundred percent of the directors responded that the grade given in the band ensemble class affects the student's overall GPA (grade point average). Finally, while 71.1% (32 schools) reported that their school uses a weighted grading system, only 17.8% (8 schools) reported that the band ensemble class grade is weighted. The responses of the 8 directors showed how the band ensemble class grade is weighted in their grading systems: (a) Students may enroll as juniors and seniors in Band III and IV Honors; these are weighted courses with respect to overall GPA. (b) Band taken for honors credit during the student's junior and senior year carries more weight. (c) Students in 10th grade and above can contract to earn a weighted grade by fulfilling a number of additional achievements beyond the class period. (d) An upperclassman, under certain rare circumstances, can get honors credit for top ensemble participation; this mainly hinges on meeting certain performance criteria (all-district or all-state band, a "one" rating at festival, etc.). (e) The grade is a combination of performance activities and learning activities. (f) Entry-level classes are 1.0; performing ensembles are weighted 1.2 to 1.6, and students can get 0.2 higher for participation in all-district band in the fall term and district solo and ensemble contest in the spring. (g) Band is automatically weighted

to equal any other college-prep class grade. (h) Honors credit is available for grades 10-12; students must perform a jury and write a research paper.

Assessment Information

Table B-1 shows the percentages and counts of the directors' use of selected assessment components to determine a student's grade in their ensemble classes. The two most prevalent components used by directors were participation (95.6%) and performances (95.6%). The two least prevalent were portfolios (6.7%) and peer assessment (6.7%). Additional assessment components offered by directors included rhythm dictation, extra credit (i.e., lessons, outside groups), and credit for attending concerts and recitals (ones in which the student is not performing).

Table B-1. Directors' use of assessment components (N = 45)
Assessment component | Response % (Count)
Participation | 95.6 (43)
Performances | 95.6 (43)
Performance-based tests | 91.1 (41)
Attendance | 82.2 (37)
Conduct/discipline | 66.7 (30)
Written tests/worksheets | 57.8 (26)
Attitude | 55.6 (25)
Practice log/journal | 31.1 (14)
Sight-reading tests | 15.6 (7)
Computer-assisted programs | 13.3 (6)
Smart Music | 11.1 (5)
Student self-assessment | 11.1 (5)
Requirement checklists | 8.9 (4)
Peer assessment | 6.7 (3)
Portfolios | 6.7 (3)

Table B-2 shows how frequently each assessment component is used by the directors in determining the student's grade in their ensemble class. Participation (84.4%), attendance (75.6%), and conduct/discipline (60.0%) are used weekly by many directors. Sight-reading tests, computer-assisted programs, Smart Music, student

self-assessment, requirement checklists, peer assessments, and portfolios are rarely used by directors.

Table B-2. Frequency of use of assessment components (N = 45)
Assessment component | Weekly % | Monthly % | Grading period % | Semester % | Never %
Participation | 84.4 | 2.2 | 13.3 | 0.0 | 0.0
Performances | 4.4 | 24.4 | 55.6 | 11.1 | 4.4
Performance-based tests | 35.6 | 37.8 | 11.1 | 8.9 | 6.7
Attendance | 75.6 | 4.4 | 11.1 | 0.0 | 8.9
Conduct/discipline | 60.0 | 0.0 | 17.8 | 2.2 | 20.0
Written tests/worksheets | 8.9 | 20.0 | 17.8 | 13.3 | 40.0
Attitude | 48.9 | 0.0 | 13.3 | 2.2 | 35.6
Practice log/journal | 20.0 | 6.7 | 11.1 | 0.0 | 62.2
Sight-reading tests | 2.2 | 2.2 | 8.9 | 8.9 | 77.8
Computer-assisted programs | 0.0 | 4.4 | 6.7 | 6.7 | 82.2
Smart Music | 0.0 | 6.7 | 0.0 | 6.7 | 86.7
Student self-assessment | 0.0 | 4.4 | 6.7 | 8.9 | 80.0
Requirement checklists | 4.4 | 2.2 | 6.7 | 6.7 | 80.0
Peer assessment | 2.2 | 2.2 | 8.9 | 2.2 | 84.4
Portfolios | 0.0 | 2.2 | 2.2 | 6.7 | 88.9

Respondents were presented with a list of sixteen factors that might influence their choice of assessment methods. Participants rated the level of influence using a 5-point Likert-type scale ranging from 1 (not at all influenced) to 5 (extremely influenced). The director's personal philosophy of education (M = 4.44) and the amount of available class time (M = 4.04) had a high degree of influence on the choice of assessment method. Requirements set by the school district (M = 2.40), influence from a professional organization (M = 2.16), and modeling after the high school program they attended (M = 2.13) had a low degree of influence. Also influential in determining assessment methods were the objectives or goals of the class (M = 3.76), the demands of the ensemble's performance schedule (M = 3.76), and the number of students enrolled in the class (M = 3.38) (Table B-3).

Table B-3. Factors that influence assessment methods (N = 45)
Factor | Rating mean
Personal philosophy of education | 4.44
Amount of available class time | 4.04
Objectives or goals of the class | 3.98
Demands of the ensemble's performance schedule | 3.76
Number of students enrolled in the class | 3.38
Influence from your music colleagues | 3.00
Professional development | 3.00
Expectations of the students | 2.98
Modeled after a colleague's program | 2.82
Graduate coursework | 2.58
Expectations of your school principal | 2.56
Undergraduate coursework | 2.53
Expectations of the students' parents | 2.49
Requirements set by the school district | 2.40
Influence from a professional organization | 2.16
Modeled after the high school program you attended | 2.13

Participants were also presented with a list of sixteen possible purposes for assessing students in band. The directors rated the importance of these purposes using a 5-point Likert-type scale ranging from 1 (not at all important) to 5 (extremely important). Directors responded that providing feedback to students (M = 4.49) and identifying student needs (M = 4.38) were among the most important purposes. To determine whether students were practicing at home (M = 3.47), to establish or maintain credibility for the band program (M = 3.47), and to rank students according to individual performance levels (M = 3.27) were the least important purposes of student assessment (Table B-4).

Table B-4. Purposes of student assessment (N = 45)
Purpose | Rating mean
To provide feedback to students | 4.49
To identify individual student needs | 4.38
To determine future instructional direction | 4.29
To identify general class needs | 4.20
To demonstrate student accountability for learning | 4.18
To determine what concepts students are failing to understand | 4.18
To determine whether instruction has been successful | 4.16
To motivate students to practice their instruments | 4.11
To set or maintain class standards | 4.04
To determine the level of musical preparedness for public performance | 4.00
To help students prepare for public performance | 3.93
To provide feedback to parents | 3.60
To establish or maintain credibility for the band program | 3.47
To determine whether students are practicing at home | 3.47
To rank students according to individual performance levels | 3.27

Respondents were asked to estimate the weight that a set of 15 components had in their band grade calculations. For this question, the directors used a 5-point Likert-type scale with the following assignments: 1 = 0%; 2 = 1 to 25%; 3 = 26 to 50%; 4 = 51 to 75%; and 5 = 76 to 100%. Performances (M = 3.09), performance-based tests (M = 2.95), and participation (M = 2.91) ranked very high in the percentages assigned by the directors. Peer assessment (M = 1.29), student self-assessment (M = 1.26), and portfolios (M = 1.25) were among the lowest-ranked components (Table B-5).

Table B-6 shows the results of the final survey question, which asked directors to assign the percentages of weight that overall components should have in a balanced assessment protocol for band. The directors assigned the following mean percentages to these components: (a) individual testing and evaluation, 34.72%; (b) performance attendance and contribution, 32.79%; and (c) rehearsal attendance and contribution, 31.79%. The mean results suggest that the directors believe an optimal assessment protocol should weight the three components nearly equally.

Table B-5. Percentage use of assessment components (N = 45)
Component | Rating mean
Performances | 3.09
Performance-based tests | 2.95
Participation | 2.91
Attendance grades | 2.56
Conduct/discipline | 2.21
Written tests/worksheets | 2.00
Attitude grades | 1.97
Practice log/journals | 1.68
Sight-reading tests | 1.45
Smart Music | 1.34
Requirement checklists | 1.31
Computer program grades | 1.30
Peer assessment | 1.29
Student self-assessment | 1.26
Portfolios | 1.25

Table B-6. Model assessment component percentages (N = 44)
Assessment component | Mean percentage
Individual testing and evaluation | 34.72
Performance attendance and contribution | 32.79
Rehearsal attendance and contribution | 31.79
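For example, reading Table B-5 against the response scale above: the mean of 3.09 for performances suggests that the typical director assigned performances a weight in the 26 to 50% band of the course grade, while means near 1.25 (such as portfolios) correspond to weights at or near 0%.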

BIOGRAPHICAL SKETCH

John P. LaCognata was appointed Assistant Professor of Music and Director of Bands at the University of North Carolina Wilmington in 2010. His responsibilities include supervising the band program; conducting the Wind Symphony, Chamber Winds, and Pep Band; and teaching Basic Conducting and Applied Trumpet. In addition, he will conduct the New Horizons Concert Band for the UNCW Osher Lifelong Learning Institute (OLLI). Mr. LaCognata received his Bachelor of Science in Music Education from the University of Illinois (1986), his Master of Music in Trumpet Performance from Auburn University (1989), and a PhD in Music Education with an emphasis in wind conducting from the University of Florida (2010), where he was awarded a Doctoral Teaching Fellowship.

Prior to his appointment at UNCW, LaCognata held a variety of teaching positions throughout his twenty-four-year career as a music educator. He served on the faculties of Southeastern Oklahoma State University, Louisiana State University, and Iowa State University, and at the secondary level he held positions at Hillcrest High School (Country Club Hills, Illinois), Tavares High School (Tavares, Florida), Cypress Creek High School (Orlando, Florida), and Winter Park High School (Winter Park, Florida). At Winter Park, the band program received recognition within the state of Florida and throughout the country under his leadership. The Sound of the Wildcats Marching Band made appearances at the 2005 AutoZone Liberty Bowl in Memphis, Tennessee; the 2002 Blue Cross Blue Shield Fiesta Bowl National Band Championship in Phoenix, Arizona; and the 2000 Sylvania Alamo Bowl in San Antonio, Texas. The Wind Ensemble at Winter Park performed at the 2002 Bands of America National Concert


Band Festival in Indianapolis, Indiana, and was a featured ensemble at the "President's Concert" at the 2007 Florida Music Educators' Association Conference in Tampa, Florida. The highlight of Mr. LaCognata's tenure at Winter Park was the Wind Ensemble's performance at the 60th Anniversary of the Midwest Clinic in Chicago, Illinois, in 2006.

Mr. LaCognata is an active adjudicator, clinician, and performer. He has served as a guest conductor and clinician for bands and honor bands throughout the United States. He is a former member of the Cathedral Brass and a freelance trumpet player. His professional affiliations include the College Band Directors National Association, the Music Educators National Conference, and the International Trumpet Guild.