Citation
Construction, Validation, and Administration of a Diagnostic Test of Cello Technique for Undergraduate Cellists

Material Information

Title:
Construction, Validation, and Administration of a Diagnostic Test of Cello Technique for Undergraduate Cellists
Creator:
Mutschlecner, Timothy
Place of Publication:
[Gainesville, Fla.]
Publisher:
University of Florida
Publication Date:
Language:
english
Physical Description:
1 online resource (163 p.)

Thesis/Dissertation Information

Degree:
Doctorate (Ph.D.)
Degree Grantor:
University of Florida
Degree Disciplines:
Music Education
Music
Committee Chair:
Brophy, Timothy S.
Committee Members:
Hoffer, Charles R.
Jennings, Arthur C.
Houston, Joel F.
Graduation Date:
8/11/2007

Subjects

Subjects / Keywords:
Cellos ( jstor )
Educational evaluation ( jstor )
High school students ( jstor )
Music education ( jstor )
Music students ( jstor )
Music teachers ( jstor )
Musical performance ( jstor )
Rating scales ( jstor )
Teachers ( jstor )
Test scores ( jstor )
Music -- Dissertations, Academic -- UF
cello, diagnostic, education, music, string, teaching, technique, tests
Genre:
Electronic Thesis or Dissertation
born-digital ( sobekcm )
Music Education thesis, Ph.D.

Notes

Abstract:
The purpose of this study was to construct, validate, and administer a diagnostic test of cello technique for use with undergraduate cellists. The test consisted of three parts: (1) a written test, which assessed a student's understanding of fingerboard geography, intervals, pitch location, and note reading; (2) a playing test, which measured a student's technique through the use of excerpts from the standard repertoire for cello; and (3) a self-assessment form, through which students could describe their experience, areas of interest, and goals for study. A criteria-specific rating scale with descriptive statements for each technique was designed to be used with the playing test. The written test, playing test, and self-assessment were pilot-tested with five undergraduate students at a university in the southeast. A validation study was conducted to determine to what extent teachers felt this test measured a student's technique. Nine cello teachers at the college and preparatory levels were asked to evaluate the test. The test was administered to 30 undergraduate cellists at universities located in the southeastern region of the United States. Strong interitem consistency was found for the written test (rKR20 = .95). A high internal consistency of items from the playing test was found (α = .92). Interjudge reliability of the playing test was high, as measured by comparing the independent evaluations of two judges with the researcher's evaluations using Pearson's r (Judge A r = .92; Judge B r = .95). Other conclusions drawn from the study include: (1) piano experience has a significant positive effect on the results of the playing test (R2 = .15); (2) the playing test is a good predictor of teacher rankings of their students in terms of technique; (3) year in school, degree program, and years of playing experience were not significant indicators of students' playing ability as measured by this test.
Participating teachers described this test as a valuable tool for evaluating students and charting their course of study. They found it to be an efficient means to identify a student's strengths and weaknesses in cello technique. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (Ph.D.)--University of Florida, 2007.
Local:
Adviser: Brophy, Timothy S.
Statement of Responsibility:
by Timothy Mutschlecner.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Mutschlecner, Timothy. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Resource Identifier:
660162394 ( OCLC )
Classification:
LD1780 2007 ( lcc )

Downloads

This item has the following downloads:

mutschlecner_t.pdf



Full Text





and reading comprehension (Iowa Tests of Educational Development, Hoover, Dunbar, Frisbie,

Oberley, Bray, Naylor, Lewis, Ordman, and Qualls, 2003). Using a regression analysis,

Gromko determined the smallest combination of variables that predicted music sight-reading ability, as

measured by the WFPS. The results were consistent with earlier research, suggesting that music

reading draws on a variety of cognitive skills including visual perception of patterns rather than

individual notes.

The WFPS has its greatest validity as a test for sight reading. Sight reading is a composite

of a variety of skills, some highly specialized. Using only this test to rank students on

musicianship, technique or aptitude would be inappropriate, however. This test design reveals a

certain degree of artificiality; the use of the measure as a scoring unit and choice of ignoring

pauses between measures are somewhat contrived. Nevertheless, Watkins and Farnum

succeeded in developing the most reliable and objective performance testing instrument in their

day.

Robert Lee Kidd

Kidd (1975) conducted research for his dissertation concerning the construction and

validation of a scale of trombone performance skills at the elementary and junior high school

levels. His study exemplifies a trend toward more instrument-specific research. Kidd focused

on the following questions:

* What performance skills are necessary to perform selected and graded solo trombone
literature of Grades I and II?
* What excerpts of this body of literature provide good examples of these trombone
performance skills?
* To what extent is the scale a valid instrument for measuring the performance skills of
solo trombonists at the elementary and junior high school level?
* To what extent is the scale a reliable instrument?









APPENDIX E
THE WRITTEN TEST

The Diagnostic Test of Cello Technique

Written Test


Timothy M. Mutschlecner



* Fingerboard Geography

* Interval Identification

* Pitch Location and Fingering

* Single-Position Fingering

* Bass, Treble, and Tenor Clef Note Identification





STUDENT'S NAME













Double Stops

[Excerpt from Sonata in G Major by G.B. Sammartini, 1st movement; a tempo, ♩ = 76]

Alfred Publishing

[Excerpt from Suite No. 3: Allemande, by J.S. Bach; ♩ = 106]

© Bärenreiter Music Corporation


is needed. Other music educators believe that any assessment is inappropriate as either
too quantitative or too mechanical (p. 210).

That some applied music teachers believe that they have no need for methods to assess

technique beyond their own listening skill is understandable. Most have spent their lives refining

evaluative skills: first, of their own playing, and then that of their students. These teachers may

feel it insulting to suggest that a test is better than they are at diagnosing a student's strengths and

weaknesses. However, these same teachers would not think twice about having a diagnostic test

run on their car's electrical system if it were acting strangely. If a diagnostic test of cello technique

could be shown to give a reasonably accurate and rapid assessment of a student's playing level

and particular needs, skeptical teachers might come to appreciate the test's pragmatic value.

Aristotle in his Politics stated what is implied by every music school faculty roster: "It

is difficult, if not impossible, for those who do not perform to be good judges of the performance

of others" (p. 331). These philosophic roots may help to explain why teachers of applied music

are almost always expected to be expert performers. Skills of critical listening required of a

teacher must be refined and molded in the furnace of performance; these listening skills are the

essential abilities that a music teacher cannot do without. Because music performance involves

competence in the cognitive, affective, and psychomotor domains of learning, authentic

assessment must extend beyond single criterion, bi-level tests of the type appropriate for math or

spelling. No single test can measure all factors that go into a performance; at best a single test

may evaluate only a few aspects of a student's playing.

Two contemporary philosophical views on the role of evaluation in music are those of

Bennett Reimer (1989/2003), and David Elliott (1995). Though these scholars share many beliefs

about the role and value of universal music education, they represent two poles of thought









The Diagnostic Test of Cello Technique


Section Two: Playing Examination

Part One: Left Hand Technique




Scales

[Excerpt from Concerto in G Minor for Two Cellos, RV 531, by A. Vivaldi, 1st movement; Allegro, ♩ = 106]

Alfred Publishing

[Excerpt from Danse Rustique, Op. 20, No. 5 by W.H. Squire; Allegro, ♩ = 98]

Stainer & Bell Limited









Subjective as well as objective information shapes our systems of evaluation. As Boyle

and Radocy (1987) observe, subjective information tends to vary from observer to observer and

its value in informing decision making is limited. Objective information, by definition, is

relatively unaffected by personal feelings, opinions, or biases. Musical evaluation should not be

limited to gathering only objective data but should include subjective observations as well.

Although certain aspects of musical performance can be measured with scientific precision, such

as vibrato width or decibel levels, the complex multi-faceted nature of music makes the

reliability of any measure less than perfect. This observation need not discourage music

educators, but rather help them recognize the need for stronger objective criteria for evaluation.

A music educator's personal philosophy of assessment is not tangential to his or her work, but

an essential base from which to define and direct teaching. Brophy (2000) explains the need for

a philosophy of assessment:

A personal assessment philosophy is an essential element in the development of a general
teaching philosophy. Exploring one's reasons for being a music teacher should
inevitably reveal personal reasons and motivations for believing that assessment is
important, including why it is important. The depth of one's commitment to music
education as a profession is also a fairly reliable predictor of one's commitment to
assessment as an important aspect of the music program (p. 3).

Deciding what is important for students to learn and why it is important determines how one

will assess what students know. Attitudes toward assessment directly influence the content and

quality of teaching. Inevitably, a teacher's philosophy of assessment will be most influenced by

how he or she was taught and evaluated as a student. This may help explain the range of

attitudes noted by Colwell (2006):

Evidence from learning psychology reveals that assessment properly conducted makes a
major difference in student learning and when incorrectly used, a corresponding negative
effect. The current hype, however, has not produced much action in the United States,
Canada, or Great Britain. To many music educators, assessment is so much a part of
instruction-especially in achieving goals in performance-that they do not believe more









TABLE OF CONTENTS

ACKNOWLEDGMENTS .......................................................................... 4

LIST OF TABLES ................................................................................. 9

DEFINITION OF TERMS ...................................................................... 10

ABSTRACT ..........................................................................................

CHAPTER

1 INTRODUCTION ............................................................................ 13

Purpose of Study ........................................................................ 14
Research Questions .................................................................... 14
Delimitations .............................................................................. 14
Significance of the Study ............................................................ 14

2 REVIEW OF LITERATURE ............................................................ 16

Introduction ................................................................................ 16
Philosophical Rationales ............................................................. 16
Bennett Reimer ..................................................................... 19
David Elliott ......................................................................... 21
Comparing and Contrasting the Philosophic Viewpoints of Reimer and Elliott ... 22
Theoretical Discussion ................................................................ 23
Assessment in Music: Theories and Definitions ..................... 23
Constructivism and Process/Product Orientation ................... 26
Definitions ............................................................................ 27
Research ..................................................................................... 29
The Measurement of Solo Instrumental Performance ................. 29
John Goodrich Watkins ........................................................ 29
Robert Lee Kidd ................................................................... 31
Janet Mills ............................................................................ 32
The Use of Factor Analysis in Performance Measurement ......... 34
Harold F. Abeles .................................................................. 34
Martin J. Bergee ................................................................... 36
The Development of a Criteria-Specific Rating Scale ................. 37
The Measurement of String Performance ................................... 39
Stephen E. Farnum ............................................................... 39
Stephen F. Zdzinski and Gail V. Barnes ............................... 41
Summary: Implications for the Present Study ............................. 42









valuable. If a student expresses the desire to be able to play "anything set before me," he or she

would be likely to respond enthusiastically to a rapid, intense survey of a wide variety of cello

literature. For the student who specifically mentions perfecting intonation as a goal, there are

studies and approaches that would be recommended.

The question, "What areas of technique do you feel you need the most work on?" elicited

even more specific responses such as shifting, general knowledge of higher positions, fluid bow

arm, relaxing while playing, exploring musical phrasing, etc. These responses help give the

teacher a window into the student's self-awareness. They could become excellent starting

points for examining technique and would go far in helping technical study be goal-directed

rather than a mechanical process.

The final section of the Student Self-Assessment Profile had the students summarize their

goals for six-month, one-, two-, four-, and ten-year periods. Responses showed students had clear

ideas about what they wanted to do after school, such as orchestral auditions or graduate school.

One revision made for the present study was to ask students what they needed to do to

accomplish their goals. A personal commitment to the plan of study is essential for ensuring the

student's motivation to accomplish the goals formulated by teacher and student together. For

example, if a student seriously wants to compete for an orchestral job, preparation must begin

long before the position opening is announced, through study of orchestral excerpts, a concerto,

and the Suites for Unaccompanied Cello by J.S. Bach. It is incumbent upon the teacher to

discuss these kinds of issues with students who express ambitions to play professionally in an

orchestra.













Arpeggios

[Excerpt from Etude, Op. 120, No. 13, by F. Dotzauer; Allegro]

© Carl Fischer, LLC

[Excerpt from Sonata in G Major, by G.B. Sammartini, 1st movement; Allegro non troppo, ♩ = 80]

© Alfred Publishing

[Excerpt from Fantasy Pieces, Op. 73 by R. Schumann; Zart und mit Ausdruck]

International Music Company









2. This test can be used as a model for violin, viola, and bass

diagnostic tests of technique.

3. Future studies should explore the relationship of theoretical knowledge and

performance ability on the cello.

As testing becomes a major focal point in discussions on improving

education, questions regarding the value and purpose of assessment will increasingly be raised.

Diagnostic evaluation, because of its capacity to inform teaching, is an important component of

music education, including applied music. Tools like the Diagnostic Test of Cello Technique

help clarify for both teachers and students what needs to be learned. Along with existing

approaches to evaluation, music educators will continue to seek better objective means to assess

musical behavior.

Normative assessment has limited value in the arts; students come from such diverse

backgrounds and experiences that their work must be judged by established criteria, not by

comparison. The effectiveness of instrumental teaching depends on how clearly performance

objectives are communicated to the student. Well-defined performance criteria result in clear,

objective goals. In music, as in life, when the target is clear, it is easier to hit the mark.









stroke (M = 8.46) and lowest on pizzicato (M = 6.06). Discussion of the significance of these

mean scores is found in Chapter Five.

Comparison of Left Hand Technique and Bowing Stroke Scores

The total mean scores were calculated for the two sections of the Playing Test: Left Hand

Technique (M = 7.21) and Bowing Strokes (M = 7.31). Students performed at very similar levels

for both sections and performed uniformly; i.e., higher-scoring students did well on both sections

and lower-scoring students did less well on both sections.

Comparison of Playing Test Scores and Teacher-Ranking

To determine the predictive validity of the Playing Test, teachers from the six music

schools participating in this research were asked to rank their students from lowest to highest in

terms of their level of technique. Five of the six teachers responded to this request. These

rankings were compared to the rank-order based on the Playing Test scores. The results are

shown in Table 4-6.

Two teachers (Schools A and B) ranked their students in exactly the same order as the

Playing Test ranking (rs = 1.0). Using the Spearman rank-order correlation, the correlations for

the other three schools that responded were also positive and strong (rs = .65, .84, and .76,

respectively). Results indicate students' performance on the Playing Test closely corresponds to

the level of their technique as perceived by their teachers. The Playing Test is criterion-

referenced and not designed to be used as a norm-referenced test. However, the strong positive

correlations of the teachers' rank-orders of their students with the rank order of the scores on

the Playing Test suggest that this measure is a valid means of determining undergraduate cello

students' technical ability.
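For readers who wish to verify this statistic, the Spearman rank-order correlation for two complete, tie-free rankings can be computed directly from the rank differences. The rankings below are hypothetical, not data from this study:

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank-order correlation for two complete rankings with no ties."""
    n = len(rank_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical five-student studio: teacher's ranking vs. Playing Test ranking.
teacher_rank = [1, 2, 3, 4, 5]
test_rank = [1, 3, 2, 4, 5]
print(spearman_rho(teacher_rank, test_rank))  # 0.9
```

Identical rankings, as at Schools A and B, yield a coefficient of 1.0.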









(rpb = .80) and item 31 (rpb = .82) of the Pitch Location and Fingering section had the two

highest correlations to the total test score. The range of difficulty level (1.0–.80) indicates that

the Written Test is not at an appropriate level of difficulty for undergraduate cellists.
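The point-biserial coefficient reported here relates a single right/wrong item to the total test score. A minimal sketch, using invented responses rather than study data:

```python
from statistics import mean, pstdev

def point_biserial(item, totals):
    """Correlation between a dichotomous item (0/1) and total test scores."""
    correct = [t for i, t in zip(item, totals) if i == 1]
    incorrect = [t for i, t in zip(item, totals) if i == 0]
    p = len(correct) / len(item)  # item difficulty (proportion answering correctly)
    return (mean(correct) - mean(incorrect)) / pstdev(totals) * (p * (1 - p)) ** 0.5

# Hypothetical 0/1 responses to one item, paired with five students' total scores.
item = [1, 1, 1, 0, 0]
totals = [10, 9, 8, 6, 5]
print(round(point_biserial(item, totals), 2))  # 0.92
```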

Using Pearson's r, a low positive correlation was obtained between student scores on the

Written and Playing Tests (r2 = .16), suggesting little relationship between these scores. The

cognitive knowledge required to do well on the Written Test may therefore be

distinct from the psychomotor ability needed to demonstrate the techniques found in the Playing

Test.

Part Two: The Playing Test

Scoring the Playing Test

A discussion of the criteria-specific rating scale used to score the Playing Test is found in

Chapter Three. Ten techniques were evaluated using an additive rating scale ranging from

0 to 10 points per item. Seven techniques were evaluated using a continuous rating scale with a

range of 2 to 10 points possible. A zero score resulted from none of the criteria being

demonstrated for an additive item. The total possible score for the combined sections of the

Playing Test was 170.
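The 170-point maximum follows from the item structure: ten additive items and seven continuous items, each worth up to 10 points, awarded in two-point steps. A small sketch of this scoring arithmetic (the function name is for illustration only):

```python
def playing_test_total(additive, continuous):
    """Total Playing Test score: 10 additive items (0-10 points) plus
    7 continuous items (2-10 points), all awarded in two-point steps."""
    assert len(additive) == 10 and all(s in range(0, 11, 2) for s in additive)
    assert len(continuous) == 7 and all(s in range(2, 11, 2) for s in continuous)
    return sum(additive) + sum(continuous)

print(playing_test_total([10] * 10, [10] * 7))  # 170 -- the maximum possible score
```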

Results from the Playing Test

Reliability was estimated by using Cronbach's Alpha to find the relationship between

individual items on the Playing Test. The results (α = .92) indicate high internal consistency of

test items: this suggests that the means of assessing each technique are well-matched.
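Cronbach's alpha compares the sum of the individual item variances to the variance of the total scores. A minimal sketch with toy data, not the study's scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per test item, aligned by student."""
    k = len(items)
    sum_item_var = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Two perfectly parallel items yield the maximum alpha of 1.0.
print(cronbach_alpha([[2, 4, 6], [2, 4, 6]]))  # 1.0
```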

Table K-3 (Appendix K) presents the raw scores of the Playing Test items and the

composite means and standard deviations. Table 4-5 lists these items from highest to lowest

based on their mean scores. These data reveal that students scored highest on the détaché bowing









teachers was invaluable. Dr. Elizabeth Cantrell and Dr. Christopher Haritatos, as independent

judges, spent many hours viewing video-taped recordings of student performances. Their care in

this critical aspect of my study was much appreciated.

Dr. Tanya Carey, one of the great living cello pedagogues, provided valuable

suggestions for the design of this test. Thanks go as well to the many Suzuki cello teachers who

participated in this research. Their willingness to share ideas and provide suggestions on ways to

improve my test was heartening.

Finally, thanks go to the 30 students who agreed to participate in this research. The

effort they made to prepare and play their best is gratefully acknowledged. Future cellists are in

their debt for being pioneers in the field of assessment in string performance.









cello technique created for this study is designed to serve the latter purpose. It falls into the

category of a narrow content focus test, which is defined as intensive in nature (Katz, 1973).

This type of test is appropriate for judging an individual's strengths and weaknesses. It allows

for intra-individual comparisons, such as ability levels of differing skills. Intensive tests provide

the basis for remedial instruction, as well as providing indications of the means of improving areas

of weakness.

The purpose of a test largely determines what type of test needs to be chosen or

constructed for assessment purposes. If a test's primary purpose is to discriminate among

individuals, then the test is norm-referenced (Boyle and Radocy, p. 75). An individual

performance is judged in comparison to the performances of his or her peers. This type of test is

appropriate for making comparisons among individuals, groups or institutions.

"Criterion-referenced tests describe student achievement in terms of what a student can

do and may be evaluated against a criterion or absolute standard of performance" (Boyle, p.

253). Such a test is ideally suited to individual performance; the challenge for this test is how to

establish the criteria to be used as a standard. If a performance evaluation uses excerpts

accurately revealing a student's ability in demonstrating specific tasks, then that test has good

content validity; the test materials coincide with the skills being tested.

The focus of performance assessment may be global, i.e. a judgment of its totality, or

specific, i.e. a judgment of only particular aspects of performance. A diagnostic test would be

expected to use criteria that reveal specific aspects of performance, although the evaluation could

still include global statements about overall playing ability. The use of global and specific

approaches is explored in the review of literature at the end of this chapter.









(sequentially more demanding performance criteria) and "additive" (nonsequential performance

criteria). When a technique was measured using a continuous rating scale, the number next to the

written criterion that corresponded to the perceived level of skill was circled. When using the

additive rating scale, the primary investigator marked the box beside each of the written criteria

that described one aspect of the performance demonstrating mastery of the skill. Both the

continuous and the additive rating scale have a score range of 2-10 points, as two points were

awarded for each level of achievement or each performance competency. It was theoretically

possible for a student to score 0 on an item using an additive scale if their performance matched

none of the descriptors. Seven continuous rating scales and ten additive rating scales constituted

the Playing Test evaluation form. The overall level of performance achievement for each student

was calculated as the sum of the scores for each area of technique.

The Student Self-Assessment Profile

The last fifteen minutes was devoted to the completion of the Written Test (Appendix E)

and the Student Self-Assessment Profile (Appendix J). To maintain the highest control in

administering the test, the primary investigator remained in the room while the Written Test was

taken, verifying that neither a piano nor cello was referred to in completing the test. The Written

Test evaluation form is provided in Appendix F.

Rationale for the Assessment Methodology

Saunders and Holahan (1997) have observed that traditional rating instruments used by

adjudicators to determine a level of quality and character (e.g., outstanding, good, average,

below average, or poor) provide little diagnostic feedback. Such rating systems, including

commonly used Likert scales, cause adjudicators to fall back on their own subjective opinions

without providing a means to interpret the results of the examination in new ways. Furthermore,









Results from the Written Test

Table K-1 (Appendix K) presents the raw scores of the Written Test items and the

composite means and standard deviations. Reliability of the Written Test was obtained using the

Kuder-Richardson formula, revealing the internal consistency of test items: rKR20 = .95. This

result indicates that despite the narrow range of scores, the Written Test has strong interitem

consistency.
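The KR-20 computation behind this reliability estimate can be sketched as follows. The formula is the standard Kuder-Richardson formula 20; the small response matrix in the test below is hypothetical illustrative data, not the study's responses.

```python
# Kuder-Richardson formula 20 for dichotomously scored (0/1) items:
# KR20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores),
# where k is the number of items, p is each item's proportion correct,
# and q = 1 - p. Population variance is used, matching p*q as the
# population variance of a single 0/1 item.
def kr20(responses):
    # responses: list of rows, one per student; each row is a list of 0/1 item scores
    n_items = len(responses[0])
    n_students = len(responses)
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n_students
    var_total = sum((t - mean) ** 2 for t in totals) / n_students
    p = [sum(row[i] for row in responses) / n_students for i in range(n_items)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    return (n_items / (n_items - 1)) * (1 - sum_pq / var_total)
```

A value near 1, like the .95 reported here, indicates that students who answer one item correctly tend to answer the others correctly as well.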

Table 4-1 presents the data from a regression analysis for year in school (freshmen,

sophomore, junior, and senior) and the Written, Playing, and combined Test scores. Freshmen

classification emerged as a significant predictor (p < .05) for the Playing Test and combined test

scores. The R-squared value of .28 indicates that freshmen classification accounted for 28% of

the variance in the Playing Test Scores. For the combined Written and Playing Test scores, the

R-squared value of .265 indicates that freshmen classification accounted for 27% of the variance.

With the exception of these findings, year in school does not seem to bear a relationship to

technical level, as measured by the Written and Playing Test.
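The R-squared values above come from regressing test scores on a dummy-coded predictor (1 = freshman, 0 = otherwise) and taking 1 - SS_residual / SS_total. A minimal sketch of that computation, using hypothetical scores rather than the study's data:

```python
# R-squared for a simple least-squares regression of y on a single
# predictor x: fit y = a + b*x, then R^2 = 1 - SS_res / SS_tot.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # slope and intercept from the normal equations
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    a0 = my - b * mx
    ss_res = sum((c - (a0 + b * a)) ** 2 for a, c in zip(x, y))
    ss_tot = sum((c - my) ** 2 for c in y)
    return 1 - ss_res / ss_tot

# dummy-coded freshman status and illustrative Playing Test scores
is_freshman = [1, 1, 0, 0]
scores = [1, 3, 5, 7]
```

With a 0/1 predictor, the fitted values are simply the two group means, so R-squared here measures how much of the score variance lies between freshmen and non-freshmen.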

Exploring the relationship of test scores and student's degree program was complicated,

as there was a mixture of music performance majors, double majors, music education majors,

music therapy majors, and music minors. One school did not allow freshmen to declare music

performance as a major until their sophomore year, insisting they enter the studios initially as

music education majors. If one classified double majors in the music performance category, then

there were 21 music performance majors and nine students in the "other" category. A regression

analysis was conducted with major/minor distinction as a predictor of the Written, Playing, and

total scores. No effect of major or minor distinction was found for the Written Test (R2= .001).

Results approached significance for the Playing Test (p = .08) and were not significant for the










Table K-2. Raw Score, Percent Score, Frequency Distribution, Z Score, and Percentile Rank of
Written Test Scores



Raw Score    Percent Score    Frequency    Z Score    Percentile Rank


59 62.00 2 -2.30 1.67
62 66.00 1 -2.04 8.33
65 68.00 1 -1.78 11.67
68 72.00 1 -1.51 15.00
73 77.00 1 -1.07 18.33
81 86.00 1 -0.37 21.67
85 89.00 1 -0.02 25.00
86 91.00 3 .07 28.33
87 92.00 2 .16 38.33
88 92.00 1 .25 45.00
90 95.00 2 .42 48.33
91 96.00 2 .51 55.00
92 97.00 3 .60 61.67
93 98.00 3 .69 71.67
94 99.00 3 .77 81.67
95 100.00 3 .86 91.67
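The Z scores and percentile ranks in Table K-2 can be recomputed directly from the raw-score distribution. The sketch below uses the tabled frequencies; the percentile-rank convention (cumulative frequency below the score, plus 0.5, over N) is inferred from the tabled values rather than stated in the text.

```python
# Recompute the Z scores and percentile ranks of Table K-2 from the
# raw Written Test score distribution (N = 30).
from math import sqrt

# (raw score, frequency) pairs from Table K-2
dist = [(59, 2), (62, 1), (65, 1), (68, 1), (73, 1), (81, 1), (85, 1),
        (86, 3), (87, 2), (88, 1), (90, 2), (91, 2), (92, 3), (93, 3),
        (94, 3), (95, 3)]

scores = [s for s, f in dist for _ in range(f)]
n = len(scores)                      # 30 students
mean = sum(scores) / n               # 85.2
sd = sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))  # sample SD

def z_score(x):
    return (x - mean) / sd

def percentile_rank(x):
    # cumulative frequency below x, plus 0.5, as a percentage of N
    cf_below = sum(f for s, f in dist if s < x)
    return (cf_below + 0.5) / n * 100
```

Running this reproduces the tabled values, e.g. a raw score of 59 gives z = -2.30 and a percentile rank of 1.67, while a raw score of 95 gives a percentile rank of 91.67.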









LIST OF REFERENCES


Abeles, H.F. (1973). Development and validation of a clarinet performance
adjudication scale. Journal of Research in Music Education, 21, 246-255.

Aristotle. (1943). Politics (B. Jowett, Trans.) (1340b24). New York: Random House.

Aristotle, Nichomachean Ethics. Bk. 2 (1103a26-1103b2) as paraphrased by Durant, W.
(1967). The Story of Philosophy. New York: Simon and Schuster.

Asmus, E.P. & Radocy, R.E. (2006). Quantitative Analysis. In R. Colwell (Ed.), MENC
handbook of research methodologies (pp.95-175). New York: Oxford University
Press.

Bergee, M. J. (1987). An application of the facet-factorial approach to scale
construction in the development of a rating scale for euphonium and tuba music
performance. Doctoral dissertation, University of Kansas.

Berman, J., Jackson, B. & Sarch, K. (1999). Dictionary of bowing and pizzicato terms.
Bloomington, IN: Tichenor Publishing.

Blum, D. (1997). Casals and the art of interpretation. Berkeley and Los Angeles, CA:
University of California Press.

Boyle, J. (1970). The effect of prescribed rhythmical movements on the ability to read
music at sight. Journal of Research in Music Education, 18, 307-308.

Boyle, J. (1992). Evaluation of music ability. In D. Boyle (Ed.), Handbook of research
on music teaching and learning (pp. 247-265). New York: Schirmer Books.

Boyle, J. & Radocy, R.E. (1987). Measurement and evaluation of musical experiences.
New York: Schirmer Books.

Brophy, T. S. (2000). Assessing the developing child musician: A guide for general
music teachers. Chicago: GIA Publications.

Colwell, R. (2006). Assessment's potential in music education. In R. Colwell (Ed.),
MENC handbook of research methodologies (pp. 199-269). New York: Oxford
University Press.

Colwell, R. & Goolsby, T. (1992). The teaching of instrumental music. Englewood
Cliffs, NJ: Prentice Hall.

Eisenberg, M. (1966). Cello playing of today. London: Lavender Publications.









APPENDIX B
VALIDITY STUDY

A validity study was conducted following the pilot study to determine to what extent

teachers felt this test measured a student's technique (Mutschlecner, 2005). Cello teachers (N=

9) on the college and college preparatory level agreed to participate in this validity study by

reading all sections of the diagnostic test and then responding to questions in an evaluation form

(Appendix C).

In answer to the question, "To what extent does this test measure a student's technique,"

responses ranged from "Very extensively," and, "Rather completely," to, "The written part tests

knowledge, not technique." Fifty-six percent of the teachers felt the test measured a student's

technique in a significant way. Sixty-seven percent of the respondents suggested that sight-

reading difficulties might mask or obscure an accurate demonstration of a student's technical

ability. As one teacher said, playing the excerpts "... shows if they have worked on this

repertoire. If they are reading it, it shows their reading ability." Two teachers came up with the

same solution: Provide the playing test to students early enough for them to develop familiarity

with the passages which they are asked to play. This would not eliminate the inherent advantage

students would have who had studied the piece from which the excerpt was derived, but it could

mitigate some effects, such as anxiety or poor sight-reading skill, that adversely affect

performance. These suggestions were implemented in the present study.

Criticism of the Written Examination included the concern that, "some fine high school

students ready for college might not know intervals yet." In response to this, a new section of

the Written Examination was developed (Pitch Location and Fingering) that measures a student's

capacity to locate pitches on a fingerboard representation without the use of intervallic









education, particularly in the areas of string pedagogy and assessment. Tim has been married to

Sarah Caton Mutschlecner, a nurse practitioner, for 18 years. They have three daughters:

Audrey, age 16; Megan, age 14; and Eleanor, age 10.










Table 4-7. Comparison of Researcher's and Independent Judges' Scoring of Student
Performances of the Playing Test


Primary Investigator:  152, 136, 156, 144, 134   (M = 144.4, SD = 9.63)
Judge A:               162, 142, 158, 142, 136   (M = 148.0, SD = 11.31)
Pearson's r = 0.92


Primary Investigator:  (M = 138.8, SD = 16.09)
Judge B:               138, 152, 104, 98, 134    (M = 125.2, SD = 8.67)
Pearson's r = 0.95
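The interjudge correlation for the Judge A comparison can be checked directly from the five paired Playing Test totals shown in Table 4-7. This is a sketch using only the scores recoverable from the table; the Judge B comparison cannot be recomputed the same way because the primary investigator's individual scores for that set of students are not reproduced.

```python
# Pearson's r between the primary investigator's Playing Test totals and
# Judge A's totals for the same five students (values from Table 4-7).
from math import sqrt

pi_scores = [152, 136, 156, 144, 134]   # primary investigator
judge_a   = [162, 142, 158, 142, 136]   # Judge A

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))
```

Evaluating `pearson_r(pi_scores, judge_a)` reproduces the r = 0.92 reported for Judge A.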









ordered their students by level of technical skill based on their assessment of the students'

playing technique. These rankings were correlated to those based on the Playing Test results as a

measure of validity. The data analysis was designed to explore the following research questions:

1. To what extent can a test of cello playing measure a student's technique?

2. To what extent can a criteria-specific rating scale provide indications
of specific strengths and weaknesses in a student's playing?

3. Can a written test demonstrate a student's understanding of fingerboard
geography, and the ability to apply music theory to the cello?

Participants

Written and Playing Test scores, and student answers to questions in the Student Self-

Assessment Profile were obtained (N = 30). Participants were undergraduate music majors and

minors studying cello at three private and three public universities (N = 6) in the southeastern

region of the United States.

Part One: The Written Test

Scoring the Written Test

The Evaluation Form used to tabulate the scores for the Written Test is provided in

Appendix F. Items on the Written Test were assigned points using the following system:

(1) Fingerboard Geography: 11 points. (44 pitch locations to identify were divided by 4)

(2) Interval Identification: 8 points.

(3) Pitch Location and Fingering: 32 points. (a single point was assigned for correctly

identifying both pitch and fingering)

(4) Single Position Fingering: 32 points.

(5) Bass, Treble, and Tenor Clef Note Identification: 12 points.

The total possible score for the combined sections of the Written Test was 95 points.
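The point allocation above can be tallied to confirm the stated maximum. A trivial sketch, with section names following the list above:

```python
# Written Test section weights, as listed in the scoring system above.
section_points = {
    "Fingerboard Geography": 11,
    "Interval Identification": 8,
    "Pitch Location and Fingering": 32,
    "Single Position Fingering": 32,
    "Bass, Treble, and Tenor Clef Note Identification": 12,
}

# total possible score for the combined sections
total_possible = sum(section_points.values())
```

Summing the five sections gives the 95-point maximum against which the raw scores in Table K-2 are converted to percentages.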









Single-Position Fingering

Indicate the fingering that would allow the four notes in each example
to be played in a single position. Use no open strings and no extensions
(stretches between 1 and 2).











[Music notation for examples #1 through #4 is not reproduced in this transcription]









Work on vibrato, either in general or in upper positions, was mentioned by six students.

Though it may sound paradoxical, an effortless-sounding vibrato is very difficult to produce.

Dorothy DeLay, of the Juilliard School of Music, assigned the first hour of practice to

be spent on articulation, shifting, and vibrato exercises for the left hand, and various bow strokes

for the right (Sand, 2000). Students who express a desire to develop their vibrato should be

guided with appropriate exercises, etudes, and solos.

Other areas of technique are far more easily addressed. A student who mentions sight-

reading or reading in different clefs can be easily directed to materials for study. Applying

oneself to the exercises in Rhythmic Training, by Robert Starer will benefit any student who felt

deficient in rhythm (Starer, 1969). There are materials to address virtually every technical need,

as long as the need is made apparent to the teacher.

The final question of the SSAP asks, "Summarize your goals in music and what you need

to do to accomplish these goals." The words with underlined emphasis were added based on

input from the Validity Study (Appendix B). This phrase is meant to suggest a student's

personal responsibility to follow-through with their stated goals. Table 4-11 is a transcription of

student responses to this question in their own words.

Six-month goals are short term, and reflect a student's semester-long objectives. "Work

strictly on technique, not worrying about pieces or recitals," is one example. Some one-year

goals seem naive: "To have perfect intonation." Goals are the driving forces behind one's

outward acts; playing with perfect intonation may not be attainable but that doesn't mean it isn't

a valid aspiration. One student has shown they understand the need to make some aspects of

playing virtually automatic through repetition: "Resolve all tension issues: slow loose practice-

making it a habit." Music and athletics have in common the need for drilling desired actions.








Pitch Location and Fingering (continued)


[Fingerboard diagrams (strings C, G, D, A) are not reproduced in this transcription]










Interval Identification (continued)

[Music notation for this section is not reproduced in this transcription]







the extent and breadth of a new student's experience and may indicate appropriate directions for

further study.

How interested are you in each of these areas of performance: Solo, Chamber, and
Orchestral?

Table 4-8 lists students' responses to this question. Eighty-three percent of the students

stated they either agreed or strongly agreed to having interest in solo and orchestral performance,

and ninety-three percent expressed the same for chamber music. Noting responses to this section

could be a means for teachers to initiate discussion with students about their plan of study. If a

student's greatest interest was in playing chamber music, his teacher might help to facilitate this

desire. Knowing that a student's primary goal was to win an orchestral audition would dictate in

part the choice of repertoire studied.

Other areas of performance interest?

Students listed the following areas of performing interest: jazz (n = 2), conducting (n =

1), piano accompanying (n = 1), choir (n = 1), improvisation (n = 1), bluegrass (n = 1), praise

bands (n = 1), and contemporary performance (n = 2). Teachers provided with this information

might choose to direct students to nontraditional sources of study, such as improvisation

methods, learning to read chord charts, or playing by ear.

What are your personal goals for studying the cello?

Responses to this question are provided in Table 4-9. Five out of the twenty-nine

students (17%) listed "teaching privately" as a goal for study. The second most frequently

mentioned goal was "orchestral performance" (10%). If this study were conducted at the

highest-ranking music conservatories in the United States, the researcher suspects that "solo

performance" might be frequently mentioned as well.









on a clarinet may not have the same high factor loadings on a string instrument where tone

production is controlled primarily by bowing technique (Zdzinski, 2002).

Through factor analysis the reliability of the new measures improved. However, with

additional research came more questions. In the Abeles (1973) and Zdzinski (2002) studies, only

the audio portions of performances were analyzed by judges. The reasons these researchers

chose not to include visual input is not addressed in their studies, but the fact that they chose to

record results using audio only may have contributed to the higher reliability found in these

studies. Gillespie (1997) compared ratings of violin and viola vibrato performance in audio-only

and audiovisual presentations. Thirty-three inexperienced players and 28 experienced players

were videotaped while performing vibrato. A panel of experts rated the videotaped performances

and then six months later rated the audio-only portion of the performances on five vibrato

factors: width, speed, evenness, pitch stability, and overall sound. While the experienced

players' vibrato was rated higher regardless of the mode of presentation, results revealed

significantly higher audiovisual ratings for pitch stability, evenness, and overall sound for

inexperienced players and for pitch stability for experienced players. The implications are that

visual impressions may cause adjudicators to be less critical of the actual sound produced.

Gillespie notes: "The visual stimuli give viewers additional information about a performance that

can either be helpful or distracting, causing them to rate the performance differently than if they

had simply heard it." He adds, "If the members of the panel see an appropriate motion for

producing vibrato, they may rate the vibrato higher, regardless if the pitch drifts slightly"

(Gillespie, p. 218). At the very least, the study points out the need for the strictest possible

consistency in the content-format given to the judges to assess. If assessment is made from an









DEFINITION OF TERMS


* Fingerboard geography - the knowledge of pitch location and the understanding of the
spatial relationships of pitches to each other
* Horizontal intervals - intervals formed across two or more strings
* Vertical intervals - intervals formed by the distance between two pitches on a single
string
* Visualization - the ability to conceptualize the fingerboard and the names and locations
of pitches while performing or away from the instrument
* Technique -
1) the artistic execution of the skills required for performing a specific aspect of
string playing, such as vibrato or staccato bowing
2) the ability to transfer knowledge and performance skills previously learned to
new musical material
* Target note - a note within a playing position used to find the correct place on the
fingerboard when shifting










Table 4-4. (continued)


Category: Pitch Location and Fingering

Item Difficulty:              .80   .83   .80   .83   .83   .83   .80
Point Bi-Serial Correlation:  0.71  0.73  0.73  0.74  0.74  0.82  0.76


Category: Single Position Fingering

[Item values whose column alignment could not be recovered: 0.07, 0.07, 0.07, 0.07,
N/A, N/A, N/A, N/A, 0.43, 0.43, 0.16, 0.15, 0.23, 0.16, 0.36, 0.36, 0.06, 0.32, 0.40,
0.35, 0.23, 0.23, 0.23, 0.23, 0.23, 0.12]

(Table continued on next page)









6. Are the excerpts chosen for the Playing Examination a valid way of determining a student's
competence in-

a) Left hand technique?



b) Bowing technique?




7. If you feel a particular excerpt is not a good predictor of a student's ability, what alternative
passage do you recommend using?





8. Would you consider using the Playing Examination as a means of assessing a new student's
technique?
Why or Why not?




9. How would you use information gathered from the Student Self-Assessment and Goal Setting
Profile in working with your students?




10. To what extent would you be willing to participate in future Field Testing of this test through
administering it to a portion of the students in your studio?




Please include any additional comments here:










Spiccato/Flying Spiccato The student's playing of spiccato indicates:
(Check All that Apply, worth 2 points each)
☐ a bounced-bow stroke with good control of the bow's rebound off the string.
☐ good tone production through control of bow pressure and speed.
☐ the bow springs lightly from the string.
☐ notes are individually activated.
☐ even use of bow distribution (Flying Spiccato excerpts).
Observations/Comments:



Sautille The student's use of sautille bowing demonstrates:
(Check All that Apply, worth 2 points each)
☐ a rapid, natural rebounding of the bow.
☐ a primary movement initiated from the wrist and hand, using a light bow hold.
☐ the bow's contact with the string is centered around the balance point of the bow.
☐ the tempo is fast enough for the bow to continue to bounce of its own momentum.
☐ the resilience of the bow stick is used to allow the bow to spring off the string.
Observations/Comments:



Pizzicato The student's playing of pizzicato illustrates:
(Check All that Apply, worth 2 points each)
☐ confidently played arpeggiated chords, using the thumb.
☐ strong, vibrant tone (as demonstrated in the Brahms excerpt).
☐ clear ringing sound in the upper register (as in the Kabalevsky excerpt).
☐ an absence of snapping sounds caused by pulling the string at too steep
an angle.
☐ an absence of buzzing or dull, thudding tones due to inadequate setting of the
left-hand fingers.
Observations/Comments:









this test has value as a diagnostic tool for students studying music through a wide variety of

degree programs, not just those majoring in performance.

A letter of introduction that explained the purpose of the study was mailed to the cello

faculty of the six schools. Upon receiving approval from the faculty cello teacher, the letter of

consent along with the Playing Test (Appendix G) was provided for each participant. One copy

of the consent form was signed and returned from each participating student. Following this,

times were arranged for each student to take the Written and Playing Test. Each student received

a copy of the Playing Test a minimum of two weeks before the test date. Included with the

Playing Test was a cover letter instructing the students to prepare all excerpts to the best of their

ability. Attention was directed toward the metronome markings provided for each of the

excerpts. Students were instructed to perform these excerpts at the tempos indicated, but not at

the expense of pitch and rhythmic accuracy.

Data Collection

The Written and Playing Test

Each participant met individually with the primary investigator for forty-five minutes.

The first thirty minutes of testing time was used for the Playing Test. Before beginning to

perform the Playing Test, students were asked to check their tuning with the pitch A-440

provided for them. Students were also asked to take a moment to visually review each excerpt

prior to performing it. Students were asked to attempt to play all the excerpts, even if some

seemed too difficult for them.

The primary investigator listened to and judged the individual student's skill level for

each performance. For each aspect of technique assessed, a five-point criteria-specific rating

scale was constructed. The Playing Test evaluation form (Appendix H) used both "continuous"









concludes that criteria for judging music must be distinctive to each form of music and therefore

incomparable to one another (p. 266). Reimer softens his stance by providing examples of

universal criteria: that is, criteria applicable to diverse musical forms. He does insist, however,

that they must be applied distinctively in each case:

Assessment of musical intelligence, then, needs to be role-specific. The
task for the evaluation community (those whose intelligence centers on issues of
evaluation) is to develop methodologies and mechanisms for identifying and assessing
the particular discrimination and connections required for each of the musical roles their
culture deems important. As evaluation turns from the general to the specific, as I
believe it urgently needs to do, we are likely to both significantly increase our
understandings about the diversities of musical intelligence and dramatically improve
our contribution to helping individuals identify and develop areas of more and less
musical capacity (p. 232).

Reimer accepts the view that there is a general aspect of musical intelligence, but

suggests that it takes its reality from its varied roles. This allows him to see evaluation in music

as a legitimate aspect of musicianship, part of the doing of music that Elliott insists on. His

philosophic position supports creating new measures of musical performance, especially as they

bring unique musical intelligence to light and aid in making connections across diverse forms of

music making.

Part Two: Theoretical Discussion

Assessment in Music: Theories and Definitions

Every era has a movement or event that seems to represent the dynamic exchange

between the arts and the society of that time. Creation of the National Standards for Art

Education is one such event. The Goals 2000: Educate America Act defined the arts as being part

of the core curriculum in the United States in 1994. That same year witnessed the publication of

Dance Music Theatre Visual Arts: What Every Young American Should Know and Be Able to Do

in the Arts (MENC, 1994). It is significant that among the nine content standards, number seven









The relatively low score for martele bowing is likely due to a lack of understanding as to

what constitutes this bow stroke. The two excerpts used for this item were moderately easy to

play. A large number of students, however, did not demonstrate the heavy, accented

articulation, and stopping of the bow on the string, which characterizes this stroke. While many

method books include a description of martele bowing, students are unlikely to have a clear

grasp of how to execute this bowing unless it is demonstrated by a teacher.

The item with the lowest score was pizzicato (M = 6.06). The excerpts chosen featured

three separate techniques: (a) arpeggiated chords using the thumb (Elgar), (b) notes with a strong

vibrant tone (Brahms), (c) clear ringing sound in the upper register (Kabalevsky). These

excerpts were not easy to sight read for students who were ill-prepared. This was the final

section in a series of excerpts requiring great concentration; mental and/or physical fatigue may

have been a factor. It is also possible that the study of pizzicato is neglected in lessons.

Intonation was the second lowest score (M = 6.20). Judge B assigned the only perfect

score given to a student. It is axiomatic that string players must be constantly vigilant about

playing in tune. Not allowing students to become tolerant of playing out-of-tune is one of the

essential roles of the teacher. Pablo Casals' words on this subject are timeless:

'Intonation', Casals told a student, 'is a question of conscience. You hear when a
note is false the same way you feel when you do something wrong in life. We
must not continue to do the wrong thing' (Blum, 1977, p. 102).

Five students (15%) mentioned intonation when asked, 'What areas of cello technique do you

feel you need the most work on' (see Chapter 4, p. 63). From this study it appears the Playing

Test may help make students more aware of the importance of work on intonation.









CHAPTER 3
METHODOLOGY

The purpose of this study was to construct, validate and administer a diagnostic test of

cello technique for use with college-level students. This test is criterion-referenced and included

both quantitative and qualitative measurements. This study was implemented in the following

stages: (a) development of an initial testing instrument, (b) administration of a pilot test, (c)

administration of a validity study, (d) administration of the final test, and (e) data analyses

procedures for the final test, including an interjudge reliability measurement. This chapter

describes the following methodological elements of the study: setting and participants,

instrumentation, data collection, data analysis, and validity and reliability procedures.

Setting and Participants

Approval for conducting this study was obtained first from the Institutional Review

Board (IRB) of the University of Florida. A copy of the informed consent letter is included in

Appendix D. The testing occurred at the respective schools of the participants, using studio or

classroom space during times reserved for this study.

College-level students (n = 30) were recruited for this study from three private and three

public universities in the southeastern region of the United States. While this demographic does

not include all the regions of the United States, the variability is considered adequate for this test,

which was not concerned with regional variations, if such variations exist, in cello students. The

participants selected were undergraduate cello students, both majoring and minoring in music.

This subject pool consisted of music performance majors (n = 16), music minors (n = 1), double

majors (n = 3), music therapy majors (n = 2), music education majors (n = 6), and music/pre-

med. students (n = 2). Using subjects from a diversity of academic backgrounds assumes that









evidence of the validity of the criteria-specific rating scales for diagnosing the strengths and

weaknesses of individual performances. The researchers noted that because three kinds of

performances (prepared piece, scales, and sight-reading) were measured, factor analysis would

provide insight into the interdependence of performance dimensions across these types of

playing. Factor analysis would indicate the constructs that guide adjudicators in the evaluation

process as well.

Saunders and Holahan's findings have implications for the present study. Their data

provide indirect evidence that criteria-specific rating scales have useful diagnostic validity.

Through such scales, students are given a diagnostic description of detailed aspects of their

performance capability, something that Likert-type rating scales and traditional rating forms

cannot provide. Such scales help adjudicators listen for specific aspects of a performance rather

than having them make a value judgment about the overall merits of a performance.

The Measurement of String Performance

Stephen E. Farnum

Because of the success obtained and reported with the Watkins-Farnum Performance

Scale, and its practical value as a sight-reading test for use in determining seating placement and

periodic measurement, it was suggested that a similar scale be developed for string instruments

(Warren, 1980). As a result, the Farnum String Scale: A Performance Scale for All String

Instruments (1969) was published. Both tests require the student to play a series of musical

examples that increase in difficulty. No reliability or validity information is provided in the

Farnum String Scale (FSS). The test manual describes four preliminary studies used to arrive at

sufficient range of item difficulty. Initially Farnum simply attempted to transpose the oboe test

from the WFPS, but he found that there was an inadequate spread of difficulty. New exercises










Table K-3. (continued)


Student Thumb Vibrato Intonation Slurred Detache Martele
Position Legato

1 8 10 8 8 8 8
2 6 8 8 8 8 10
3 10 10 6 10 10 10
4 10 8 8 10 8 4
5 10 10 8 10 8 10
6 8 8 6 10 8 8
7 6 10 6 10 10 10
8 4 8 6 10 8 6
9 8 4 6 4 10 6
10 6 10 8 10 10 10
11 6 8 6 8 10 8
12 6 10 8 8 8 8
13 8 10 6 8 10 2
14 6 4 4 8 8 4
15 6 8 4 6 6 8
16 4 6 2 8 8 2
17 6 4 4 8 8 2
18 4 6 8 8 8 4
19 6 8 8 8 10 8
20 10 8 6 9 10 10
21 2 8 6 6 2 2
22 8 6 6 4 8 6
23 6 8 4 8 8 4
24 8 10 4 8 10 4
25 6 4 4 4 8 4
26 8 8 6 10 6 6
27 8 10 8 8 10 10
28 10 8 8 10 10 8
29 10 8 6 10 10 8
30 6 10 8 10 8 10
M 7.0 7.93 6.2 8.23 8.47 6.67
SD 2.08 2.00 1.69 1.85 1.72 2.84
(Table K-3 continues on next page)









Abeles found that the six-factor structure produced from the factor analysis was

essentially the same as the a priori theoretical structure. This suggested good construct validity.

He concluded that this structure would be appropriate for classifying music performance in

general, as none of the factors seemed to reflect idiosyncratic clarinet characteristics. On the

other hand, Zdzinsky (2002) found that the factors identified to assess stringed instrument, wind

instrument and vocal performance are distinct and related to unique technical challenges posed

by each performance area.

The interjudge reliability estimates for the CPRS were consistently high (.90). Individual

factor reliabilities ranged from .58 to .98, with all factors but tone and intonation above .70.

Criterion-related validity based on correlations between CPRS total scores and judges' ratings

were .993 for group one, .985 for group two, and .978 for group three. Predictive validity (<.80)

was demonstrated between the CPRS and global performance ratings.

Martin J. Bergee

The development of a rating scale for tuba and euphonium (ETPRS) was the focus of a

doctoral dissertation by Bergee (1987). Using methods similar to Abeles's, Bergee paired

descriptive statements from the literature, adjudication sheets, and essays with a Likert scale to

evaluate tuba and euphonium performances. Judges' initial responses led to the identification of five

factors. A 30-item scale was then constructed based on high factor loadings. Three sets of ten

performances were evaluated by three panels of judges (N= 10) using the rating scale. These

results were again factor analyzed, resulting in a four-factor structure measuring the items:

interpretation/musical effect, tone quality/intonation, technique, and rhythm/tempo.

Interestingly, factor analysis produced slightly different results than in Abeles's Clarinet

Performance Adjudication Scale. Technique was unique to this measure, while articulation was









was: Evaluating music and music performances. Bennett Reimer, one of the seven music

educators on the task force appointed to write the document, discusses the central role of

evaluation in music:

Performing composed music and improvising require constant evaluation, both during the
act and retrospectively. Listening to what one is doing as one is doing it, and shaping the
sounds according to how one judges their effectiveness (and affectiveness), is the primary
doing-responding synthesis occurring within the act of creating performed sounds
(Reimer, 2003, p. 265).

Central to success is the ability to assess one's work. This assessment includes all of the content

standards, including singing, performing on instruments, improvising, and composing.

Evaluation is the core skill that is required for self-reflection in music. When a student is

capable of self evaluation, to some extent teachers have completed their most important task.

Reimer sees the National Standards as the embodiment of an aesthetic ideal, not merely a

tool to give the arts more legislative clout:

The aesthetic educational agenda was given tangible and specific formulation in the
national content standards, and I suspect that the influence of the standards will continue
for a long time, especially since their potential for broadening and deepening the content
of instruction in music education has barely begun to be realized (p. 14).

Reimer and the other members of the task force were given an opportunity to integrate a
philosophy into the national standards that values music education. With this statement they
articulated a philosophy defending the scholastic validity of the arts:

The Standards say that the arts have "academic" standing. They say there is such a thing
as achievement, that knowledge and skills matter, and that mere willing participation is
not the same thing as education. They affirm that discipline and rigor are the road to
achievement-if not always on a numerical scale, then by informed critical judgment
(MENC, 1994, p. 15).

Such statements are necessary in a culture that perniciously sees the arts as extracurricular

activities and not part of the core educational experience of every child.

Reimer has provided a philosophical foundation for assessment in the arts. Others, like

Lehman (2000), observe that, "Our attention to this topic is very uneven. It is probably fair to















Arpeggiated Chords (continued)


[Excerpt from Concerto in Bb Major by L. Boccherini/Grützmacher, 1st movement]
Allegro moderato (♩ = 90)

[Musical notation not reproduced]

Alfred Publishing


[Excerpt from Concerto in E Minor, Op. 85 by E. Elgar, 4th movement] (♩ = 112-120)
Allegro animato

[Musical notation not reproduced]

Novello and Company Limited









The Student Self-Assessment Profile

The premise for designing the Student Self-Assessment Profile is that better information

about a student's background, interests, and goals for study can result in more effective teaching.

Its value as a diagnostic tool is in revealing a student's years of study, previous repertoire studied,

and playing experience. The emphasis on identifying personal goals for studying the cello as

well as overall goals in music opens a window into a student's self awareness. Communication

of these goals to a teacher can affect the course of study. Allowing students' goals to influence

their education may result in their feeling more invested in the learning process. The outcome

may be more effective, goal-directed practice. Students are more likely to be motivated by goals

that they perceive as being self-initiated. Awareness of these goals is not necessarily derived through

conventional teaching methods; it comes from a dialogue between the teacher and student. The

Student Self-Assessment Profile can act as a catalyst for such a dialogue.

The personal goal for studying the cello most often mentioned was "teaching privately"

(Table 4-9). When a teacher knows that a student wants to teach the cello as a vocation, his role

becomes more of a mentor, exemplifying for the student the art of teaching. A greater role for

discussion during the lesson may ensue as the need for various approaches to problems becomes

apparent. Perhaps the most important thing a teacher can provide a student aspiring to teach is to

help them become reflective about their own playing, asking themselves why they do something

a certain way. Questions that ask why rather than how take precedence. Two students mentioned

college-level teaching as one of their personal goals. Providing student-teaching opportunities

for these students as well as opportunities to observe experienced teachers at work would be

invaluable.









3 METHODOLOGY

    Setting and Participants
    Data Collection
    The Written and Playing Test
    The Student Self-Assessment Profile
    Rationale for the Assessment Methodology
    Interjudge Reliability
    Data Analysis
    Content Validity

4 RESULTS

    Data Analysis
    Participants
    Part One: The Written Test
        Scoring the Written Test
        Results from the Written Test
        Regression Analysis of Written Test Items
    Part Two: The Playing Test
        Scoring the Playing Test
        Results from the Playing Test
        Comparison of Left Hand Technique and Bowing Stroke Scores
        Comparison of Playing Test Scores and Teacher-Ranking
        Interjudge Reliability of the Playing Test
    Part Three: The Student Self-Assessment Profile
        Repertoire Previously Studied
        How Interested Are You in Each of These Areas of Performance:
            Solo, Chamber, and Orchestral?
        Other Areas of Performance Interest?
        What Are Your Personal Goals for Study on the Cello?
        What Areas of Cello Technique Do You Feel You Need the Most Work On?
        Summarize Your Goals in Music and What You Need to Accomplish These Goals
    Summary of Results

5 DISCUSSION AND CONCLUSIONS

    Overview of the Study
    Review of the Results
    Observations from the Results of Administering the Diagnostic Test of Cello Technique
        The Written Test
        The Playing Test
        The Student Self-Assessment Profile
    Discussion of Research Questions









prerequisite for sight-reading ability. As a result, this section should be included in future

versions of this test.

The Written Test needs to be revised for undergraduate students in terms of difficulty

level. A greater range of scores would likely result if the present version of the test was

administered to high school students. In future versions, using actual passages from the cello

repertoire to evaluate a student's understanding of intervals, fingering, and fingerboard

geography would be in keeping with the testing philosophy of using situated cognition.

The Playing Test

Left Hand Technique (nine items) and Basic Bowing Strokes (eight items) were evenly

dispersed within the range of lowest to highest mean scores (Table 4-5). The choice in this study

to divide technique into left hand techniques and bowing techniques does not reflect how

integrated these two areas are in actual playing. This study's design did not isolate bow techniques from the

musical context in which they are found. If such a study was conducted, it might reveal that

some students excel in bowing techniques and others in left hand technique. These two areas of

technique are so intermeshed that it would be difficult to isolate them. Bowing serves literally to

amplify what the left hand does. Development of bowing skill, through practice on open strings

without using the left hand, is limited, and is usually, though not always, confined to initial

lessons.

The Playing Test's mean scores revealed that students scored highest on the detached

bowing stroke (M= 8.47), followed by legato bowing (M= 8.23), and arpeggios (M= 8.13).

Detached bowing is the most commonly used bow stroke; legato playing is nearly as common.

One might have expected to find Scales, Broken Thirds, and Arpeggios grouped together in the

same difficulty category. These three areas of technique are considered the core left hand









The Diagnostic Test of Cello Technique


For the Student:

The following is a series of excerpts primarily taken from the standard solo repertoire for

cello. The selections are chosen as representative of certain technical skills or bowing styles.

Part one (Left Hand Technique) contains passages which demonstrate: Scales, Arpeggios,

Broken Thirds, Double Stops, Position Changes, Arpeggiated Chords Across Three or Four

Strings, Thumb Position, Vibrato, and Intonation. Part two (Basic Bowing Strokes) features

passages which demonstrate essential bowing styles: Slurred Legato, Détaché/Accentuated

Détaché, Martelé, Portato, Staccato/Slurred Staccato, Spiccato/Flying Spiccato, Sautillé, and

Pizzicato. Short definitions of terms and explanations of how to execute bowings are provided.

Metronome markings are given as examples of typical performance speeds. Make it your goal to

perform these excerpts at the tempos indicated, but not at the expense of pitch or rhythmic

accuracy.

This test is designed to help determine a player's level of competency in specific areas of

technique. It is not a measure of sight-reading ability; what the examiner wishes to see and hear

is a demonstration of a particular skill, such as playing in thumb positions or legato bowing.

Focus on demonstrating this aspect of the excerpt to the best of your ability. If what is being

asked for is unclear to you, please ask the examiner for additional clarification.










Table 4-5. Mean Scores of Playing Test Items in Rank Order

Item Rank Order Mean Score



Detached 1 8.47
Slurred Legato 2 8.23
Arpeggios 3 8.13
Staccato 4 7.93
Vibrato 5 7.93
Portato 6 7.67
Position Changes 7 7.67
Scales 8 7.60
Arp. Chords 9 7.20
Sautille 10 7.13
Thumb Position 11 7.00
Broken Thirds 12 6.80
Martele 13 6.67
Double Stops 14 6.40
Spiccato 15 6.30
Intonation 16 6.20
Pizzicato 17 6.00

Note. Ratings ranged from 2 through 10.










Table K-3. Raw Scores of the Playing Test Items, Composite Means, and Standard Deviations

Student Scales Arpeggios Broken Double Position Arpeggiated
Thirds Stops Changes Chords


1 10 10 8 10 10 10
2 10 10 8 8 6 6
3 10 10 8 8 10 10
4 10 8 10 8 8 10
5 8 8 6 6 8 10
6 8 10 10 8 8 8
7 10 10 8 8 10 8
8 8 10 8 6 8 8
9 8 10 8 4 8 6
10 8 10 8 8 10 8
11 6 8 8 8 10 4
12 8 10 8 6 10 10
13 8 8 6 8 6 8
14 6 6 6 4 6 6
15 6 6 6 4 6 8
16 6 4 4 4 6 6
17 6 6 6 8 6 2
18 8 8 4 6 4 4
19 8 8 8 8 8 10
20 6 10 6 4 8 8
21 4 6 6 6 10 8
22 6 6 4 4 6 4
23 0 2 2 2 6 2
24 6 6 4 4 4 4
25 8 10 6 4 6 2
26 8 8 6 8 8 8
27 8 8 8 8 10 10
28 10 10 8 6 8 10
29 10 8 8 8 8 10
30 10 10 8 8 8 8
M 7.6 8.13 6.8 6.4 7.67 7.2
SD 2.19 2.10 1.86 1.99 1.83 2.66
(Table K-3 continues on next page)









ACKNOWLEDGMENTS

This work is dedicated to my dear wife Sarah, who has shown unwavering support and

encouragement to me in my studies. In every way she made possible the fulfillment of this goal,

which would have been unimaginable without her. To my children Audrey, Megan, and Eleanor I

owe a debt of gratitude for their patient understanding. My parents, Alice and Paul, offered

continued reassurance that I was up to this task, for which I am deeply grateful. Dr. Donald and

Cecelia Caton, my parents-in-law, spent many hours editing this manuscript, and I am very

grateful for their skill and encouragement. The professional editing expertise of Gail J. Ellyson

was invaluable.

The transition from music performance to academic scholarship has not always been

easy. Dr. Timothy S. Brophy's expertise in the field of music assessment and his enthusiasm for

the subject was truly the inspiration for what grew from a class paper into this dissertation. As

Chair of my committee he has provided the necessary guidance and direction leading to the

completion of this work. I consider myself very fortunate to have worked under Dr. Brophy's

mentorship.

As members of my supervisory committee, Dr. Art Jennings, Dr. Charles Hoffer, and Dr.

Joel Houston have generously offered their insight in refining this research. I thank them for

their service on my committee and for their support. Gratitude is also extended to Dr. Camille

Smith and Dr. David Wilson for serving as initial members of my committee.

A study of this magnitude would have been impossible without the commitment from

colleagues: Dr. Wesley Baldwin, Dr. Ross Harbough, Dr. Christopher Hutton, Dr. Robert

Jesselson, Dr. Kenneth Law, and Dr. Greg Sauer. Their willingness to allow their students to

participate in my research made this study possible. The support and insight from these master









due to their design, these rating scales are incapable of providing much in the way of interpretive

response. As Saunders and Holahan observe, "knowing the relative degree to which a judge

agrees or disagrees that, 'rhythms were accurate,' however, does not provide a specific indication

of performance capability. It is an evaluation of a judge's magnitude of agreement in reference

to a nonspecific and indeterminate performance standard and not a precise indication of

particular performance attainment" (p. 260).

Criteria-specific rating scales are capable of providing greater levels of diagnostic

feedback because they contain written descriptors of specific levels of performance capability. A

five-point criteria-specific rating scale was developed for this study to allow for greater

diagnostic input from judges. Aspects of left hand and bowing technique were evaluated using

both continuous (sequentially more exacting criteria) and additive (nonsequential performance

criteria) scales. Both continuous and additive scales require a judge to choose which of the several

written criteria most closely describe a student's performance. The additive scale was chosen

when a particular technique (such as playing scalar passages) has a number of nonsequential

features to be evaluated, such as evenness, good bow distribution, clean string crossings, and

smooth connections of positions.
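The continuous/additive distinction described above can be sketched in code. The descriptors, features, and point values below are hypothetical illustrations, not the actual criteria from this study's rating scale:

```python
# Sketch of the two rating-scale types discussed above.
# All descriptors and point values are invented for illustration.

# Continuous scale: criteria are sequentially more exacting, and the judge
# selects the single highest descriptor the performance satisfies.
continuous_scale = [
    (1, "Pitches frequently inaccurate"),
    (2, "Pitches accurate in first position only"),
    (3, "Pitches accurate with occasional lapses during shifts"),
    (4, "Pitches accurate across positions at a moderate tempo"),
    (5, "Pitches accurate across positions at the indicated tempo"),
]

def score_continuous(level_chosen):
    """The score is simply the level of the descriptor the judge selects."""
    return level_chosen

# Additive scale: criteria are nonsequential features, each independently
# present or absent; one point is added for each feature demonstrated.
additive_features = [
    "evenness",
    "good bow distribution",
    "clean string crossings",
    "smooth connection of positions",
]

def score_additive(features_observed):
    """Base point for attempting the passage, plus one per feature shown."""
    return 1 + sum(1 for f in additive_features if f in features_observed)

print(score_continuous(4))                                     # 4
print(score_additive({"evenness", "clean string crossings"}))  # 3
```

Both functions return values on the same five-point range, which is what allows the two scale types to be mixed within a single evaluation form.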

Along with the five-point criteria specific rating scale, the Playing Test evaluation form

(Appendix H) provided judges with an option of writing additional observations or comments

about each technique evaluated. While these data are not quantifiable for measurement

purposes, recording judges' immediate reactions in their own words to a student's

performance may capture an insight into some aspect of performance that the written criteria

overlooks. Because the primary purpose of this test is diagnostic, allowing room for

commentary is important.







APPENDIX G
THE PLAYING TEST

THE DIAGNOSTIC TEST OF CELLO TECHNIQUE

PLAYING TEST

Contents:

1. Left Hand Technique

Scales
Arpeggios
Broken Thirds
Double Stops
Position Changes
Arpeggiated Chords Across Three or Four Strings
Thumb Position
Vibrato
Intonation

2. Basic Bowing Strokes

Slurred Legato
Détaché/Accentuated Détaché
Martelé
Portato
Staccato/Slurred Staccato
Spiccato/Flying Spiccato
Sautillé
Pizzicato









CHAPTER 5
DISCUSSION AND CONCLUSIONS

This chapter presents a discussion of the results of administering the Diagnostic Test of

Cello Technique. Following a review of the purposes and procedures of this study, the findings

of this study are addressed in light of (a) the research questions posed, (b) a comparison of

results with similar studies, and (c) implications for string education. This chapter closes with

conclusions and recommended directions for future research.

Overview of the Study

The purpose of this study was to design, validate, and administer a diagnostic test of cello

technique for use with college-level students. Written and playing tests were designed, pilot

tested, and a validity study was undertaken. Thirty students from six different universities in the

southeastern United States were recruited to participate in this research. Each student completed

a written test, playing test, and a self-assessment profile. A criterion-based rating scale was

developed to evaluate the Playing Test performances. Two university-level teachers were

recruited to judge ten video-taped performances of students taking the Playing Test. Evaluations

from those judges were correlated with the primary researcher's to determine interjudge

reliability.
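The interjudge reliability procedure described above can be illustrated directly from the definition of Pearson's product-moment correlation. The ratings below are invented for illustration and are not data from this study:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented total Playing Test scores for the same ten performances,
# one list from the researcher and one from an independent judge.
researcher = [92, 85, 78, 88, 95, 70, 82, 90, 76, 84]
judge_a    = [90, 83, 80, 86, 94, 72, 80, 91, 74, 85]

print(round(pearson_r(researcher, judge_a), 2))  # ≈ 0.97
```

A coefficient near 1.0, as here, indicates that the two raters ordered and spaced the performances almost identically.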

Review of Results

The independent variables in this study were (a) year in school, (b) major/minor

distinction, (c) years of cello study, and (d) piano experience. Freshman classification emerged

as a significant predictor of Playing Test scores (p = .003) and total scores (p = .004). No effect

of major/minor distinction was found for the Written Test (R2= .001). Results were nearly

significant for the Playing Test (R2 = .104) and not significant for the combined Written and

Playing Tests (R2= .072). Years of cello study were not significant predictors of test results.









Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

CONSTRUCTION, VALIDATION, AND ADMINISTRATION OF A DIAGNOSTIC TEST
OF CELLO TECHNIQUE FOR UNDERGRADUATE CELLISTS

By

Timothy M. Mutschlecner

August 2007

Chair: Timothy S. Brophy
Major: Music Education

The purpose of this study was to construct, validate, and administer a diagnostic test of

cello technique for use with undergraduate cellists. The test consisted of three parts: (1) A

written test, which assessed a student's understanding of fingerboard geography, intervals, pitch

location, and note reading, (2) A playing test, which measured a student's technique through the

use of excerpts from the standard repertoire for cello, and (3) A self-assessment form, through

which students could describe their experience, areas of interest, and goals for study. A criteria-

specific rating scale with descriptive statements for each technique was designed to be used with

the playing test.

The written test, playing test, and self-assessment were pilot-tested with five

undergraduate students at a university in the southeast. A validation study was conducted to

determine to what extent teachers felt this test measured a student's technique. Nine cello

teachers on the college and preparatory level were asked to evaluate the test.

The test was administered to 30 undergraduate cellists at universities located in the

southeastern region of the United States. Strong interitem consistency was found for the written

test (r KR20 = .95). A high internal consistency of items from the playing test was found (α =










        To What Extent Can a Test of Cello Playing Measure a Student's Technique?
        To What Extent Can a Criteria-Specific Rating Scale Provide Indications
            of Specific Strengths and Weaknesses in a Student's Playing?
        Can a Written Test Demonstrate a Student's Understanding of Fingerboard
            Geography, and the Ability to Apply Music Theory to the Cello?
    Observations on the Playing Test from Participating Teachers
    Comparative Findings
        The Farnum String Scale
        Zdzinski and Barnes
    Conclusions

APPENDIX

A PILOT STUDY

B VALIDITY STUDY

C VALIDITY STUDY EVALUATION FORM

D INFORMED CONSENT LETTER

E THE WRITTEN TEST

F THE WRITTEN TEST EVALUATION FORM

G THE PLAYING TEST

H THE PLAYING TEST EVALUATION FORM

I REPERTOIRE USED IN THE PLAYING TEST

J THE STUDENT SELF-ASSESSMENT PROFILE

K DESCRIPTIVE STATISTICS FOR RAW DATA

LIST OF REFERENCES

BIOGRAPHICAL SKETCH









their decisions on a common set of evaluative dimensions rather than their own subjective

criticisms.

In the first phase of the study, 94 statements were generated through content analyses of

essays describing clarinet performance. These statements were also formulated through a list of

adjectives gathered from several studies which described music performance. Statements were

paired with seven a priori categories: tone, intonation, interpretation, technique, rhythm, tempo,

and general effect. The statements were then transformed into items phrased both positively and

negatively, items that could be used by instrumental music teachers to rate actual clarinet

performances. Examples from this item pool are: 1. The attacks and releases were clean. 2. The

clarinetist played with a natural tone. 3. The clarinetist played flat in the low register. The items

were randomly ordered and paired with a five point Likert scale, ranging from "highly agree" to

"highly disagree."

Factor analysis was performed on the evaluation of 100 clarinet performances using this

scale. Six factors were identified: interpretation, intonation, rhythm/continuity, tempo,

articulation, and tone, with five descriptive statements to be judged for each factor. The final

form of the Clarinet Performance Rating Scale (CPRS) was comprised of items chosen on the

basis of having high factor loadings on the factor they were selected to measure and low factor

loadings on other factors. The thirty statements chosen were grouped by factors and paired with a

five-point Likert scale. Ten taped performances were randomly selected and rated using the

CPRS by graduate instrumental music education students. For the purpose of determining

interjudge reliability, judges were divided into groups of 9, 11 and 12 judges. Item ratings from

these judges were again factor analyzed to determine structure stability.









Twenty-eight items were selected for subscales of the String Performance Rating Scale

(SPRS) based on factor loadings. The reliability of the overall SPRS was consistently very high.

Reliability varied from .873 to .936 for each judging panel using Hoyt's analysis of variance

procedure. In two studies conducted to establish criterion related validity, zero order correlations

ranged from .605 to .766 between the SPRS and two other rating scales.
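Hoyt's analysis-of-variance procedure yields a reliability coefficient algebraically equivalent to Cronbach's alpha, so panel reliabilities of this kind can be reproduced from an items-by-examinees score matrix. A minimal sketch with invented ratings:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for an items-by-examinees score matrix.

    item_scores[i][j] is the score examinee j received on item i.
    Hoyt's (1941) ANOVA-based coefficient is algebraically equivalent.
    """
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    n_examinees = len(item_scores[0])
    totals = [sum(item[j] for item in item_scores) for j in range(n_examinees)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Invented ratings: three items scored for five examinees.
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```

Alpha rises when the items rank examinees consistently, which is the sense in which the SPRS panels were "reliable."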

The researchers concluded that string performance measurement may be improved

through the use of more specific criteria, similar to those used in their study (Zdzinski, p. 254).

Such tools may aid the educator/researcher by providing highly specific factors to listen and

watch for when analyzing student performances.

Summary: Implications for the Present Study

Studies carried out in the measurement of instrumental music performance have

increased in reliability, validity, and specificity since the first standardized test for band

instruments, the Watkins-Farnum Performance Scale of 1954. Surprisingly, along with the

Farnum String Scale, this is still the only readily available published performance measure. One

can conjecture that the use of teacher-made tests accounts for this, but the more plausible

explanation is music teachers' distrust of any test that would claim to be capable of measuring a

subject as complex and multifaceted as music performance.

The use of descriptive statements that were found through factor analysis to have

commonly accepted meanings has been a significant development in increasing content validity

in performance measurement. As researchers applied the techniques pioneered by Abeles

(1973), they discovered that factors identified for one instrument or group of instruments did not

necessarily transfer directly to another instrumental medium. Statements about tonal production












APPENDIX I
REPERTOIRE USED IN THE PLAYING TEST


Composer                      Piece                                          Technique/Bow Stroke

Bach, J.S.                    Arioso (from Cantata 156)                      Intonation
                              Sonata in G Minor, No. 3, 3rd mvt.             Staccato
                              Suite No. 1 in G Major, Allemande              Slurred Legato
                              Suite No. 3 in C Major, Allemande              Double Stops
                              Suite No. 5 in C Minor, Sarabande              Intonation

Boccherini, L./Grützmacher    Concerto in Bb Major, 1st mvt.                 Scales
                              Concerto in Bb Major, 1st mvt.                 Arpeggiated Chords
                              Concerto in Bb Major, 1st mvt.                 Thumb Position
                              Concerto in Bb Major, 3rd mvt.                 Spiccato

Beethoven, L. van             Sonata in G Minor, Op. 5, No. 2, 3rd mvt.      Spiccato
                              Sonata Op. 69 in A Major, 1st mvt.             Scales
                              Sonata Op. 69 in A Major, 3rd mvt.             Thumb Position
                              Sonata in C Major, Op. 102, No. 1, 3rd mvt.    Accentuated Détaché

Brahms, J.                    Sonata No. 1 in E Minor, Op. 38, 1st mvt.      Position Changes
                              Sonata No. 1 in E Minor, Op. 38, 1st mvt.      Portato
                              Sonata No. 1 in E Minor, Op. 38, 2nd mvt.      Slurred Staccato
                              Sonata No. 2 in F Major, Op. 99, 2nd mvt.      Pizzicato

Breval, J. B.                 Concerto No. 2 in D Major, Rondo               Thumb Position









Results of the Written Examination showed that the students had no difficulty with the

questions asked. What errors there were amounted to careless mistakes. This suggests that the

Written Examination did not discriminate well for cello students at this level. These results led

the researcher to increase the difficulty level of the present study.

The rating instrument used for the Playing Examination was a five-point Likert scale

which included brief descriptions as to what each performance level represented. Student

performances of the Playing Examination ranged between 74.7% and 93.3% of a perfect score.

The student who had the weakest score was a music education major. Students in general did

slightly better in the Basic Bowing Strokes section of the exam than in the Left Hand Technique

section (91% compared to 86%). This was not surprising: The musical excerpts used to

demonstrate left hand technique were of necessity more difficult and harder to sight-read.

The lowest combined score was for the portato bowing. This was defined in the Playing

Examination as:

A series of broad strokes played in one bow with a smooth slightly separated sound
between each note. The bow does not stop as in the slurred staccato. Each note is to be
clearly enunciated with a slight pressure or 'nudge' from the index finger and upper arm.

Despite this extended definition students were unable to consistently demonstrate this bowing.

The evidence suggested that this stroke is not being taught or discussed to the same extent as

other bowings.

The next three lowest combined scores after portato bowing were for position changes,

string crossings, and broken thirds. Well-performed position changes and string crossings may

be part of the identifying characteristics of an advanced player. The researcher suspects that

broken thirds are not practiced much and not emphasized by teachers, thus explaining the lower









combined Written and Playing Tests (p = .15). A student's choice to major in cello does not

appear to be an indication of his or her technical level according to this test.

The 30 cellists participating in this research had studied the cello between five and

sixteen years (Table 4-2). A regression was conducted with years of cello study as a predictor of

the scores. For the Written Test (B = .037, SEB = .069, t = .53) and the Playing Test (B = .044,

SEB = .024, t = 1.82), years of cello playing was not found to be a significant predictor (p = .60;

p = .08). A lack of relationship between years of cello playing and scores may reflect the wide

range of students' innate ability and developmental rate. With the relatively small sample size,

outliers may also have skewed the results. Efficient use of practice time is an acquired skill; it

is possible for students with fewer years of experience to surpass those that, while having played

longer, are ineffective in their practice.
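The regression reported above can be sketched as a one-predictor least-squares fit, where B is the unstandardized slope, SEB its standard error, and t their ratio. The years and scores below are invented, not the study's data:

```python
import math

def simple_ols(xs, ys):
    """Unstandardized slope B, its standard error SEB, and t = B / SEB
    for a one-predictor least-squares regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    residual_ss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se_b = math.sqrt(residual_ss / (n - 2)) / math.sqrt(sxx)
    return b, se_b, b / se_b

# Invented data: years of cello study vs. Playing Test score.
years  = [5, 6, 7, 8, 9, 10, 12, 14, 15, 16]
scores = [70, 78, 74, 85, 80, 88, 79, 92, 83, 90]

b, se_b, t = simple_ols(years, scores)
print(f"B = {b:.3f}, SEB = {se_b:.3f}, t = {t:.2f}")
```

A small t relative to its degrees of freedom, as in the study's result, means the slope cannot be distinguished from zero.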

Though no data on actual numbers of years of piano experience were collected, exactly

one-half of the participants reported having piano experience, and one-half reported having no

piano experience (n = 15 each). A t-test of the means for Written and Playing Test scores was

conducted based on the participants' self-reported piano experience. Both tests were significant.

Students reporting piano experience scored significantly higher on the Playing Test (M= 91.93,

SD = 3.08), t(30) = 115.55, p = .000, than those without piano experience (M= 78.47, SD =

12.71), t(30) = 23.92, p = .000. Students reporting piano experience also scored significantly

higher on the Written Test (M= 129.73, SD = 20.63), t(30) = 24.35, p = .000, than those without

piano experience (M= 116.93, SD = 28.28), t(30) = 16.01, p = .000.
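An independent-samples comparison of this kind can be sketched with Welch's unequal-variance t statistic (a t-test variant that does not assume equal group variances). The scores below are invented, not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic for groups with unequal variances."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Invented Playing Test scores for two groups of 15 students each.
piano    = [92, 90, 95, 88, 93, 91, 94, 89, 96, 90, 92, 87, 93, 91, 88]
no_piano = [80, 72, 85, 60, 78, 90, 65, 82, 75, 88, 70, 83, 68, 79, 74]

print(round(welch_t(piano, no_piano), 2))
```

With SciPy available, `scipy.stats.ttest_ind(piano, no_piano, equal_var=False)` performs the same computation and also returns a p-value.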

Because significant differences were found in these groups based on reported piano

experience, a regression was conducted with piano experience as a predictor of the scores. For

the Written Test (B = -2.00, SEB = 4.21, t = -.48), piano experience was not found to be a









CHAPTER 2
REVIEW OF LITERATURE

Introduction

Literature dealing with assessment of musical performance tends to fall into two

categories: summative assessments focus on the value of a finished project; formative

assessments focus on data gathered during the process of reaching a goal or outcome (Colwell,

2006). A studio teacher's ongoing process of diagnosis, correction, and reevaluation is an

example of formative assessment in music. A student recital or jury exemplifies summative

assessment in music performance. The diagnostic test of cello technique designed for this study

is a formative assessment, in that it measures a student's performance ability at a certain point on

a continuum that leads to mastery.

This literature review is divided into three parts. Part One examines the philosophical

foundation for this study. Part Two explores assessment theory and provides the theoretical

bases for this research. Part Three reviews research in assessment with particular emphasis on

performance.

Part One: Philosophical Rationales

A philosophic rationale is the bedrock upon which any scholarly inquiry is made. Reimer

(2003) succinctly describes its importance:

The "Why" questions-the questions addressed by philosophy-are the starting point for
all conceptualizations of education, whether in music, other subjects, or education as a
whole. Answers to these questions-questions of value-provide the purposes of
education, purposes dependent on what people in a culture regard to be so important that
education must focus on them (p. 242).

These questions must be asked not only of a given educational curriculum but also of the

means chosen for evaluation of material taught. Simply asking ourselves, "How do we

determine what we know?" brings our educational materials and pedagogy into greater focus.









unique to the Abeles measure. Abeles' measure also isolated tone quality and intonation as

independent items. The idiomatic qualities of specific instruments or families of instruments

may result in the use of unique factors in performance measurement.

Interjudge reliability for the ETPRS was found to be between .94 and .98, and individual

factor reliabilities ranged from .89 to .99. Criterion-related validity was determined by

correlating ETPRS scores with global ratings based on magnitude estimation: (.50 to .99).

ETPRS scores were also correlated with a MENC-constructed wind instrument adjudication

ballot resulting in validity estimates of .82 to .99.

The Development of a Criteria-Specific Rating Scale

T. Clark Saunders & John M. Holahan

Saunders and Holahan (1997) investigated the suitability of criterion-specific rating

scales in the selection of high school students for participation in an honors ensemble. Criteria-

specific rating scales differ from traditionally used measurement tools in that they include

written descriptors of specific levels of performance capability. Judges are asked to indicate

which of several written criteria most closely describes the perceived level of performance

ability. They are not required to express their like or dislike of a performance or decide if the

performance meets an indeterminate standard.

In this study, criterion-specific rating scales were used by 36 judges in evaluating all 926

students seeking selection to the Connecticut All-State Band. These students were between

grades 9-12 and enrolled in public and private high schools throughout the state of Connecticut.

Only students who performed with woodwind and brass instruments were examined in this

study, because the judges were able to use the same evaluation form. The 36 adult judges

recruited in this study consisted of elementary, secondary, and college-level instrumental











Debussy, C.         Sonata in D Minor, Prologue                    Portato
Dotzauer            Etude Op. 20, No. 13                           Arpeggios
Dvorak, A.          Concerto in B Minor, Op. 104, 1st mvt.         Vibrato
                    Concerto in B Minor, Op. 104, 2nd mvt.         Double Stops
Eccles, H.          Sonata in G Minor, 1st mvt.                    Vibrato
                    Sonata in G Minor, 2nd mvt.                    Staccato
Elgar, E.           Concerto in E Minor, Op. 85, 2nd mvt.          Pizzicato
                    Concerto in E Minor, Op. 85, 2nd mvt.          Sautille
                    Concerto in E Minor, Op. 85, 4th mvt.          Arpeggiated Chords
Faure, G.           Elegy, Op. 24                                  Scales
                    Elegy, Op. 24                                  Vibrato
                    Elegy, Op. 24                                  Intonation
Franck, C.          Sonata in A Major, 1st mvt.                    Slurred Legato
Frescobaldi, G.     Toccata                                        Martele
Goens, D. van       Scherzo, Op. 12                                Sautille
                    Scherzo, Op. 12                                Thumb Position
Golterman, G.       Concerto in G Major, Op. 65, No. 4, 3rd mvt.   Position Changes
                    Concerto in G Major, Op. 65, No. 4, 3rd mvt.   Arpeggiated Chords
Haydn, J.           Concerto in C Major, Hob. VIIb:1, 3rd mvt.     Double Stops
                    Concerto in D Major, Op. 101, 1st mvt.         Broken Thirds
Jensen, H. J.       The Ivan Galamian Scale System for
                    Violoncello                                    Broken Thirds










APPENDIX J
THE STUDENT SELF-ASSESSMENT PROFILE


Name


Status (year/college)


Major


Minor


Years of study on the Cello


Other instrument(s) played


Repertoire previously studied:

Methods/Etudes


Solo Literature




Orchestral Experience:



How interested are you in each of these areas of performance?

I am interested in solo performance.
o Strongly agree o Agree o Disagree o Strongly disagree

I am interested in chamber music performance.
o Strongly agree o Agree o Disagree o Strongly disagree

I am interested in orchestral performance.
o Strongly agree o Agree o Disagree o Strongly disagree

Other areas of performance interest?


What are your personal goals for studying the cello?


What areas of cello technique do you feel you need the most work on?









specific directions for scoring performance aspects about which most experienced teachers could

agree regarding correctness" (p. 176). These tests established a precedent for providing explicit

detail as to what constitutes an error in performance.

Stephen F. Zdzinski & Gail V. Barnes

Zdzinski and Barnes demonstrated that it was possible to achieve high reliability and

criteria-related validity in assessing string instrument performances. In their 2002 study, they

initially generated 90 suitable statements gathered from essays, statements, and previously

constructed rating scales. These statements were sorted into a priori categories that were

determined by previous research. As with the Abeles study, a Likert scale was paired with these

items. Fifty judges were used to assess one hundred recorded string performances at the middle

school through high school level. Results from the initial item pool were factor-analyzed using a

varimax rotation. Five factors to assess string performance were identified: interpretation/musical
effect, articulation/tone, intonation, rhythm/tempo, and vibrato. These were found to be somewhat
different from those of Abeles (1973) and Bergee (1987) in their scale-construction studies of
woodwind and brass performance. This is not surprising, considering the

unique challenges of string instrument and woodwind instrument technique. String instrument

vibrato had items that were idiomatic for the instrument. Likewise, articulation and tone quality

are largely controlled by the right (bowing) side in string performance and were loaded onto a

single factor, as contrasted with wind instrument assessment scales. The authors found that

factors identified to assess string instrument, wind instrument, and vocal performance are

distinct, and related to unique technical challenges specific to the instrument/voice (Zdzinski &
Barnes, 2002, p. 253).









Table 4-3. Summary of Regression Analysis for Piano Experience as a Predictor of Written,
Playing, and Total Test Scores (N = 30)

Test Section                B       SE B      β

Written Test Scores -2.00 4.21 -.48

Playing Test Scores -19.20 8.63 -2.23*

Total Combined Score -21.20 11.74 -1.81

Note. R2 = .008 for Written Test Scores; R2 = .15 for Playing Test Scores; R2 = .10 for Total Test Scores.
* p <.05
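Each row of Table 4-3 is an ordinary least squares regression with a single binary predictor (piano experience coded 1/0), in which the coefficient B is the difference between group means and R2 is the share of variance explained. A minimal sketch in Python, using hypothetical score vectors rather than the study's raw data:

```python
# Minimal OLS with one binary predictor (1 = piano experience, 0 = none).
# The score vectors below are hypothetical, not the study's raw data.

def simple_ols(x, y):
    """Return slope B, intercept, and R^2 for y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                # unstandardized coefficient B
    a = my - b * mx              # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return b, a, 1 - ss_res / ss_tot

piano = [1, 1, 1, 0, 0, 0]            # hypothetical group coding
scores = [92, 90, 94, 78, 80, 76]     # hypothetical Playing Test scores
B, intercept, r2 = simple_ols(piano, scores)
# With a 1/0 predictor, B equals the difference between the group means (14.0 here)
```

With the predictor coded this way, a significant B simply restates the group difference tested earlier, while R2 expresses it as a proportion of score variance.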















Pizzicato

Execution: Use of thumb for "strummed" chordal passages in the Elgar excerpt.

[Excerpt from Concerto in E Minor, Op. 85 by E. Elgar, 2nd movement]

RECIT. freely, ♩ = 102-140
[Musical notation not reproduced. © Novello and Company Limited]


Brahms SONATA FOR PIANO AND VIOLONCELLO, OP. 99, NO. 2
© 1973 by Wiener Urtext Edition
All Rights Reserved
Used by permission of
European American Music Distributors LLC
U.S. and Canadian agent for Wiener Urtext Edition

[Excerpt from Sonata in F Major, Op. 99 by J. Brahms, 2nd movement]

Adagio affettuoso, ♩ = 90-110
[Musical notation not reproduced]












Broken Thirds


[Excerpt from The Ivan Galamian Scale System for Violoncello, arr. by Hans Jorgen
Jensen]

♩ = 60
[Musical notation not reproduced. CECS Publishing]

[Excerpt from Concerto in D Major, Op. 101, by J. Haydn, 1st movement]

Allegro moderato, ♩ = 106
[Musical notation not reproduced]

Haydn/Gendron CONCERTO FOR VIOLONCELLO IN D MAJOR, OP. 101
© 1954 Schott Music
All rights reserved
Used by permission of European American Music Distributors LLC
Sole U.S. and Canadian agent for Schott Music

B-Flat Major
[Musical notation not reproduced]







intermediate- to advanced-level student to play through these excerpts (or even just one or
two excerpts under each technical element), with or without prior preparation, the teacher
should be able to quickly (in about thirty minutes or less) identify the student's strengths
and weaknesses in any of the essential aspects of cello technique. (Personal
communication, May 23rd, 2007)

Another participating teacher confirmed the diagnostic worth of the test and its usefulness

in setting goals:

I feel the diagnostic test designed by Tim Mutschlecner is a valuable tool for
evaluating students and charting their course of study. Students come to a
teacher's studio with such a wide diversity of skill and backgrounds that
any aid in assessing their abilities is welcome. Thank you for your original and
worthwhile test. (Personal communication, May 10th, 2007).

This teacher addresses what the test results have shown: students enter college with a wide range

of experience and preexisting abilities. One of the student participants, a freshman, scored

higher on the Playing Test than five out of six seniors. This exemplifies why the test has

questionable value as a norm-referenced measure. When ranking students, one teacher observed

that comparing students was like comparing "apples and oranges." The playing test provides a

set of criteria that can supplement a teacher's performance standards and expectations.

Comparative Findings

The Farnum String Scale

In the discussion of the Farnum String Scale (FSS) in Chapter Two, it was observed that the

test requires the student to play a series of musical examples that increase in difficulty. This

approach was adopted in the Diagnostic Test of Cello Technique (DTCT). Unlike the FSS,

musical examples were taken directly from actual music written for the cello. The rationale for

this was that using real music increased the test's capacity for authentic assessment; students

would be playing the actual passages where the techniques in question would be found. The

downside to this was the potential for distractors, aspects of the excerpts that would mask a












Scales (continued)

[Excerpt from Sonata Op. 69, in A Major by L. van Beethoven, 1st movement]

Allegro, ma non tanto, ♩ = 65
[Musical notation not reproduced. © G. Schirmer Inc.]

[Excerpt from Concerto in B-flat Major by L. Boccherini/Grutzmacher, 1st movement]

Allegro moderato, ♩ = 58
[Musical notation not reproduced. © Alfred Publishing]

[Excerpt from Elegy, Op. 24 by G. Faure]

Molto Adagio, ♩ = 84
[Musical notation not reproduced. © Alfred Publishing]










Détaché

Execution: An active, yet smooth bow stroke with no visible or audible accent.

[Excerpt from Sonata in E Minor, Op. 1, No. 2, by B. Marcello, 2nd movement]

Allegro, ♩ = 90
[Musical notation not reproduced. © Alfred Publishing]

Accentuated Détaché

Execution: Bow changes are not concealed, but emphasized with accents. One hears the
articulation of the bow changes.

[Excerpt from Sonata in C Major, Op. 102, No. 1 by L. van Beethoven, 4th movement]

Allegro vivace, ♩ = 110
[Musical notation not reproduced. © G. Schirmer Inc.]








Interval Identification


Identify each interval using the following abbreviations: m 2nd, M 2nd,
m 3rd, M 3rd, P 4th, Aug. 4th/Dim. 5th, P 5th, m 6th, M 6th, m 7th, M 7th, P 8th.
(m = minor, M = Major, Aug. = Augmented, Dim. = Diminished, P = Perfect)















Position Changes

[Excerpt from Concerto in G Major, Op. 65, No. 4 by G. Golterman, 3rd movement]


♩ = 152
[Musical notation not reproduced. © Alfred Publishing]

[Excerpt from The Swan, by C. Saint-Saens]

[Musical notation not reproduced. © Alfred Publishing]









scores in this area. Results from the Playing Examination indicate the need to increase the

difficulty level.

The results of the Student Self-Assessment Profile included the following responses:

How interested are you in each of these areas of performance?

I am interested in solo performance.
1 Strongly agree 1 Agree 3 Disagree 0 Strongly disagree

I am interested in chamber music performance.
4 Strongly agree 1 Agree 0 Disagree 0 Strongly disagree

I am interested in orchestral performance.
3 Strongly agree 2 Agree 0 Disagree 0 Strongly disagree


What was most unexpected was the number of students who chose "disagree" for the

statement: I am interested in solo performance. One would have expected performance majors to

at least agree with this statement, if not strongly agree. They may have been influenced by the

choice of the word, "performance," and were thinking about whether they enjoyed the

experience of solo playing, which, by connotation, meant auditions, juries, and degree recitals.

These students may have been reading "solo" as meaning "solo career," and responded likewise.

In the Student Self-Assessment Profile students responded to the question, "What are

your personal goals for studying the cello," in a variety of ways such as:

(a) Would like to have a chamber group and coach chamber groups.
(b) To play anything that is set before me-I don't want to have limits in terms of
technique. To be able to convey to the audience what I feel when I play.
(c) Perfect intonation before I graduate, attempt to win the concerto competition.
(d) To get an orchestra gig, have a quartet/quintet, and teach students on the side.
(e) I want to be able to use the cello in all sorts of ways including orchestral,
chamber, rock & roll, and studio recording.

These answers are very specific and focused. A teacher, informed about these goals, could

modify teaching to address some of these goals. For example, students who have expressed an

interest in teaching would find discussions on how one might teach a particular skill very










APPENDIX K
DESCRIPTIVE STATISTICS FOR RAW DATA


Table K-1. Raw Scores of the Written Test Items, and Composite Means and Standard
Deviations


Student Fingerboard
Geography


1 11
2 11
3 11
4 11
5 11
6 11
7 5
8 11
9 11
10 11
11 11
12 11
13 11
14 11
15 11
16 11
17 11
18 11
19 11
20 11
21 11
22 11
23 11
24 11
25 11
26 11
27 11
28 11
29 11
30 11
M 10.80
SD 1.10


Interval
Id.


7
7
7
7
8
7
3
6
6
8
5
4
8
5
7
3
7
8
8
8
3
6
5
6
5
8
8
8
6
8
6.40
1.63


Pitch
Location


30
30
32
31
32
0
16
32
29
29
31
12
22
31
29
2
32
32
32
31
30
14
28
32
27
32
31
32
29
31
26.70
8.82


Single-Pos.
Fingering


32
23
32
29.80
4.11


Note Total
Id. Score


12
12
12
11.57
0.90


92
87
93
91
95
59
59
93
90
92
91
68
65
90
86
59
88
94
95
92
85
73
86
87
86
93
94
95
81
94
85.20
11.38

















Sautille (continued)

[Excerpt from Scherzo, Op. 12 by D. van Goens]


Vivace molto e con spirito, ♩ = 140-160
[Musical notation not reproduced. © Alfred Publishing]









[Excerpt from Concerto in E Minor, Op. 85, by E. Elgar, 2nd movement]

Allegro molto
[Musical notation not reproduced. © Novello and Company Limited]










Table 4-2. Years of Study, Frequency, and Test Scores


Years of Study Frequency


5
6
7
8
9
9.5
10
11
11.5
12
13
16


Written Test Mean Score


83
75.5
93
95
93.5
87.71
68
81.31
93
87


Playing Test Mean Score


114
108
101.5
126
142
140
141.1
140
108.7
156
104









Piano experience was shown to have a significant effect on the Playing Test scores (p = .034).
Students with piano experience scored 14% higher on the Written Test and 7% higher on the
Playing Test than those without piano experience. The reliability of the Playing Test was high, as
shown by coefficient alpha (α = .92). Correlation coefficients obtained between the primary
researcher and the two reliability researchers were positive and strong (Judge A, r = .92; Judge
B, r = .95), suggesting that the criteria-specific rating scale designed for this study was
effective.
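The two statistics in this paragraph are standard: Pearson's r between pairs of raters, and coefficient (Cronbach's) alpha across the playing-test items. A sketch in plain Python, using hypothetical rating vectors rather than the study's data:

```python
# Sketch of the two reliability statistics reported above.
# The rating vectors are hypothetical, not the study's data.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cronbach_alpha(items):
    """items: one list of scores per test item, aligned by student."""
    k = len(items)
    def var(v):  # sample variance
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    totals = [sum(student) for student in zip(*items)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

researcher = [8, 10, 6, 9, 7]   # hypothetical ratings of five performances
judge_a    = [7, 10, 5, 9, 8]
r_ab = pearson_r(researcher, judge_a)
```

Alpha rises when the playing-test items vary together across students, which is why a high value supports treating the item scores as one scale.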

Observations from the Results of Administering the Diagnostic Test of Cello Technique

The Written Test

Future versions of the Written Test designed for college students should eliminate the

Fingerboard Geography section, as only one student made errors in filling out this section. This

section should be included for a high school version of the test; the likelihood is that not all

students at this level would be clear about the location of pitches on the fingerboard.

The Interval Identification section as a whole had the highest average difficulty level of

any section of the Written Test based on item analysis. In this section, item six (a major sixth

across two strings) had the highest difficulty level of any item on the test (.70). This item,

however, did not discriminate well between high-scoring and low-scoring students (.38). On this

item students most likely erred by not keeping in mind that on the cello, the interval of two notes

lying directly across from each other on adjacent strings is always a perfect fifth. Adding a

whole step to a perfect fifth results in the interval of a major sixth. This is an example of

something universally known by undergraduate cello students but not necessarily visualized by

them on the fingerboard. This suggests that some students were either unclear about interval

designations or did not think intervallically when playing the cello. It is the researcher's









Table 4-11. Goals in Music and Means of Accomplishing Them


Six Months:

Catch up to my peers.
To shift easily.
Work strictly on technique, not worrying about pieces or recitals.
Practice more musically than technically.
Have lessons with other teachers.
Improve jazz vocabulary.

One Year:

Keep my scholarships.
To have perfect intonation.
Become an effective music educator (lifelong).
Resolve all tension issues; slow, loose practice-making it a habit.
Increase in difficulty of music.
Work on awareness of bowing choices.
Practice.

Two Years:

To be able to support myself solely through playing and teaching.
I hope to memorize a full concerto and feel comfortable performing.
Much practice; memorization and performance practice will be needed.
Graduate, and find a graduate school with a fabulous teacher.

Four Years:

To get past the prelims in an orchestral audition.
To graduate, get a job as a music therapist, and join the community of a professional
orchestra.
Play recreationally, not as a career.

Ten Years:

To be a guest artist at a major music festival.
Be teaching at a university with a Ph.D. in music.
Be employed in a high school as a music teacher, but still make time to perform and
possibly give private lessons.
Able to teach other cellists.
Gigging professionally.
Be a financially stable musician.









Goals such as orchestral or chamber music performance could have a direct effect on the

program of study if the teacher agreed that these objectives were appropriate and attainable. A

student who has expressed a sincere goal of playing professionally in a major orchestra deserves

to know both the playing standards required and the fierce competition involved. A serious

attempt to address some of the personal goals mentioned here would challenge even the most

veteran of teachers. How do you help a student improve concentration? Become a fluid

improviser? Convey their interpretation of music to others? Addressing these goals as a teacher

means taking risks, varying one's approach, and being flexible.

Over one third of the students who filled out the Student Self-Assessment Profile listed

"bow stroke" as a priority for technical study (Table 4-10). They are in good company; string
musicians agree that true artistry lies in a player's mastery of the bow. Musical issues such as
phrasing, dynamics, and timing are the bow's domain. A cellist's approach to bowing largely
determines tone and articulation. These qualities, along with vibrato, are the distinguishing
characteristics of an individual cellist's sound.

After "bow stroke" the most commonly noted area of technique addressed was relaxation

or lowering body tension. This is an aspect of technique that musicians have in common with

athletes. Gordon Epperson summarized the observations of many teachers:

What is the chief impediment to beauty of sound, secure intonation, and technical
dexterity? I should answer, without hesitation, excess tension. Sometimes
tension alone is blamed; but surely, we can't make a move without some
degree of tension. It's the excess we must watch out for. (Epperson, 2004, p. 8).

Excessive tension may not always be readily apparent; teachers may not realize students are

struggling with this area unless the issue is raised. Students who mention excessive tension

while playing as a major concern should be directed to a specialist in Alexander Technique.










Table 4-1. Summary of Regression Analysis for Year in School as a Predictor of Written,
Playing, and Total Test Scores (N = 30)

Score              B        SE B      β

Written Test
  Freshmen         .0069    .003       .88
  Sophomore        .0060    .0032      .82
  Junior           .0085    .0028    -1.40
  Senior           .0031    .0030    -0.47

Playing Test
  Freshmen         .010     .0027     3.30*
  Sophomore        .0058    .0029    -1.83
  Junior           .0032    .0024    -1.16
  Senior           .0014    .0027    -0.46

Total Score
  Freshman         .0079    .009      3.18*
  Sophomore        .0074    .0038    -1.32
  Junior           .0060    .0040    -1.67
  Senior           .0067    .0009    -0.34

Note. Written Test Scores: R2 = .027 Freshmen; R2 = .023 Sophomore; R2 = .065 Junior; R2 = .008
Senior. Playing Test Scores: R2 = .280 Freshmen; R2 = .107 Sophomore; R2 = .046 Junior; R2 = .008
Senior. Total Test Scores: R2 = .265 Freshmen; R2 = .058 Sophomore; R2 = .091 Junior; R2 = .004
Senior.
* p < .05









audiovisual source or a viewed live performance, the possible effects of visual influence on the

ratings need to be considered.

Concerns about content validity were uppermost in mind when choosing the excerpts for

the Diagnostic Test of Cello Technique. In the following chapter the development and validation

of these materials is discussed, as well as the measurement used to quantify the data from the

written and playing portions of the test.










Kabalevsky, D. B.      Concerto in G Minor, Op. 49, 1st mvt.           Pizzicato
Lalo, E.               Concerto in D Minor, 2nd mvt.                   Slurred Legato
Locatelli, P.          Sonata in D Major, 1st mvt.                     Flying Spiccato
Marcello, B.           Sonata in E Minor, Op. 1, No. 2, 2nd mvt.       Detache
                       Sonata in E Minor, Op. 1, No. 2, 4th mvt.       Slurred Staccato
Popper, D.             Gavotte in D Major                              Flying Spiccato
                       Hungarian Rhapsody, Op. 68                      Sautille
Rimsky-Korsakov, N.    Scheherazade, Op. 35, 1st mvt.                  Arpeggiated Chords
Saint-Saens, C.        Allegro Appassionato, Op. 43                    Flying Spiccato
                       The Swan                                        Position Changes
Sammartini, G. B.      Sonata in G Major, 1st mvt.                     Arpeggios
                       Sonata in G Major, 1st mvt.                     Double Stops
Schroder, C.           Etude, Op. 44, No. 5                            Sautille
Shostakovich, D.       Sonata in D Minor, Op. 40, 1st mvt.             Intonation
Squire, W. H.          Danse Rustique, Op. 20, No. 5                   Scales
Starker, J.            An Organized Method of String Playing (p. 33)   Position Changes
Schumann, R.           Fantasy Pieces, Op. 73, 1st mvt.                Arpeggios
Tchaikovsky, P. I.     Chanson Triste, Op. 40, No. 2                   Vibrato
Vivaldi, A.            Concerto in G Minor for 2 Cellos, RV 531,
                       1st mvt.                                        Scales
                       Sonata in E Minor, No. 5, 2nd mvt.              Martele
















Spiccato


Execution: A controlled, bounced-bow stroke; the bow begins above the string and is thrown on the
string with a swinging follow-through arm motion. The bow describes an inverted arch one or two
inches above the string. As the bow bounces up after striking the string it is held by a light but firm
grip. Slower spiccato is played near the frog with the entire arm; faster, lighter spiccato is played
further out on the bow with the fingers and wrist initiating the movement.

[Excerpt from Sonata in G Minor, Op. 5, No. 2 by L. van Beethoven, 3rd movement]

Rondo: Allegro, ♩ = 69
[Musical notation not reproduced. © G. Schirmer Inc.]

[Excerpt from Concerto in Bb Major, by L. Boccherini, 3rd movement]

Rondo: Allegro, ♩ = 144
[Musical notation not reproduced. © Alfred Publishing]








significant predictor. In the Playing Test, (B = -19.20, SE B = 8.63, β = -2.23) piano experience

emerged as a significant predictor (p < .05). The R2 value of .15 indicates that piano experience

accounted for 15 % of the variance in the Playing Test scores. Results are shown in Table 4-3.

Regression Analysis of Written Test Items

In the Interval Identification section of the Written Test, the mean score for those with

piano experience was 7.07 out of 8 possible points as compared with a mean of 5.73 for those

without experience. Through regression analysis piano experience was shown to be a significant

predictor (p = .002) of the Interval Identification scores (B = 1.56, SE B = .41, β = 3.81). The R2

value of .528 indicates that piano experience accounted for 53% of the variance in the Interval

Identification scores. This is a highly significant figure. Students with piano experience clearly

are better at thinking intervallically on the cello.

For the Pitch Location and Fingering section of the test, the means were 31.13 out of 32

possible points for those with piano experience compared with 22.26 for those without.

Regression analysis revealed that piano experience was nearly significant as a predictor of

these scores (p = .061). Piano experience again emerged as a significant predictor (p = .002) of

the Single-Position Fingering scores (B = 1.80, SE B = .47, β = 3.83). The R2 value of .53

indicates that piano experience accounted for 53% of the variance in the Single-Position

Fingering scores. This section required students to look at notes vertically through a series of

arpeggios and arrive at a fingering, something that pianists are frequently required to do.

Item difficulty, item discrimination, and point biserial correlations were calculated for the

Written Test. Results are presented in Table 4-4. The Interval Identification section had the

highest average difficulty level (.80) of any section of the Written Test. Items on the Bass,

Treble, and Tenor Clef Note Identification section were found to be the least difficult. Item 23









Solos from the selective music lists of the National Interscholastic Music Activities

Commission of the MENC were content analyzed, and 50 performance skills were identified

coinciding with range, slide technique and articulation. Each skill was measured by four

excerpts and administered to 30 junior high school trombonists. These performances were taped

and evaluated by three judges. Results from this preliminary form of the measurement were

analyzed, providing two excerpts per skill area. Equivalent forms of the measure were created,

each using one of the two excerpts selected. This final version was administered to 50 high

school students. Interjudge reliability coefficients were .92 for form A and .91 for form B.

Equivalent forms reliability was found to be .98. Validity coefficients ranged from .77 to 1.0 for

both forms. Zdzinski (1991, p.49) notes that the use of a paired-comparison approach rather than

the use of teacher rankings may have affected validity coefficients.

Kidd concluded that the Scale of Trombone Performance Skills would be useful to

instrumental music educators in their appraisal of the following areas of student progress:

guidance, motivation, improvement of instruction and program, student selection, maintenance of

standards, and research. Kidd recognized that the time requirement (thirty-six minutes for

administration, twenty-one minutes for judging, and nine minutes for scoring) could make this

version of the scale impractical in a public school situation and acknowledged that some

modifications in the administration and scoring procedures could facilitate the extent of the

scale's use (pp. 93-94).

Janet Mills

Mills (1987) conducted an investigation to determine to what extent it was possible to

explain current assessment methods for solo music performances. In a pilot study, she chose six

instrumental music students, aged 15 years or above, who were capable of performing grade-









say that in most instances evaluation is treated in an incidental manner and is not emphasized in

a systematic and rigorous way" (Lehman, pp. 5-6). As the standards movement grows, fueled by

greater interest in achievement testing in the arts, it is likely that this attitude will change.

Lehman describes how he sees the emerging role of music assessment:

I believe that the standards movement has set the stage for an assessment movement, and
I believe that assessment may become the defining issue in music education for the next
decade. Developing standards and defining clear objectives that flow naturally from
standards make assessment possible where it was often not possible before. But
standards do more than make assessment possible. They make it necessary. Standards
have brought assessment to the center of the stage and have made it a high-priority, high-
visibility issue. Standards and assessment inescapably go hand in hand. We cannot have
standards without assessment (p. 8).

Furthermore, we cannot have assessment without tests that are designed to measure all

kinds of music making, whether it be in bands, orchestras, choirs, or jazz ensembles. Included in

this list should be assessment of individual performance. New ways of more objectively

determining achievement in individual performance are greatly needed.

The need for assessment measures capable of assessing the multiple intelligences present

in the arts has been articulated:

Although some aspects of learning in the arts can be measured adequately by paper-and-
pencil techniques or demonstrations, many skills and abilities can be properly assessed
only by using subtle, complex, and nuanced methods and criteria that require a
sophisticated understanding. Assessment measures should incorporate these subtleties,
while at the same time making use of a broad range of performance tasks (Reimer, p. 15).

When Reimer observes that assessment in the arts is a complex task with subtle shades of

meaning, he is alluding to the ill-structured quality of many of the subject content domains in

music. Spiro, Vispoel, Schmitz, Smarapungavan, and Boeger (1987) define ill-structured

domains as content areas where "there are no rules or principles of sufficient generality to cover

most of the cases, nor defining characteristics for determining the actions appropriate for a given

case" (p. 184, as quoted in Brophy, p. 7). Criteria for judgment in performance, therefore, must be














Part 2: Basic Bowing Strokes


Slurred Legato

Execution: Smoothly connected; groups of notes phrased as smoothly as possible.

[Excerpt from Suite No. 1: Allemande, by J. S. Bach]

ALLEMANDE, ♩ = 54
[Musical notation not reproduced. Bärenreiter Music Corporation]

[Excerpt from Sonata in A Major, by C. Franck, 1st movement]

♩. = 60
[Musical notation not reproduced. International Music Company]









Table K-3. (concluded)

Student Portato Staccato Spiccato Sautille Pizzicato Total
Score

1 10 8 8 10 8 152
2 8 10 8 4 10 136
3 8 8 10 10 8 156
4 8 8 6 10 10 144
5 8 8 8 4 4 134
6 10 10 8 8 10 146
7 10 10 10 8 8 152
8 10 10 8 6 4 128
9 4 6 8 10 6 116
10 10 10 8 10 8 152
11 6 4 4 4 6 114
12 10 8 8 8 6 140
13 10 6 2 8 6 120
14 10 8 2 4 4 96
15 10 8 8 10 6 116
16 4 6 2 2 2 76
17 10 8 6 8 4 102
18 4 10 6 6 2 100
19 8 10 8 10 10 142
20 10 10 3 8 8 134
21 6 6 2 4 2 86
22 0 6 2 2 4 82
23 8 8 4 0 4 76
24 4 8 8 8 4 104
25 0 6 6 10 4 92
26 10 6 4 10 4 124
27 10 8 10 10 8 152
28 8 8 8 4 6 140
29 10 8 6 10 10 148
30 6 8 8 8 6 140
M 7.67 7.93 6.3 7.13 6.07 123.33
SD 2.97 1.62 2.61 3.00 2.55 25.18









terminology. The Interval Identification and Single Position Fingering sections of the pilot test

were extended to provide greater accuracy in measurement of these skills.

Forty-four percent of respondents agreed that the excerpts chosen for the Playing

Examination were a valid way of determining a student's competence in left hand and bowing

technique. Several teachers suggested the addition of specific excerpts to reveal other aspects of

a student's technique such as pizzicato, and passages with greater variety of double stops

(simultaneously playing on two strings). These suggestions were implemented in the present

study. Part two of the Playing Examination (Basic Bowing Strokes) was expanded to include

Accented Détaché, Flying Spiccato, and Pizzicato.

Reaction to the choice of excerpts used in the Playing Examination included the

suggestion that a better assessment of a student's abilities would be to arrange the material in

progressive order from easiest to hardest and then see at what point the student began to have

difficulty. Ordering and expanding the range of difficulty of the excerpts would provide useful

information about the student's playing level so that repertoire of an appropriate difficulty-level

could be assigned. The present study applied these recommendations by finding additional

excerpts and making them sequentially more demanding. An effort was made to find excerpts in

each category that could be played by undergraduate cellists.

Seventy-eight percent of the teachers responded positively to the Student Self-

Assessment Profile. Comments included, "I really like the Student Self-Assessment page. I

think that it is not just valuable to the teacher but important that the students examine their own

situations as well." One teacher remarked, "It seems the profile would be a useful tool to gauge

the goals and general level of a new student." A teacher proposed having some more open-ended

questions as well, noting that, "There is music beyond solo, chamber and orchestral." As a









APPENDIX A
PILOT STUDY

A pilot study was carried out (Mutschlecner, 2004) which provided indications of ways to

improve an initial form of the Diagnostic Test of Cello Technique. Five undergraduate music

majors studying at a school of music located in the southeastern region of the United States

volunteered to participate in the pilot study. Four out of the five students were cello performance

majors. One was a music education major. These students were met with individually at the

school of music in a studio space reserved for this use.

The students were first given the Self-Assessment Profile to fill out. Following

this, students were given the Written Examination, which took between ten and fifteen minutes

for them to complete. The Written Examination used in the pilot study was shorter than the one

developed for the present study. It included: a fingerboard chart, horizontal and linear (on one

string) interval identification, note identification in three clefs, and single-position fingering

exercises.

In the pilot study students were not given the Playing Examination ahead of time but

were required, essentially, to sight-read the excerpts. However, students were advised that this

was not a sight-reading test per se, but rather a test to assess their ability to demonstrate specific

technical skills and bowings. The students were encouraged to take some time to visually

familiarize themselves with the excerpts, and were told they could repeat an excerpt if they felt

that they could play it better a second time, an option only chosen twice. The students took

between thirty and forty-five minutes to complete the playing portion of the test. The pilot

study's version of the Playing Examination was shorter than that of the present study, measuring fewer

categories of left hand and bowing technique and not using as many excerpts for each technique.















Slurred Staccato

Execution: Two or more staccato notes to a bow. Press into the string, release the
pressure after drawing the bow and quickly draw the next note. The bow stops after each
stroke.

[Excerpt from Sonata in E Minor by B. Marcello, 4th movement]

Allegretto (♩ = 126)
[Music notation not reproduced]

© Alfred Publishing




[Excerpt from Sonata in E Minor, Op. 38, No. 1 by J. Brahms, 2nd movement]

Allegretto quasi Minuetto (♩ = 134)
[Music notation not reproduced]

International Music Company








Interjudge Reliability

Two adjudicators were recruited to determine interjudge reliability of the Playing

Test. Both judges were professional cellists who teach at the college-level. To decrease

selection bias as a threat to external validity, the adjudicators were chosen from two different

geographical regions and teaching institutions. An introductory DVD was provided, explaining

how to use the Playing Test evaluation form in assessing student performances.

Each judge viewed and listened to DVDs of five separate student performances of the

Playing Test, and rated the performances using the Playing Test evaluation form (Appendix H).

Judges were asked to return the results by a specified date, using a self-addressed stamped

envelope provided. The combined judges' evaluations of ten individual students were correlated

to the primary investigator's evaluation results for these same students.
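The correlation procedure described here can be sketched briefly. The ratings below are invented for illustration (the study's actual data are not reproduced); only the Pearson product-moment formula itself is standard.

```python
# Illustrative sketch: Pearson's r between one judge's ratings and the
# researcher's ratings of the same performances. Data are hypothetical
# values on the test's 10/8/6/4/2 scale, not the study's actual scores.
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

judge_a    = [10, 8, 8, 6, 10, 4, 8, 6, 10, 8]   # hypothetical judge ratings
researcher = [10, 8, 6, 6, 10, 4, 8, 8, 10, 8]   # hypothetical researcher ratings
print(round(pearson_r(judge_a, researcher), 2))
```

A value near 1.0 would indicate, as in the study, that the judge and the researcher ranked and rated the performances in close agreement.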

Data Analyses

Data analyses included item analysis for both the Written and the Playing Test. The

distribution of total scores was described using means and standard deviations. Item difficulty,

expressed as the proportion of students who answered an item correctly, was determined.

Item discrimination analysis was conducted using the point biserial correlation to reveal the

strength and direction of the relationship between success on a particular item and success on the

total test. Qualitative data from the Observations/Comments portion of the Playing Test were

examined and compared with individual scores.
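The two item statistics named above can be sketched with invented 0/1 response data; the formulas (item difficulty as the proportion correct, and the point-biserial correlation between item success and total score) are the standard ones, not the study's actual computations.

```python
# Hedged sketch of the item-analysis statistics: item difficulty and the
# point-biserial correlation. Response data here are hypothetical.

def item_difficulty(responses):
    """Proportion of students answering the item correctly (1 = correct)."""
    return sum(responses) / len(responses)

def point_biserial(item, totals):
    """Correlation between success on one item (0/1) and the total score:
    r_pb = (M_correct - M_total) / SD_total * sqrt(p / q)."""
    n = len(item)
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    p = item_difficulty(item)
    mean_correct = sum(t for i, t in zip(item, totals) if i) / sum(item)
    return (mean_correct - mean_t) / sd_t * (p / (1 - p)) ** 0.5

item   = [1, 1, 0, 1, 0, 1, 1, 0]           # hypothetical item responses
totals = [88, 92, 60, 75, 55, 90, 70, 64]   # hypothetical total test scores
print(round(item_difficulty(item), 2), round(point_biserial(item, totals), 2))
```

A strongly positive point-biserial value indicates that students who answered the item correctly also tended to score well overall, i.e., the item discriminates well.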

The content of the Student Self-Assessment Profile was evaluated and correlated to the

data from other sections of the test. Relationships were studied between the student's scores on

the Written and Playing Test and: a) year in college, b) major/minor distinction, c) years of study,

d) piano experience, e) extent and content of repertoire, f) degree of interest in performance









student's real ability with a given technique. In some cases, for example the double-stop excerpt

from the Dvorak concerto, other challenges in playing the passage may have adversely affected a

student's ability to demonstrate the technique. However, after administering the test and

receiving positive feedback from students as well as teachers, it is felt that the benefits of using

real music far outweigh the disadvantages. Students liked the fact that they were playing from

standard works for cello and ones that they would quite possibly study someday, if they hadn't

already. This illustrates a weakness of the DTCT if it is used normatively. Unlike the FSS

passages, which would be unfamiliar to all test takers, students approach the DTCT with varying

degrees of familiarity with the excerpts. It would be unfair and ill-advised to use this test as a

means to compare students among themselves or to assign grades. Each student's performance

of the test must be judged solely on the criteria defined in the evaluation form.

One university professor declined to have his students participate in this study because

the bowings and fingering were not always the ones that he taught. Although he was alone in

this objection, it does demonstrate a dilemma that this kind of test design faces: If the test-maker

provides ample fingerings and bowings, there will be students who have learned these passages

differently and will be thrown off. If few or none are provided, it will create much more work

for the average student to play these excerpts. The best compromise may be to seek bowings and

fingerings that are most commonly used, even while instructing students that they are free to

develop their own choices.

Zdzinski and Barnes

The design of this study owes much to the string performance rating scale of Zdzinski

and Barnes (2002). The success they found in using a criteria-specific rating scale was validated

in this research. High interjudge reliability correlations (Judge A r = .92, Judge B r = .95)













Double Stops (continued)

[Excerpt from Concerto in C Major, Hob. VIIb:1 by J. Haydn, 3rd movement]

Allegro molto (♩ = 152-162)
[Music notation not reproduced]

Alfred Publishing




[Excerpt from Concerto in B Minor, Op. 104 by A. Dvorak, 2nd movement]

Quasi cadenza, freely: 90-102
[Music notation not reproduced]

International Music Company








were written, resulting in a final form of 14 exercises that are designed to evenly increase in

difficulty level.

Like the WFPS, the Farnum String Scale uses scoring based on measure-by-measure

performance errors. The performance errors that can be taken into account are as follows:

* Pitch Errors (A tone added or omitted or played on a wrong pitch)
* Time Errors (Any note not given its correct time value)
* Change of Time Errors (A marked increase or decrease in tempo)
* Expression Errors (Failure to observe any expression marks)
* Bowing Errors
* Rests (Ignoring a rest or failure to give a rest its correct value)
* Holds and Pauses (Pauses between notes within the measure are to be counted as errors)
* Repeats (Failure to observe repeat signs)
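The measure-by-measure scoring scheme can be sketched as follows. This is an illustrative reading of the WFPS/FSS approach, under the assumption that a measure containing one or more errors of any category counts as a wrong measure; the error records and exercise length are invented, and the FSS manual's own grading charts remain authoritative.

```python
# Sketch of measure-by-measure scoring in the WFPS/FSS style: a measure with
# one or more errors (of any category) counts as a wrong measure. The error
# categories mirror the list above; the performance data are hypothetical.

ERROR_TYPES = {"pitch", "time", "tempo_change", "expression",
               "bowing", "rest", "hold_pause", "repeat"}

def score_exercise(errors_by_measure, total_measures):
    """Return the number of error-free measures in the exercise."""
    for errs in errors_by_measure.values():
        unknown = set(errs) - ERROR_TYPES
        if unknown:
            raise ValueError(f"unknown error categories: {unknown}")
    flawed = {m for m, errs in errors_by_measure.items() if errs}
    return total_measures - len(flawed)

# A hypothetical 16-measure exercise with errors in three measures:
errors = {3: ["pitch"], 7: ["bowing", "time"], 12: ["rest"]}
print(score_exercise(errors, 16))  # → 13
```

Collapsing every error type into a single per-measure judgment is what gives these tests their objectivity, and also why ambiguous bowing markings (discussed below) threaten validity: a bowing misreading cascades into rhythm errors across several measures.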

The Farnum String Scale manual does not indicate how to use test results, except for the

title page which states: "A Standard Achievement Test for Year to Year Progress Records,

Tryouts, Seating Placement, and Sight Reading" (1969). Grading charts are included as part of

the individual sheets.

Despite the extensive revision process, criticism has been leveled at this test by some,

suggesting that the bowings were not well thought out (Warren, 1980). In examining the

exercises written, the following problems are found: (1) bowings that require excessive retakes; (2)

bowings that are awkward, i.e., non-idiomatic; and (3) bowings that are ambiguous, or not clearly

marked. Clarity in bowing is a concern because bowing errors often lead to other errors,

especially in rhythm. In several of the exercises, arbitrary bowing decisions have to be made

when sight-reading. Since bowing is one of the tested items, students should not be required to

devise bowing solutions that are not clearly marked. Bowing ambiguity represents a flaw in the

test validity.

Boyle and Rodocy observe that, "despite the criticisms that may be leveled against the

WFPS and the FSS, the tests do attain a certain amount of objectivity by providing highly









Instrumental technique is a means to an end, not the end itself. Certainly the virtuosic

pyrotechnics required for some pieces blur this distinction, but by and large most teachers

would be quick to acknowledge that complete absorption with the mechanics of playing is a

recipe for burnout and loss of the joy of music-making. Cello teacher Fritz Magg observed that

'Calisthenics' literally comes from two Greek words: kalos, which means beautiful and stenos,

which means strength (Magg, 1978, p. 62). Accepting the principle that the development of

'strength' is a requisite for expression of 'the beautiful' serves as a rationale for designing a test

to assess technique.

Reimer believes that past and present attempts at assessment have two crucial flaws

(2003). First, they are not tailored to a specific musical activity, making the false assumption

that what is tested for is applicable to any and all musical involvements. Reimer states, "The

task for the evaluation community... is to develop methodologies and mechanisms for identifying

and assessing the particular discrimination and connections required for each of the musical

roles their culture deems important" (p. 232). Just as Gardner (1983) brought to our attention the

need to define distinct kinds of intelligence, Reimer cautions that we should be wary of assuming

direct transfer of musical intelligence from role to role.

The second weakness of music testing according to Reimer is its almost exclusive

concentration on measuring the ability to discriminate, thereby neglecting to examine the

necessary connections among isolated aspects of musical intelligence (2003). The question of

how meanings are created through connections has been largely ignored, he suggests. This may

be partially attributed to heavy dependence on objective measurement in music research.

Qualitative studies may be better suited for this purpose. Reimer notes that many recent studies

in cognitive science may be applicable to musical evaluation.









David Elliott

Elliott (1995) makes a clear distinction between evaluation and assessment. He notes,

"The assessment of student achievement gathers information that can benefit students directly in

the form of constructive feedback". He sees evaluation as "being primarily concerned with

grading, ranking, and other summary procedures for purposes of student promotion and

curriculum evaluation" (p. 264). For Elliott, however, achieving the goals of music education

depends on assessment. He describes the primary function of assessment as providing accurate

feedback to students regarding the quality of their growing musicianship. "Standards and

traditions" are the criteria by which students are measured in determining how well they are

meeting musical challenges. Elliott leaves it to the reader to define what these standards and

traditions are and more specifically what means are used to determine their attainment.

Elliott's concept of assessment is one of supporting and advancing achievement over

time, noting "the quality and development of a learner's musical thinking is something that

emerges gradually" (p. 264). Elliott is concerned with the inadequacy of an assessment which

focuses on the results of a student's individual thinking at a single moment in time. Real

assessment of a student's development occurs when he or she is observed making music

surrounded by "musical peers, goals, and standards that serve to guide and support the student's

thinking" (p. 264).

Regarding evaluation, Elliott is unequivocal: "...there is no justification for using

standardized tests in music" (p. 265). He sees conventional methods of evaluation as

inappropriate in music because they rely on linguistic thinking. Like Gardner, Elliott insists that

an assessment, if it is to be intelligence-fair, must be aimed directly at the student's artistic

thinking-in-action.









result, a line asking for other areas of performance interest was added. The study indicated that

teachers are either using a similar tool in their studios or would consider doing so.

The responses from teachers who participated in the validity study support the premise

that the diagnostic test of cello technique is a legitimate way to gather information about a

student's technical playing ability. The recommendations of these teachers were taken into

account in developing this present test.










Vibrato    The student's vibrato:
(check one only)
□ 10  is full, rich, even, and continuous. It is used consistently throughout the fingerboard.
□ 8   is full and rich, but occasionally interrupted due to fingering/position changes.
□ 6   is mostly utilized, but is irregular in its width or speed and lacks continuity throughout the fingerboard. Excessive tension is apparent in the vibrato.
□ 4   is demonstrated, but in a tense, irregular way. It is not used consistently by all fingers in all positions. Vibrato width/speed may be inappropriate.
□ 2   is demonstrated marginally with a tense, uneven application. Vibrato is inconsistently used and lacks appropriate width/speed.
Observations/Comments:



Intonation    The student's intonation:
(check one only)
□ 10  is accurate throughout on all strings and in all positions.
□ 8   is accurate, demonstrating minimal intonation difficulties, with occasional lack of pitch correction.
□ 6   is mostly accurate, but includes out-of-tune notes resulting from half-step inaccuracies, inaccurate shifting, or incorrect spacing of fingers.
□ 4   exhibits a basic sense of intonation, yet has frequent errors of pitch accuracy and often doesn't find the pitch center.
□ 2   is not accurate. Student plays out of tune the majority of the time.
Observations/Comments:



Part Two: Basic Bowing Strokes

Slurred Legato    The student's legato bow stroke:
(check one only)
□ 10  is smoothly connected with no perceptible interruption between notes.
□ 8   is smooth, but has some breaks within phrases.
□ 6   includes some disconnected notes and detached bowing.
□ 4   shows breaks within phrases and is often not smoothly connected.
□ 2   exhibits little skill of smooth bowing. Bowing has many interruptions between notes.
Observations/Comments:
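The criteria-specific rating scale above can be represented compactly as a data structure, with each technique mapping its five point values (10/8/6/4/2) to a descriptor. The sketch below is illustrative only: descriptors are abbreviated, and the scoring rule (summing checked values) follows the total-score columns shown in Appendix K.

```python
# A sketch of the criteria-specific rating scale as a data structure.
# Descriptors are abbreviated paraphrases of the evaluation form above.
SCALE = {
    "Vibrato": {10: "full, rich, even, continuous", 8: "occasionally interrupted",
                6: "irregular width/speed", 4: "tense, irregular", 2: "marginal, uneven"},
    "Intonation": {10: "accurate throughout", 8: "minimal difficulties",
                   6: "mostly accurate", 4: "frequent errors", 2: "not accurate"},
    "Slurred Legato": {10: "smoothly connected", 8: "smooth, some breaks",
                       6: "some disconnected notes", 4: "often not smooth", 2: "many interruptions"},
}

def total_score(checked):
    """Sum the checked point values; each must be a defined scale level."""
    for technique, value in checked.items():
        if value not in SCALE[technique]:
            raise ValueError(f"{value} is not a level of {technique}")
    return sum(checked.values())

ratings = {"Vibrato": 8, "Intonation": 10, "Slurred Legato": 8}  # hypothetical student
print(total_score(ratings))  # → 26
```

Because every point value is anchored to a written descriptor, two judges checking boxes independently are rating against the same criteria, which is the design property behind the high interjudge reliability reported for the Playing Test.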









As Aristotle noted, "We are what we repeatedly do. Excellence then, is not an act, but a habit"

(Aristotle, trans. 1967).

Goal setting is most effective when it is measurable, as with a student's two-year goal of

memorizing a full concerto. Academic ambitions, such as pursuing graduate studies, are

important to share with one's teacher, and can dictate a student's course of study. Teachers may

occasionally be surprised in reviewing their students' long-term goals: One performance major

stated her goal as a cellist was to play recreationally, not as a career. However, most four-year

and ten-year goals were career-oriented. There is value in having students express these goals

concretely; through this activity, students visualize doing something they are presently not able

to do. Goal setting requires a leap of faith.

Discussion of Research Questions

In this section the original research questions are reexamined in light of the results.

These questions are restated below with discussion following.

To what extent can a test of cello playing measure a student's technique?

The extent to which the Playing Test is able to measure an individual cello student's

technique depends on the way a teacher uses it. If students are strongly encouraged by their

teacher to practice the excerpts and are expected to play from them in the lesson, testing error

resulting from unfamiliarity with the music and sight-reading mistakes can be minimized. The

results can come much closer to a true diagnosis of a student's technical level. The comparison

of teacher-ratings to Playing Test ratings (Table 4-7) revealed a high correlation and tended to

confirm the test's validity. It is possible that, in some cases, ranking differences occurred due to

a teacher's bias based on his or her estimation of a student's potential. As one teacher noted in

discussing a student's rating: "It pains me to make this assessment, as I confirm that (student)




CONSTRUCTION, VALIDATION, AND ADMINISTRATION OF A DIAGNOSTIC TEST OF CELLO TECHNIQUE FOR UNDERGRADUATE CELLISTS

By

TIMOTHY M. MUTSCHLECNER

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007

2007 Timothy M. Mutschlecner

The most perfect technique is that which is not noticed at all. -- Pablo Casals

ACKNOWLEDGMENTS

This work is dedicated to my dear wife Sarah, who has shown unwavering support and encouragement to me in my studies. In every way she made possible the fulfillment of this goal, which would have been unimaginable without her. To my children Audrey, Megan, and Eleanor I owe a debt of gratitude for their patient understanding. My parents, Alice and Paul, through their continued reassurance that I was up to this task, have been much appreciated. Dr. Donald and Cecelia Caton, my parents-in-law, spent many hours editing this manuscript, and I am very grateful for their skill and encouragement. The professional editing expertise of Gail J. Ellyson was invaluable.

The transition from music performance to academic scholarship has not always been easy. Dr. Timothy S. Brophy's expertise in the field of music assessment and his enthusiasm for the subject were truly the inspiration for what grew from a class paper into this dissertation. As chair of my committee he has provided the necessary guidance and direction leading to the completion of this work. I consider myself very fortunate to have worked under Dr. Brophy's mentorship.

As members of my supervisory committee, Dr. Art Jennings, Dr. Charles Hoffer, and Dr. Joel Houston have generously offered their insight in refining this research. I thank them for their service on my committee and for their support. Gratitude is also extended to Dr. Camille Smith and Dr. David Wilson for serving as initial members of my committee.

A study of this magnitude would have been impossible without the commitment from colleagues: Dr. Wesley Baldwin, Dr. Ross Harbough, Dr. Christopher Hutton, Dr. Robert Jesselson, Dr. Kenneth Law, and Dr. Greg Sauer. Their willingness to allow their students to participate in my research made this study possible. The support and insight from these master

teachers was invaluable. Dr. Elizabeth Cantrell and Dr. Christopher Haritatos, as independent judges, spent many hours viewing videotaped recordings of student performances. Their care in this critical aspect of my study was much appreciated.

Dr. Tanya Carey, one of the great living cello pedagogues, provided valuable suggestions for the design of this test. Thanks go as well to the many Suzuki cello teachers who participated in this research. Their willingness to share ideas and provide suggestions on ways to improve my test was heartening.

Finally, thanks go to the 30 students who agreed to participate in this research. The effort they made to prepare and play their best is gratefully acknowledged. Future cellists are in their debt for being pioneers in the field of assessment in string performance.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
DEFINITION OF TERMS
ABSTRACT

CHAPTER

1 INTRODUCTION
   Purpose of Study
   Research Questions
   Delimitations
   Significance of the Study

2 REVIEW OF LITERATURE
   Introduction
   Philosophical Rationales
      Bennett Reimer
      David Elliott
      Comparing and Contrasting the Philosophic Viewpoints of Reimer and Elliott
   Theoretical Discussion
      Assessment in Music: Theories and Definitions
      Constructivism and Process/Product Orientation
      Definitions
   Research
      The Measurement of Solo Instrumental Performance
         John Goodrich Watkins
         Robert Lee Kidd
         Janet Mills
      The Use of Factor Analysis in Performance Measurement
         Harold F. Abeles
         Martin J. Bergee
      The Development of a Criteria-Specific Rating Scale
      The Measurement of String Performance
         Stephen E. Farnum
         Stephen F. Zdzinski and Gail V. Barnes
      Summary: Implications for the Present Study

3 METHODOLOGY
   Setting and Participants
   Data Collection
      The Written and Playing Test
      The Student Self-Assessment Profile
   Rationale for the Assessment Methodology
   Interjudge Reliability
   Data Analysis
   Content Validity

4 RESULTS
   Data Analysis
   Participants
   Part One: The Written Test
      Scoring the Written Test
      Results from the Written Test
      Regression Analysis of Written Test Items
   Part Two: The Playing Test
      Scoring the Playing Test
      Results from the Playing Test
      Comparison of Left Hand Technique and Bowing Stroke Scores
      Comparison of Playing Test Scores and Teacher-Ranking
      Interjudge Reliability of the Playing Test
   Part Three: The Student Self-Assessment Profile
      Repertoire Previously Studied
      How Interested Are You in Each of These Areas of Performance: Solo, Chamber, and Orchestral?
      Other Areas of Performance Interest?
      What Are Your Personal Goals for Study on the Cello?
      What Areas of Cello Technique Do You Feel You Need the Most Work On?
      Summarize Your Goals in Music and What You Need to Accomplish These Goals
   Summary of Results

5 DISCUSSION AND CONCLUSIONS
   Overview of the Study
   Review of the Results
   Observations from the Results of Administering the Diagnostic Test of Cello Technique
      The Written Test
      The Playing Test
      The Student Self-Assessment Profile
   Discussion of Research Questions

      To What Extent Can a Test of Cello Playing Measure a Student's Technique?
      To What Extent Can a Criteria-Specific Rating Scale Provide Indications of Specific Strengths and Weaknesses in a Student's Playing?
      Can a Written Test Demonstrate a Student's Understanding of Fingerboard Geography, and the Ability to Apply Music Theory to the Cello?
   Observations on the Playing Test from Participating Teachers
   Comparative Findings
      The Farnum String Scale
      Zdzinski and Barnes
   Conclusions

APPENDIX

A PILOT STUDY
B VALIDITY STUDY
C VALIDITY STUDY EVALUATION FORM
D INFORMED CONSENT LETTER
E THE WRITTEN TEST
F THE WRITTEN TEST EVALUATION FORM
G THE PLAYING TEST
H THE PLAYING TEST EVALUATION FORM
I REPERTOIRE USED IN THE PLAYING TEST
J THE STUDENT SELF-ASSESSMENT PROFILE
K DESCRIPTIVE STATISTICS FOR RAW DATA

LIST OF REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

4-1 Summary of regression analysis for year in school as a predictor of written, playing, and total test scores (N = 30)
4-2 Years of study, frequency, and test scores
4-3 Summary of regression analysis for piano experience as a predictor of written, playing, and total test scores (N = 30)
4-4 Item difficulty, discrimination, and point biserial correlation for the written test
4-5 Mean scores of playing test items in rank order
4-6 Comparison of teacher-ranking to playing test ranking
4-7 Comparison of researcher's and independent judges' scoring of student performances of the playing test
4-8 Numbers of students expressing interest in solo, chamber, and orchestral performance (N = 29)
4-9 Personal goals for studying the cello
4-10 Student perception of priorities for technical study
4-11 Goals in music and means of accomplishing them
K-1 Raw scores of the written test items, composite means, and standard deviations
K-2 Raw score, percent score, frequency distribution, z score, and percentile rank of written test scores
K-3 Raw scores of the playing test items, composite means, and standard deviations

DEFINITION OF TERMS

Fingerboard geography: the knowledge of pitch location and the understanding of the spatial relationships of pitches to each other

Horizontal intervals: intervals formed across two or more strings

Vertical intervals: intervals formed by the distance between two pitches on a single string

Visualization: the ability to conceptualize the fingerboard and the names and locations of pitches while performing or away from the instrument

Technique: 1) the artistic execution of the skills required for performing a specific aspect of string playing, such as vibrato or staccato bowing; 2) the ability to transfer knowledge and performance skills previously learned to new musical material

Target note: a note within a playing position used to find the correct place on the fingerboard when shifting

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

CONSTRUCTION, VALIDATION, AND ADMINISTRATION OF A DIAGNOSTIC TEST OF CELLO TECHNIQUE FOR UNDERGRADUATE CELLISTS

By

Timothy M. Mutschlecner

August 2007

Chair: Timothy S. Brophy
Major: Music Education

The purpose of this study was to construct, validate, and administer a diagnostic test of cello technique for use with undergraduate cellists. The test consisted of three parts: (1) a written test, which assessed a student's understanding of fingerboard geography, intervals, pitch location, and note reading; (2) a playing test, which measured a student's technique through the use of excerpts from the standard repertoire for cello; and (3) a self-assessment form, through which students could describe their experience, areas of interest, and goals for study. A criteria-specific rating scale with descriptive statements for each technique was designed to be used with the playing test. The written test, playing test, and self-assessment were pilot-tested with five undergraduate students at a university in the southeast. A validation study was conducted to determine to what extent teachers felt this test measured a student's technique. Nine cello teachers on the college and preparatory level were asked to evaluate the test.

The test was administered to 30 undergraduate cellists at universities located in the southeastern region of the United States. Strong interitem consistency was found for the written test (rKR20 = .95). A high internal consistency of items from the playing test was found (α =

.92). Interjudge reliability of the playing test was high, as measured by comparing the independent evaluations of two judges with the researcher's evaluations using Pearson's r (Judge A r = .92; Judge B r = .95). Other conclusions drawn from the study include: (1) piano experience has a significant positive effect on the results of the playing test (R² = .15); (2) the playing test is a good predictor of teacher-rankings of their students in terms of technique; (3) year in school, degree program, or years of playing experience were not significant indicators of students' playing ability as measured by this test.

Participating teachers described this test as a valuable tool for evaluating students and charting their course of study. They found it to be an efficient means to identify a student's strengths and weaknesses in cello technique.

CHAPTER 1
INTRODUCTION

Diagnosing a student's playing is a primary function of every music teacher's daily routine. Boyle and Rodocy (1987) note that "applied music teachers focus instruction on the basis of their diagnostic evaluations of a performer's strengths and weaknesses. In short, diagnostic evaluation is a critical and ever present part of any good music program" (p. 11). Without denying the role and value of traditional means of gathering information subjectively in the teaching studio, educators agree that "evaluative decisions are better when they have a strong information base, that is a base including both subjective and objective information" (Boyle and Rodocy, p. 2). A diagnostic test of cello technique, designed for use at the college level, could supplement existing methods of evaluation and provide a greater degree of objectivity in assessing a student's needs.

The successful teacher has the ability to rapidly determine strengths and weaknesses of a new student's technique and prescribe exercises, pieces, or new ways of thinking about the instrument to correct errors in playing. However, deficiencies of technique or understanding often show up in a student's playing while working on an assigned piece from the standard repertoire. When this occurs, teachers must then backtrack and correct the deficiencies with etudes or exercises, or jettison the work for a simpler piece--a demoralizing experience for the student. Determining the playing level and technical needs of each new student is an immediate need. Within a few weeks of a college student's entry into a studio, the focus of lessons often becomes preparation for a degree recital or jury exam. The opportunity to study technique on a broader scale than what is merely required to prepare an upcoming program can quickly diminish. A diagnostic test, administered to assess technique, could be a valuable tool in this process.


Purpose of the Study

The purpose of this study was to design, validate, and administer a diagnostic test of cello technique for use with undergraduate college-level students.

Research Questions

1. To what extent can a test of cello playing measure a student's technique?
2. To what extent can a criteria-specific rating scale provide indications of specific strengths and weaknesses in a student's playing?
3. Can a written test demonstrate a student's understanding of fingerboard geography and the ability to apply music theory to the cello?

To answer these questions, a diagnostic test of cello technique was administered to thirty college-level students currently studying the cello. The test results were analyzed using a rating scale designed for this study (see Chapter 3). Interjudge reliability of the test was measured by comparing independent evaluations of two judges who viewed video-recordings of five students taking the test.

Delimitations

This study was not concerned with the following:

- Instruments other than the cello
- Creating an assessment instrument for ranking students, determining a letter grade, or determining chair placement in ensembles
- Creating a playing test to be used in auditions
- Determining the subjects' sight-reading ability
- The measurement of musical aptitude
- The measurement of a student's musicality or expressivity

Significance of the Study

A review of literature indicates that this is the first attempt to systematically measure the diverse elements of cello technique. The five items used by Zdzinski and Barnes (2002) in their String Performance Rating Scale--Interpretation/Musical Effect, Articulation/Tone, Intonation,


Rhythm/Tempo, and Vibrato--do not attempt to examine a broad array of technical skills, but rather provide a general assessment of a student's performance. The present study appears to be the first to evaluate specific aspects of cello technique.

The results of this study can inform the teaching of strings, particularly the cello, at the college level. For example, teachers may find it useful to have a diagnostic tool to evaluate the technical level of new students. Results from such a test may support or bring into question conclusions commonly made by teachers based primarily on audition results and/or the student's performance in initial lessons. Similarly, the test could expose areas of deficiency in technique and provide the teacher with indications regarding the etudes, exercises, or solo materials most appropriate for study. An assessment of the student's overall playing level can assist the teacher in choosing repertoire that is neither too easy nor too difficult.

Often, errors in cello playing can be traced to a student's lack of clarity about the location and relationship of pitches on the fingerboard. The written test measures this understanding of so-called fingerboard geography, as well as an awareness of intervals, fingering skill, and the ability to read the three clefs used in cello music. The written test can quickly reveal whether a student is deficient in these areas, and clarification of these areas can bring instant results that no amount of practice can achieve.

The approach and design of this study could be used to create similar diagnostic tests for violin, viola, and bass. Though there are aspects of technique that are unique to each of the instruments in the string family, much of what is explored in this study would be transferable. Future studies could also include a version designed for high school students.


CHAPTER 2
REVIEW OF LITERATURE

Introduction

Literature dealing with assessment of musical performance tends to fall into two categories: summative assessments focus on the value of a finished project; formative assessments focus on data gathered during the process of reaching a goal or outcome (Colwell, 2006). A studio teacher's ongoing process of diagnosis, correction, and reevaluation is an example of formative assessment in music. A student recital or jury exemplifies summative assessment in music performance. The diagnostic test of cello technique designed for this study is a formative assessment, in that it measures a student's performance ability at a certain point on a continuum that leads to mastery.

This literature review is divided into three parts. Part One examines the philosophical foundation for this study. Part Two explores assessment theory and provides the theoretical bases for this research. Part Three reviews research in assessment with particular emphasis on performance.

Part One: Philosophical Rationales

A philosophic rationale is the bedrock upon which any scholarly inquiry is made. Reimer (2003) succinctly describes its importance:

The Why questions--the questions addressed by philosophy--are the starting point for all conceptualizations of education, whether in music, other subjects, or education as a whole. Answers to these questions--questions of value--provide the purposes of education, purposes dependent on what people in a culture regard to be so important that education must focus on them (p. 242).

These questions must be asked not only of a given educational curriculum but also of the means chosen for evaluation of material taught. Simply asking ourselves, "How do we determine what we know?" brings our educational materials and pedagogy into greater focus.


Subjective as well as objective information shapes our systems of evaluation. As Boyle and Radocy (1987) observe, subjective information tends to vary from observer to observer, and its value in informing decision making is limited. Objective information, by definition, is relatively unaffected by personal feelings, opinions, or biases. Musical evaluation should not be limited to gathering only objective data but should include subjective observations as well. Although certain aspects of musical performance can be measured with scientific precision, such as vibrato width or decibel levels, the complex, multi-faceted nature of music makes the reliability of any measure less than perfect. This observation need not discourage music educators, but rather help them recognize the need for stronger objective criteria for evaluation.

A music educator's personal philosophy of assessment is not tangential to his or her work, but an essential base from which to define and direct teaching. Brophy (2000) explains the need for a philosophy of assessment:

A personal assessment philosophy is an essential element in the development of a general teaching philosophy. Exploring one's reasons for being a music teacher should inevitably reveal personal reasons and motivations for believing that assessment is important, including why it is important. The depth of one's commitment to music education as a profession is also a fairly reliable predictor of one's commitment to assessment as an important aspect of the music program (p. 3).

Deciding what is important for students to learn and why it is important determines how one will assess what students know. Attitudes toward assessment directly influence the content and quality of teaching. Inevitably, a teacher's philosophy of assessment will be most influenced by how he or she was taught and evaluated as a student.
This may help explain the range of attitudes noted by Colwell (2006):

Evidence from learning psychology reveals that assessment properly conducted makes a major difference in student learning and when incorrectly used, a corresponding negative effect. The current hype, however, has not produced much action in the United States, Canada, or Great Britain. To many music educators, assessment is so much a part of instruction--especially in achieving goals in performance--that they do not believe more


is needed. Other music educators believe that any assessment is inappropriate as either too quantitative or too mechanical (p. 210).

That some applied music teachers believe they have no need for methods to assess technique beyond their own listening skill is understandable. Most have spent their lives refining evaluative skills: first, of their own playing, and then that of their students. These teachers may feel it insulting to suggest that a test is better than they are at diagnosing a student's strengths and weaknesses. However, these same teachers would not think twice about having a diagnostic test of their car's electrical system if it were acting strangely. If a diagnostic test of cello technique could be shown to give a reasonably accurate and rapid assessment of a student's playing level and particular needs, skeptical teachers might come to appreciate the test's pragmatic value.

Aristotle in his Politics stated what is implied by every music school faculty roster: "It is difficult, if not impossible, for those who do not perform to be good judges of the performance of others" (p. 331). These philosophic roots may help to explain why teachers of applied music are almost always expected to be expert performers. The skills of critical listening required of a teacher must be refined and molded in the furnace of performance; these listening skills are the essential abilities that a music teacher cannot do without. Because music performance involves competence in the cognitive, affective, and psychomotor domains of learning, authentic assessment must extend beyond single-criterion, bi-level tests of the type appropriate for math or spelling. No single test can measure all the factors that go into a performance; at best a single test may evaluate only a few aspects of a student's playing.

Two contemporary philosophical views on the role of evaluation in music are those of Bennett Reimer (1989/2003) and David Elliott (1995).
Though these scholars share many beliefs about the role and value of universal music education, they represent two poles of thought


regarding the best way to achieve a musically fluent society. Their differences are philosophic and concern a definition of the very nature of music.

Bennett Reimer

Reimer makes an important claim when discussing evaluation in music. After raising the question, "By what criteria can those who partake of the work of musicians evaluate that work?" he asserts that the same criteria applied to their work by musicians all over the world are the criteria that can be applied to evaluating the results of their work (pp. 266-267). For example, if certain technical skills are required for a musically satisfying performance, these same skills can and should be criteria for evaluation. Reimer's use of the term "craft" comes close to what musicians mean when they speak of technique:

Craft, the internalization within the body of the ways and means to make the sounds the music calls on to be made, is a foundational criterion for successful musicianship. This is the case whether the musician is a first grader being a musician, a seasoned virtuoso, or anything in between. It is the case whatever the music, of whatever style or type, from whatever culture or time (p. 266).

What is universal is the craft of music making, in all its varieties. However, the expression of that craft is very distinct: "But crucially what counts as craft is particular to the particular music being evaluated" (p. 266). Reimer's argument seems to support the validity of designing assessment measures that are instrument- and even genre-specific.

Bennett Reimer notes: "everything the music educator does in his job is carrying out in practice his beliefs about his subject" (Reimer, 1970, p. 7). It is important that the pedagogical approach a teacher uses reinforces his or her philosophical belief about why we do what we do in music.
If we believe, as Reimer does, that we are fundamentally teachers of aesthetics through the medium of music, then every aspect of our work should support and defend this view rather than detract from it.


Instrumental technique is a means to an end, not the end itself. Certainly the virtuosic pyrotechnics required for some pieces blur this distinction, but by and large most teachers would be quick to acknowledge that complete absorption with the mechanics of playing is a recipe for burn-out and loss of the joy of music making. Cello teacher Fritz Magg observed that "Calisthenics literally comes from two Greek words: kalos, which means beautiful, and stenos, which means strength" (Magg, 1978, p. 62). Accepting the principle that the development of strength is a requisite for expression of the beautiful serves as a rationale for designing a test to assess technique.

Reimer believes that past and present attempts at assessment have two crucial flaws (2003). First, they are not tailored to a specific musical activity, making the false assumption that what is tested for is applicable to any and all musical involvements. Reimer states, "The task for the evaluation community...is to develop methodologies and mechanisms for identifying and assessing the particular discriminations and connections required for each of the musical roles their culture deems important" (p. 232). Just as Gardner (1983) brought to our attention the need to define distinct kinds of intelligence, Reimer cautions that we should be wary of assuming direct transfer of musical intelligences from role to role.

The second weakness of music testing, according to Reimer, is its almost exclusive concentration on measuring the ability to discriminate, thereby neglecting to examine the necessary connections among isolated aspects of musical intelligence (2003). The question of how meanings are created through connections has been largely ignored, he suggests. This may be partially attributed to heavy dependence on objective measurement in music research. Qualitative studies may be better suited for this purpose.
Reimer notes that many recent studies in cognitive science may be applicable to musical evaluation.


David Elliott

Elliott (1995) makes a clear distinction between evaluation and assessment. He notes that the assessment of student achievement gathers information that can benefit students directly in the form of constructive feedback. He sees evaluation as being primarily concerned with grading, ranking, and other summary procedures for purposes of student promotion and curriculum evaluation (p. 264). For Elliott, however, achieving the goals of music education depends on assessment. He describes the primary function of assessment as providing accurate feedback to students regarding the quality of their growing musicianship. Standards and traditions are the criteria by which students are measured in determining how well they are meeting musical challenges. Elliott leaves it to the reader to define what these standards and traditions are, and more specifically what means are used to determine their attainment.

Elliott's concept of assessment is one of supporting and advancing achievement over time, noting that "the quality and development of a learner's musical thinking is something that emerges gradually" (p. 264). Elliott is concerned with the inadequacy of an assessment that focuses on the results of a student's individual thinking at a single moment in time. Real assessment of a student's development occurs when he or she is observed making music surrounded by musical peers, goals, and standards that serve to guide and support the student's thinking (p. 264).

Regarding evaluation, Elliott is unequivocal: "there is no justification for using standardized tests in music" (p. 265). He sees conventional methods of evaluation as inappropriate in music because they rely on linguistic thinking. Like Gardner, Elliott insists that an assessment, if it is to be intelligence-fair, must be aimed directly at the student's artistic thinking-in-action.


To summarize, Elliott sees assessment as a process-oriented approach to teaching, using constructive feedback embedded in the daily acts of student music making. Music is something that people do; music assessment must then occur in the context of music making.

Comparing and Contrasting the Philosophic Viewpoints of Reimer and Elliott

The crux of the difference in the music philosophies of Reimer and Elliott revolves around the role of performance. Elliott sees all aspects of music revolving around the central act of performing. As stated by Elliott, "Fundamentally, music is something that people do" (Elliott, p. 39, italics in original). Reimer notes that processes (music making) produce products (integral musical works) and that "performance is not sufficient for doing all that music education is required to do, contrary to what Elliott insists" (Reimer, p. 51). Reimer sees performance as only one of several ways musical knowledge is acquired, as opposed to being the essential mode of musical learning.

Elliott defines assessment of student achievement as a means of gathering information that can be used for constructive feedback. He also values it as a means to provide useful data to teachers, parents, and the surrounding educational community (p. 264). However, Elliott is uncomfortable with any use of testing that simply focuses on a student's thinking at one moment in time. One can imagine him acknowledging the value of a diagnostic performance test, but only if it were part of a continuum of evaluations. Elliott's insistence on the central role of performance prevents him from recognizing the value of a critique of a musician's abilities at a given moment in time.
Because he is willing to acknowledge musical products (form) separately from the act of creating or regenerating, he asks a more incisive question: By what criteria can those who partake of the work of musicians evaluate that work? (p. 265). Considering the myriad styles, t ypes and uses of music, Reimer


concludes that criteria for judging music must be distinctive to each form of music and therefore incomparable to one another (p. 266). Reimer softens his stance by providing examples of universal criteria: that is, criteria applicable to diverse musical forms. He does insist, however, that they must be applied distinctively in each case:

Assessment of musical intelligence, then, needs to be role-specific. The task for the evaluation community (those whose intelligence centers on issues of evaluation) is to develop methodologies and mechanisms for identifying and assessing the particular discriminations and connections required for each of the musical roles their culture deems important. As evaluation turns from the general to the specific, as I believe it urgently needs to do, we are likely to both significantly increase our understandings about the diversities of musical intelligences and dramatically improve our contribution to helping individuals identify and develop areas of more and less musical capacity (p. 232).

Reimer accepts the view that there is a general aspect of musical intelligence, but suggests that it takes its reality from its varied roles. This allows him to see evaluation in music as a legitimate aspect of musicianship, part of the doing of music that Elliott insists on. His philosophic position supports creating new measures of musical performance, especially as they bring unique musical intelligences to light and aid in making connections across diverse forms of music making.

Part Two: Theoretical Discussion

Assessment in Music: Theories and Definitions

Every era has a movement or event that seems to represent the dynamic exchange between the arts and the society of that time. Creation of the National Standards for Arts Education is one such event. The Goals 2000: Educate America Act defined the arts as being part of the core curriculum in the United States in 1994.
That same year witnessed the publication of Dance, Music, Theatre, Visual Arts: What Every Young American Should Know and Be Able to Do in the Arts (MENC, 1994). It is significant that among the nine content standards, number seven


was "Evaluating music and music performances." Bennett Reimer, one of the seven music educators on the task force appointed to write the document, discusses the central role of evaluation in music:

Performing composed music and improvising require constant evaluation, both during the act and retrospectively. Listening to what one is doing as one is doing it, and shaping the sounds according to how one judges their effectiveness (and affectiveness), is the primary doing-responding synthesis occurring within the act of creating performed sounds (Reimer, 2003, p. 265).

Central to success is the ability to assess one's work. This assessment includes all of the content standards, including singing, performing on instruments, improvising, and composing. Evaluation is the core skill that is required for self-reflection in music. When a student is capable of self-evaluation, to some extent teachers have completed their most important task.

Reimer sees the National Standards as the embodiment of an aesthetic ideal, not merely a tool to give the arts more legislative clout:

The aesthetic educational agenda was given tangible and specific formulation in the national content standards, and I suspect that the influence of the standards will continue for a long time, especially since their potential for broadening and deepening the content of instruction in music education has barely begun to be realized (p. 14).

Reimer and the other members of the task force were given an opportunity to integrate a philosophy into the national standards that values music education. With this statement they articulated a philosophy defending the scholastic validity of the arts:

The Standards say that the arts have academic standing. They say there is such a thing as achievement, that knowledge and skills matter, and that mere willing participation is not the same thing as education.
They affirm that discipline and rigor are the road to achievement--if not always on a numerical scale, then by informed critical judgment (MENC, 1994, p. 15).

Such statements are necessary in a culture that perniciously sees the arts as extracurricular activities and not part of the core educational experience of every child.

Reimer has provided a philosophical foundation for assessment in the arts. Others, like Lehman (2000), observe that "Our attention to this topic is very uneven. It is probably fair to


say that in most instances evaluation is treated in an incidental manner and is not emphasized in a systematic and rigorous way" (Lehman, pp. 5-6). As the standards movement grows, fueled by greater interest in achievement testing in the arts, it is likely that this attitude will change. Lehman describes how he sees the emerging role of music assessment:

I believe that the standards movement has set the stage for an assessment movement, and I believe that assessment may become the defining issue in music education for the next decade. Developing standards and defining clear objectives that flow naturally from standards make assessment possible where it was often not possible before. But standards do more than make assessment possible. They make it necessary. Standards have brought assessment to the center of the stage and have made it a high-priority, high-visibility issue. Standards and assessment inescapably go hand in hand. We cannot have standards without assessment (p. 8).

Furthermore, we cannot have assessment without tests that are designed to measure all kinds of music making, whether in bands, orchestras, choirs, or jazz ensembles. Included in this list should be assessment of individual performance. New ways of more objectively determining achievement in individual performance are greatly needed.

The need for assessment measures capable of assessing the multiple intelligences present in the arts has been articulated:

Although some aspects of learning in the arts can be measured adequately by paper-and-pencil techniques or demonstrations, many skills and abilities can be properly assessed only by using subtle, complex, and nuanced methods and criteria that require a sophisticated understanding. Assessment measures should incorporate these subtleties, while at the same time making use of a broad range of performance tasks (Reimer, p. 15).
When Reimer observes that assessment in the arts is a complex task with subtle shades of meaning, he is alluding to the ill-structured quality of many of the subject content domains in music. Spiro, Vispoel, Schmitz, Samarapungavan, and Boerger (1987) define ill-structured domains as content areas where "there are no rules or principles of sufficient generality to cover most of the cases, nor defining characteristics for determining the actions appropriate for a given case" (p. 184, as quoted in Brophy, p. 7). Criteria for judgment in performance, therefore, must be


tailored to the idiosyncrasies of the particular instrument, its role as a solo or ensemble instrument, the age and/or playing level of the student, and the purpose of assessment.

Constructivism and Process/Product Orientation

Brophy defines the constructivist view of knowledge as those situations in which students draw upon previous experience to understand new situations (2000, p. 10). This occurs when teachers assess something specific like cello technique. Students are asked to transfer knowledge and psycho-motor skills from one context (previous playing experience) to another (performing new or unfamiliar excerpts). Constructivist theory coincides with one of the definitions of technique used in this research: the ability to transfer knowledge and performance skills previously learned to new musical material.

Process-orientation tends to be aligned with a constructivist approach. Inquiry into new areas of knowledge and understanding does not necessarily have a predetermined outcome. Learning occurs during the process of exploration. Methods of evaluation, in music and elsewhere, have tended to be product-oriented. The need to objectively quantify what has been learned is an ongoing problem in the arts.

The desire to evaluate student achievement in relation to the attainment of pre-specified objectives led to the creation of criterion-referenced or objective-referenced tests. These tests evaluate achievement in relation to specific criteria rather than through comparing one student to another (Boyle and Radocy, pp. 9-10). These tests, however, have been criticized for measuring verbal intelligence rather than authentic music making (Elliott, pp. 75-76). It is possible, however, for tests to be designed that measure components of both the process (technique) and product (complete musical statement) of making music. Diagnostic tests that evaluate students as they progress through increasing challenges may give the teacher insight regarding the


student's cognitive and psychomotor abilities. Thus, a diagnostic test in music can be designed to evaluate both process and product.

Definitions

To understand the theoretical rationale behind the evaluation of music ability, terminology must be clear. The term test refers to any systematic procedure for observing a person's behavior relevant to a specific task or series of tasks. Measurement is a system designed to quantify the extent to which a person achieves the task being tested. In music, testing usually involves some form of a scoring system or rating scale. Evaluation means making judgments or decisions regarding the level of quality of a music behavior or of some other endeavor (Boyle, 1992).

The ideal evaluation model has a strong objective data component but encompasses subjective but enlightened judgments from experienced music teachers (Boyle, p. 247). Boyle and Radocy claim that evaluative decisions are best made when decision makers "(a) have a strong relevant information base, including both subjective and objective information, (b) consider affective and, where appropriate, aesthetic reactions of (or to) the individual, group, or endeavor being evaluated, and (c) be made with the primary goal of improving the quality of the learner's educational experiences" (1987, p. 8). True evaluation must provide information that enhances the educational experience and does not simply provide data for the purpose of assigning grades, determining who is allowed to play, or deciding what the student's chair placement will be.

A diagnostic test is one which focuses on the present and is used to classify students according to their strengths and weaknesses relative to given skills or knowledge (Boyle and Radocy, p. 10). Such a test can be used to (a) group students for instruction or (b) provide individualized instruction that corrects errors or challenges the learner. The diagnostic test of


cello technique created for this study is designed to serve the latter purpose. It falls into the category of a narrow-content-focus test, which is defined as intensive in nature (Katz, 1973). This type of test is appropriate for judging an individual's strengths and weaknesses. It allows for intra-individual comparisons, such as ability levels of differing skills. Intensive tests provide the basis for remedial instruction, as well as providing indications of the means of improving areas of weakness.

The purpose of a test largely determines what type of test needs to be chosen or constructed for assessment purposes. If a test's primary purpose is to discriminate among individuals, then the test is norm-referenced (Boyle and Radocy, p. 75). An individual performance is judged in comparison to the performances of his or her peers. This type of test is appropriate for making comparisons among individuals, groups, or institutions.

Criterion-referenced tests describe student achievement in terms of what a student can do and may be evaluated against a criterion or absolute standard of performance (Boyle, p. 253). Such a test is ideally suited to individual performance; the challenge for this test is how to establish the criteria to be used as a standard. If a performance evaluation uses excerpts accurately revealing a student's ability in demonstrating specific tasks, then that test has good content validity: the test materials coincide with the skills being tested.

The focus of performance assessment may be global, i.e., a judgment of its totality, or specific, i.e., a judgment of only particular aspects of performance. A diagnostic test would be expected to use criteria that reveal specific aspects of performance, although the evaluation could still include global statements about overall playing ability. The use of global and specific approaches is explored in the review of literature at the end of this chapter.


Part Three: Research

The field of testing in string instrument performance is remarkably uncultivated. However, there is a growing body of literature dealing with performance assessment in general, and this writing has many implications for the problem addressed in this study. Examination of this literature will begin with a survey of research in solo instrumental performance, noting the specific aspects of performance measured and the approaches used. An exploration of the use of factor analysis as a means of achieving high reliability and criterion-related validity will follow. This section will close with a review of the research in measurement of string performance.

The Measurement of Solo Instrumental Music Performance

John Goodrich Watkins

The earliest known research in the area of solo instrumental performance was carried out by Watkins (1942) for his doctoral dissertation at Teachers College, Columbia University. Watkins constructed an objectively scored cornet rating scale. For this he composed 68 melodic exercises based on selected cornet methods. Four equivalent forms of the test were designed, each containing sixteen melodies of increasing difficulty. The measure was established as the scoring unit, and a measure was considered to be played incorrectly if any errors of pitch, time, change of tempo, expression, slur, rests, holds and pauses, or repeats occurred. After administering the four preliminary test forms to 105 students, he used item analysis to construct two final forms of the test. Equivalent-forms and test-retest reliability coefficients were high (above .90).

Following this research, Watkins developed the Watkins-Farnum Performance Scale (WFPS) (1954) for wind instruments and snare drum. This scale, along with the subsequently constructed Farnum String Scale (Farnum, 1969), constitutes the only readily available performance measure. As with the Watkins cornet study, this test, administered individually,


requires the performance of a series of passages of increasing difficulty. The student plays with the aid of a metronome, continuing through the exercises until he or she scores zero in two consecutive exercises. Again, the scoring unit is the measure, and the examiner is given a detailed explanation of what constitutes an error. Two equivalent forms were constructed and 153 instrumentalists were tested. Correlations between Form A and Form B of the test have ranged from .84 to .94. Criterion-related validity based on rank-order correlations ranged from .68 for drum to .94 for cornet and trumpet.

Concerns have been raised about how well-suited the examples are for particular instruments (Boyle and Radocy, 1987). Some dynamic markings appear artificial, and no helpful fingerings are provided for technical passages. There is no attempt to measure tone quality, intonation, or musical interpretation. The latter is an inherently subjective judgment but nevertheless a critical part of an assessment of musical performance. As a result, the test's content validity has been questioned (Zdzinski and Barnes, 2002).

The WFPS contains highly specific directions for scoring aspects of playing that teachers can all agree upon. As a result, it continues to be used by default, as no other measure provides a similar level of objectivity. A number of investigators have used the WFPS as a primary measurement tool for their research. Boyle (1970), in an experimental study with junior high wind players, demonstrated that students who practiced reading rhythms by clapping and tapping the beat showed significantly greater improvement as measured by the WFPS.
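The scoring logic described above (all-or-nothing credit per measure, with administration stopped after two consecutive zero-score exercises) can be sketched in a few lines. The data layout and error counts below are illustrative assumptions for the sketch, not the published WFPS scoring sheet.

```python
# Sketch of the measure-as-scoring-unit procedure: a measure earns credit
# only if it contains no error of any kind, and testing stops once a student
# scores zero on two consecutive exercises.

def score_exercise(measure_errors):
    """measure_errors: one error count per measure; error-free measures earn 1 point."""
    return sum(1 for errors in measure_errors if errors == 0)

def administer(exercises):
    """exercises: list of exercises, each a list of per-measure error counts."""
    total, consecutive_zeros = 0, 0
    for exercise in exercises:
        score = score_exercise(exercise)
        total += score
        consecutive_zeros = consecutive_zeros + 1 if score == 0 else 0
        if consecutive_zeros == 2:  # stop rule: two failed exercises in a row
            break
    return total

# A student plays two clean measures, then fails two exercises; the final
# (hypothetical) exercise is never reached.
result = administer([[0, 0], [1, 1], [2], [0]])
```

The stop rule means later, harder exercises simply contribute nothing once the student's ceiling is reached, which is what allows the total to serve as a graded difficulty score.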
More recently, Gromko (2004) investigated relationships among music sight reading as measured by the WFPS and tonal and rhythmic audiation (AMMA, Gordon, 1989), visual field articulation (Schematizing Test, Holzman, 1954), spatial orientation and visualization (Kit of Factor-Referenced Cognitive Tests, Ekstrom et al., 1976), and academic achievement in math concepts


and reading comprehension (Iowa Tests of Educational Development, Hoover, Dunbar, Frisbie, Oberley, Bray, Naylor, Lewis, Ordman, and Qualls, 2003). Using a regression analysis, Gromko determined the smallest combinations of variables in music sight reading ability, as measured by the WFPS. The results were consistent with earlier research, suggesting that music reading draws on a variety of cognitive skills, including visual perception of patterns rather than individual notes.

The WFPS has its greatest validity as a test for sight reading. Sight reading is a composite of a variety of skills, some highly specialized. Using only this test to rank students on musicianship, technique, or aptitude would be inappropriate, however. This test design reveals a certain degree of artificiality; the use of the measure as a scoring unit and the choice of ignoring pauses between measures are somewhat contrived. Nevertheless, Watkins and Farnum succeeded in developing the most reliable and objective performance testing instrument of their day.

Robert Lee Kidd

Kidd (1975) conducted research for his dissertation concerning the construction and validation of a scale of trombone performance skills at the elementary and junior high school levels. His study exemplifies a trend toward more instrument-specific research. Kidd focused on the following questions:

What performance skills are necessary to perform selected and graded solo trombone literature of Grades I and II?

What excerpts of this body of literature provide good examples of these trombone performance skills?

To what extent is the scale a valid instrument for measuring the performance skills of solo trombonists at the elementary and junior high school level?

To what extent is the scale a reliable instrument?


Solos from the selective music lists of the National Interscholastic Music Activities Commission of the MENC were content analyzed, and 50 performance skills were identified coinciding with range, slide technique, and articulation. Each skill was measured by four excerpts and administered to 30 junior high school trombonists. These performances were taped and evaluated by three judges. Results from this preliminary form of the measurement were analyzed, providing two excerpts per skill area. Equivalent forms of the measure were created, each using one of the two excerpts selected. This final version was administered to 50 high school students. Interjudge reliability coefficients were .92 for Form A and .91 for Form B. Equivalent forms reliability was found to be .98. Validity coefficients ranged from .77 to 1.0 for both forms. Zdzinski (1991, p. 49) notes that the use of a paired-comparison approach rather than teacher rankings may have affected validity coefficients.

Kidd concluded that the Scale of Trombone Performance Skills would be useful to instrumental music educators in their appraisal of the following areas of student progress: guidance, motivation, improvement of instruction and program, student selection, maintenance of standards, and research. Kidd recognized that the time requirement (thirty-six minutes for administration, twenty-one minutes for judging, and nine minutes for scoring) could make this version of the scale impractical in a public school situation and acknowledged that some modifications in the administration and scoring procedures could facilitate the extent of the scale's use (pp. 93-94).

Janet Mills

Mills (1987) conducted an investigation to determine to what extent it was possible to explain current assessment methods for solo music performances. In a pilot study, she chose six instrumental music students, aged 15 years or above, who were capable of performing grade-


eight music from a British graded music list. Videotapes were made of their performances and these were scored by 11 judges. Judges were asked to write a comment about each performance and give it a mark out of 30 based on the scale of the Associated Board of the Royal Schools of Music. Two adjudicating groups were formed, consisting of: 1) music teachers and music specialist students, and 2) nonspecialists with experience of musical performance. After the judging occurred, judges were interviewed about the evaluative criteria. From these interviews, the following 12 statements or constructs were generated:

The performer was Nervous/Confident
The performer Did not enjoy/Did enjoy playing
The performer Hardly knew/Was familiar with the piece
The performer Did not make sense/Made sense of the piece as a whole
The performer's use of dynamics was Inappropriate/Appropriate
The performer's use of tempi was Inappropriate/Appropriate
The performer's use of phrasing was Inappropriate/Appropriate
The performer's technical problems were Distracting/Hardly noticeable
The performance was Hesitant/Fluent
The performance was Insensitive/Sensitive
The performance was Muddy/Clean
I found this performance Dull/Interesting

In the main part of her study, phase two, Mills taped ten performances, again dividing her 29 judges into the groupings previously mentioned. Judging was done using both the original 30-point overall rating (with comments) and the newly created criteria. Inter-item correlations and correlations among marks on the 30-point scale were all positive. Correlations between overall marks and individual items were all negative. Because of the small sample size, no data on significance could be provided. Nevertheless, this study demonstrates a well-designed method for examining criterion-related validity of newly created evaluative statements against an existing performance measurement.
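Most of the reliability and validity coefficients reported in these studies (equivalent forms, test-retest, interjudge, inter-item) are Pearson product-moment correlations between two sets of scores for the same performances. A minimal sketch of that computation follows; the scores are invented for illustration.

```python
import statistics

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical scores for five students on two equivalent test forms:
# a coefficient near 1.0 indicates the forms rank students almost identically.
form_a = [88, 75, 92, 60, 81]
form_b = [85, 78, 95, 58, 80]
r = pearson_r(form_a, form_b)
```

The same function applies whether the paired scores come from two forms, two testing occasions, or two judges, which is why a single statistic recurs across all of these reliability designs.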


The Use of Factor Analysis in Performance Measurement

The tests discussed so far, and others like them, have a fundamental problem with reliability; the measures employed were typically subjective judgments based on uneven and unspecified observations. It became increasingly clear to researchers that greater attention needed to be focused on systematically objectifying the methods used in musical evaluation. The use of rating scales to replace or substantiate judges' general impressions is an approach that has been explored by several researchers. Factor analysis of descriptive statements generated for assessment became an important technique for improving content validity and interjudge reliability.

Factor analysis comprises a number of techniques that can be used to study the underlying relationships between large numbers of variables. Common factor analysis reveals the factors that are based on the common or shared variance of the variables (Asmus and Radocy, 2006). All methods of factor analysis seek to define a smaller set of derived variables from a larger collection of data. When applied to performance evaluation, factor analysis can help to determine systematically common evaluative criteria. Potential benefits include increased content validity and greater interjudge reliability. The groundbreaking work of Abeles in the use of factor analysis to develop a highly reliable and valid performance scale for clarinet led other researchers to use factor analysis in designing their scales. The following studies are examples of the application of factor analysis to performance measurement.

Harold F. Abeles

Abeles' (1973) research in the development and validation of a clarinet performance adjudication scale grew from a desire to replace a judge's general impressions with more systematic procedures. He turned to rating scales because they would allow adjudicators to base


their decisions on a common set of evaluative dimensions rather than their own subjective criticisms. In the first phase of the study, 94 statements were generated through content analyses of essays describing clarinet performance. These statements were also formulated through a list of adjectives, gathered from several studies, which described music performance. Statements were paired with seven a priori categories: tone, intonation, interpretation, technique, rhythm, tempo, and general effect. The statements were then transformed into items phrased both positively and negatively; items that could be used by instrumental music teachers to rate actual clarinet performances. Examples from this item pool are: 1. The attacks and releases were clean. 2. The clarinetist played with a natural tone. 3. The clarinetist played flat in the low register. The items were randomly ordered and paired with a five-point Likert scale, ranging from highly agree to highly disagree.

Factor analysis was performed on the evaluation of 100 clarinet performances using this scale. Six factors were identified (interpretation, intonation, rhythm continuity, tempo, articulation, and tone), with five descriptive statements to be judged for each factor. The final form of the Clarinet Performance Rating Scale (CPRS) was comprised of items chosen on the basis of having high factor loadings on the factor they were selected to measure and low factor loadings on other factors. The thirty statements chosen were grouped by factors and paired with a five-point Likert scale. Ten taped performances were randomly selected and rated using the CPRS by graduate instrumental music education students. For the purpose of determining interjudge reliability, judges were divided into groups of 9, 11, and 12 judges. Item ratings from these judges were again factor analyzed to determine structure stability.
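The dimension-reducing idea behind this procedure can be illustrated with a toy computation. A full common factor analysis with rotation is beyond a short sketch (the studies cited used dedicated statistical software), so this example extracts only the dominant axis of a small, invented inter-item correlation matrix by power iteration: items that share variance load together, and items that do not stand apart.

```python
# Toy illustration of factor extraction: power iteration finds the
# dominant principal axis of a correlation matrix. This simplifies common
# factor analysis, but shows how correlated items collapse onto one dimension.

def first_component(corr, iterations=200):
    n = len(corr)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v  # loading of each item on the dominant dimension

# Invented correlations for three rating items: items 1 and 2 are nearly
# redundant (r = .90), item 3 is largely independent of both.
corr = [[1.0, 0.9, 0.1],
        [0.9, 1.0, 0.1],
        [0.1, 0.1, 1.0]]
loadings = first_component(corr)
```

In scale construction, an item like item 3 here, with a low loading on the factor the other items define, would be retained only if it anchored a factor of its own, which is exactly the selection logic Abeles describes.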


Abeles found that the six-factor structure produced from the factor analysis was essentially the same as the a priori theoretical structure. This suggested good construct validity. He concluded that this structure would be appropriate for classifying music performance in general, as none of the factors seemed to reflect idiosyncratic clarinet characteristics. On the other hand, Zdzinski (2002) found that the factors identified to assess stringed instrument, wind instrument, and vocal performance are distinct and related to unique technical challenges posed by each performance area.

The interjudge reliability estimates for the CPRS were consistently high (.90). Individual factor reliabilities ranged from .58 to .98, with all factors but tone and intonation above .70. Criterion-related validity coefficients, based on correlations between CPRS total scores and judges' ratings, were .993 for group one, .985 for group two, and .978 for group three. Predictive validity (<.80) was demonstrated between the CPRS and global performance ratings.

Martin J. Bergee

The development of a rating scale for tuba and euphonium (ETPRS) was the focus of a doctoral dissertation by Bergee (1987). Using methods similar to Abeles', Bergee paired descriptive statements from the literature, adjudication sheets, and essays with a Likert scale to evaluate tuba and euphonium performances. Judges' initial responses led to identification of five factors. A 30-item scale was then constructed based on high factor loadings. Three sets of ten performances were evaluated by three panels of judges (N = 10) using the rating scale. These results were again factor analyzed, resulting in a four-factor structure measuring the items: interpretation/musical effect, tone quality/intonation, technique, and rhythm/tempo.

Interestingly, factor analysis produced slightly different results than in the Abeles Clarinet Performance Adjudication Scale. Technique was unique to this measure, while articulation was


unique to the Abeles measure. The Abeles measure also isolated tone quality and intonation as independent items. The idiomatic qualities of specific instruments or families of instruments may result in the use of unique factors in performance measurement.

Interjudge reliability for the ETPRS was found to be between .94 and .98, and individual factor reliabilities ranged from .89 to .99. Criterion-related validity was determined by correlating ETPRS scores with global ratings based on magnitude estimation (.50 to .99). ETPRS scores were also correlated with a MENC-constructed wind instrument adjudication ballot, resulting in validity estimates of .82 to .99.

The Development of a Criteria-Specific Rating Scale

T. Clark Saunders & John M. Holahan

Saunders and Holahan (1997) investigated the suitability of criteria-specific rating scales in the selection of high school students for participation in an honors ensemble. Criteria-specific rating scales differ from traditionally used measurement tools in that they include written descriptors of specific levels of performance capability. Judges are asked to indicate which of several written criteria most closely describes the perceived level of performance ability. They are not required to express their like or dislike of a performance or decide if the performance meets an indeterminate standard.

In this study, criteria-specific rating scales were used by 36 judges in evaluating all 926 students seeking selection to the Connecticut All-State Band. These students were between grades 9-12 and enrolled in public and private high schools throughout the state of Connecticut. Only students who performed with woodwind and brass instruments were examined in this study, because the judges were able to use the same evaluation form. The 36 adult judges recruited in this study were comprised of elementary, secondary, and college-level instrumental


music teachers from Connecticut. All had a minimum of a bachelor's degree in music education and teacher certification.

Three aspects of student performances were examined: solo evaluation, scales, and sight reading. The following specific dimensions of instrumental performance were assessed:

Solo Evaluation: Tone, Intonation, Technique/Articulation, Melodic Accuracy, Rhythmic Accuracy, Tempo, and Interpretation

Scales: Technique, Note Accuracy, and Musicianship

Sight-Reading: Tone, Note Accuracy, Rhythmic, Technique/Articulation, and Interpretation

For each performance dimension, a five-point criteria-specific rating scale was constructed using either continuous (sequentially more demanding performance criteria) or additive (nonsequential performance criteria) scales. Each of the criteria was chosen to describe a specific level of music skill, content, and technical achievement. The Woodwind/Brass Solo evaluation was comprised of 11 continuous rating scales and four additive rating scales. The overall level of performance achievement for each student was derived from the sum of the scores for each of the performance dimensions.

The observed means and standard deviations indicated that judges found substantial variation in the performances in each dimension and for each instrument. Despite the relative homogeneity of the student sample, judges demonstrated a high level of variability. Students were provided specific information about levels of performance strengths and weaknesses. The median alpha reliability among the 16 instruments was .915, suggesting that there was a sufficient level of internal consistency among judges. The correlations between each performance dimension and the total score ranged from .54 to .75, with a median correlation of .73. These correlations suggest that each scale dimension contributed substantial reliable variance to the total score. Saunders and Holahan concluded that the pattern of correlations provided indirect


evidence of the validity of the criteria-specific rating scales for diagnosing the strengths and weaknesses of individual performances. The researchers noted that because three kinds of performances (prepared piece, scales, and sight-reading) were measured, factor analysis would provide insight into the interdependence of performance dimensions across these types of playing. Factor analysis would indicate the constructs that guide adjudicators in the evaluation process as well.

Saunders and Holahan's findings have implications for the present study. Their data provide indirect evidence that criteria-specific rating scales have useful diagnostic validity. Through such scales, students are given a diagnostic description of detailed aspects of their performance capability, something that Likert-type rating scales and traditional rating forms cannot provide. Such scales help adjudicators listen for specific aspects of a performance rather than having them make a value judgment about the overall merits of a performance.

The Measurement of String Performance

Stephen E. Farnum

Because of the success obtained and reported with the Watkins-Farnum Performance Scale and its practical value as a sight-reading test for use in determining seating placement and periodic measurement, it was suggested that a similar scale be developed for string instruments (Warren, 1980). As a result, the Farnum String Scale: A Performance Scale for All String Instruments (1969) was published. Both tests require the student to play a series of musical examples that increase in difficulty. No reliability or validity information is provided in the Farnum String Scale (FSS). The test manual describes four preliminary studies used to arrive at a sufficient range of item difficulty. Initially Farnum simply attempted to transpose the oboe test from the WFPS, but he found that there was an inadequate spread of difficulty. New exercises


were written, resulting in a final form of 14 exercises that are designed to evenly increase in difficulty level.

Like the WFPS, the Farnum String Scale uses scoring based on measure-by-measure performance errors. The performance errors that can be taken into account are as follows:

Pitch Errors (A tone added or omitted or played on a wrong pitch)
Time Errors (Any note not given its correct time value)
Change of Time Errors (A marked increase or decrease in tempo)
Expression Errors (Failure to observe any expression marks)
Bowing Errors
Rests (Ignoring a rest or failure to give a rest its correct value)
Holds and Pauses (Pauses between notes within the measure are to be counted as errors)
Repeats (Failure to observe repeat signs)

The Farnum String Scale manual does not indicate how to use test results, except for the title page, which states: "A Standard Achievement Test for Year to Year Progress Records, Tryouts, Seating Placement, and Sight Reading" (1969). Grading charts are included as part of the individual sheets.

Despite the extensive revision process, criticism has been leveled at this test by some, suggesting that the bowings were not well thought out (Warren, 1980). In examining the exercises written, the following problems are found: 1. bowings that require excessive retakes, 2. bowings that are awkward, i.e., non-idiomatic, and 3. bowings that are ambiguous, or not clearly marked. Clarity in bowing is a concern because bowing errors often lead to other errors, especially in rhythm. In several of the exercises, arbitrary bowing decisions have to be made when sight-reading. Since bowing is one of the tested items, students should not be required to devise bowing solutions that are not clearly marked. Bowing ambiguity represents a flaw in the test's validity.

Boyle and Radocy observe that, despite the criticisms that may be leveled against the WFPS and the FSS, the tests do attain a certain amount of objectivity by providing highly


specific directions for scoring performance aspects about which most experienced teachers could agree regarding correctness (p. 176). These tests established a precedent for providing explicit detail as to what constitutes an error in performance.

Stephen F. Zdzinski & Gail V. Barnes

Zdzinski and Barnes demonstrated that it was possible to achieve high reliability and criterion-related validity in assessing string instrument performances. In their 2002 study, they initially generated 90 suitable statements gathered from essays, statements, and previously constructed rating scales. These statements were sorted into a priori categories that were determined by previous research. As with the Abeles study, a Likert scale was paired with these items. Fifty judges were used to assess one hundred recorded string performances at the middle school through high school level. Results from the initial item pool were factor-analyzed using a varimax rotation. Five factors to assess string performance were identified: interpretation/musical effect, articulation/tone, intonation, rhythm/tempo, and vibrato. These were found to be somewhat different than those of Abeles (1973) and Bergee (1987) in their scale construction studies of woodwind and brass performance. This is not surprising, considering the unique challenges of string instrument and woodwind instrument technique. String instrument vibrato had items that were idiomatic for the instrument. Likewise, articulation and tone quality are largely controlled by the right (bowing) side in string performance and were loaded onto a single factor, in contrast with wind instrument assessment scales. The authors found that factors identified to assess string instrument, wind instrument, and vocal performance are distinct and related to unique technical challenges specific to the instrument/voice (Zdzinski, p. 253).
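The internal-consistency statistics that recur in these studies, such as the alpha reliabilities Saunders and Holahan report, can be illustrated with a short computation. (Hoyt's analysis-of-variance procedure, also used in this literature, yields the same coefficient as Cronbach's alpha.) The rating data below are invented for the sketch.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-dimension score lists,
    one inner list per rating dimension, aligned by student."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.pvariance(scores) for scores in items)
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Three hypothetical rating dimensions scored for four students
# (rows are dimensions, columns are students).
ratings = [[4, 3, 5, 2],
           [4, 2, 5, 2],
           [3, 3, 4, 2]]
alpha = cronbach_alpha(ratings)
```

Alpha approaches 1.0 as the dimensions rank students consistently; dimensions that disagree inflate the item variances relative to the total variance and pull the coefficient down.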


Twenty-eight items were selected for subscales of the String Performance Rating Scale (SPRS) based on factor loadings. The reliability of the overall SPRS was consistently very high. Reliability varied from .873 to .936 for each judging panel using Hoyt's analysis of variance procedure. In two studies conducted to establish criterion-related validity, zero-order correlations ranged from .605 to .766 between the SPRS and two other rating scales. The researchers concluded that string performance measurement may be improved through the use of more specific criteria, similar to those used in their study (Zdzinski, p. 254). Such tools may aid the educator/researcher by providing highly specific factors to listen and watch for when analyzing student performances.

Summary: Implications for the Present Study

Studies carried out in the measurement of instrumental music performance have increased in reliability, validity, and specificity since the first standardized test for band instruments, the Watkins-Farnum Performance Scale of 1954. Surprisingly, along with the Farnum String Scale, this is still the only readily available published performance measure. One can conjecture that the use of teacher-made tests accounts for this, but the more plausible explanation is music teachers' distrust of any test that would claim to be capable of measuring a subject as complex and multifaceted as music performance.

The use of descriptive statements that were found through factor analysis to have commonly accepted meanings has been a significant development in increasing content validity in performance measurement. As researchers applied the techniques pioneered by Abeles (1973), they discovered that factors identified for one instrument or group of instruments did not necessarily transfer directly to another instrumental medium. Statements about tonal production


on a clarinet may not have the same high factor loadings on a string instrument, where tone production is controlled primarily by bowing technique (Zdzinski, 2002).

Through factor analysis the reliability of the new measures improved. However, with additional research came more questions. In the Abeles (1973) and Zdzinski (2002) studies, only the audio portions of performances were analyzed by judges. The reason these researchers chose not to include visual input is not addressed in their studies, but the fact that they chose to record results using audio only may have contributed to the higher reliability found in these studies. Gillespie (1997) compared ratings of violin and viola vibrato performance in audio-only and audiovisual presentations. Thirty-three inexperienced players and 28 experienced players were videotaped while performing vibrato. A panel of experts rated the videotaped performances and then six months later rated the audio-only portion of the performances on five vibrato factors: width, speed, evenness, pitch stability, and overall sound. While the experienced players' vibrato was rated higher regardless of the mode of presentation, results revealed significantly higher audiovisual ratings for pitch stability, evenness, and overall sound for inexperienced players, and for pitch stability for experienced players. The implication is that visual impressions may cause adjudicators to be less critical of the actual sound produced. Gillespie notes: "The visual stimuli give viewers additional information about a performance that can either be helpful or distracting, causing them to rate the performance differently than if they had simply heard it." He adds, "If the members of the panel see an appropriate motion for producing vibrato, they may rate the vibrato higher, regardless if the pitch drifts slightly" (Gillespie, p. 218).
At the very least, the study points out the need for the strictest possible consistency in the content format given to the judges to assess. If assessment is made from an


audiovisual source or a viewed live performance, the possible effects of visual influence on the ratings need to be considered.

Concerns about content validity were uppermost in mind when choosing the excerpts for the Diagnostic Test of Cello Technique. In the following chapter the development and validation of these materials are discussed, as well as the measurement used to quantify the data from the written and playing portions of the test.


CHAPTER 3
METHODOLOGY

The purpose of this study was to construct, validate, and administer a diagnostic test of cello technique for use with college-level students. This test is criterion-referenced and included both quantitative and qualitative measurements. This study was implemented in the following stages: (a) development of an initial testing instrument, (b) administration of a pilot test, (c) administration of a validity study, (d) administration of the final test, and (e) data analysis procedures for the final test, including an interjudge reliability measurement. This chapter describes the following methodological elements of the study: setting and participants, instrumentation, data collection, data analysis, and validity and reliability procedures.

Setting and Participants

Approval for conducting this study was obtained first from the Institutional Review Board (IRB) of the University of Florida. A copy of the informed consent letter is included in Appendix D. The testing occurred at the respective schools of the participants, using studio or classroom space during times reserved for this study.

College-level students (n = 30) were recruited for this study from three private and three public universities in the southeastern region of the United States. While this demographic does not include all the regions of the United States, the variability is considered adequate for this test, which was not concerned with regional variations, if such variations exist, in cello students. The participants selected were undergraduate cello students, both majoring and minoring in music. This subject pool consisted of music performance majors (n = 16), music minors (n = 1), double majors (n = 3), music therapy majors (n = 2), music education majors (n = 6), and music/pre-med students (n = 2). Using subjects from a diversity of academic backgrounds assumes that


this test has value as a diagnostic tool for students studying music through a wide variety of degree programs, not just those majoring in performance.

A letter of introduction that explained the purpose of the study was mailed to the cello faculty of the six schools. Upon receiving approval from the faculty cello teacher, the letter of consent along with the Playing Test (Appendix G) was provided for each participant. One copy of the consent form was signed and returned by each participating student. Following this, times were arranged for each student to take the Written and Playing Test. Each student received a copy of the Playing Test a minimum of two weeks before the test date. Included with the Playing Test was a cover letter instructing the students to prepare all excerpts to the best of their ability. Attention was directed toward the metronome markings provided for each of the excerpts. Students were instructed to perform these excerpts at the tempos indicated, but not at the expense of pitch and rhythmic accuracy.

Data Collection

The Written and Playing Test

Each participant met individually with the primary investigator for forty-five minutes. The first thirty minutes of testing time was used for the Playing Test. Before beginning to perform the Playing Test, students were asked to check their tuning against the pitch A-440 provided for them. Students were also asked to take a moment to visually review each excerpt prior to performing it. Students were asked to attempt to play all the excerpts, even if some seemed too difficult for them.

The primary investigator listened to and judged the individual student's skill level for each performance. For each aspect of technique assessed, a five-point criteria-specific rating scale was constructed. The Playing Test evaluation form (Appendix H) used both continuous


(sequentially more demanding performance criteria) and additive (nonsequential performance criteria) scales. When a technique was measured using a continuous rating scale, the number next to the written criterion that corresponded to the perceived level of skill was circled. When using the additive rating scale, the primary investigator marked the box beside each of the written criteria that described one aspect of the performance demonstrating mastery of the skill. Both the continuous and the additive rating scales have a score range of 2-10 points, as two points were awarded for each level of achievement or each performance competency. It was theoretically possible for a student to score 0 on an item using an additive scale if their performance matched none of the descriptors. Seven continuous rating scales and ten additive rating scales constituted the Playing Test evaluation form. The overall level of performance achievement for each student was calculated as the sum of the scores for each area of technique.

The Student Self-Assessment Profile

The last fifteen minutes was devoted to the completion of the Written Test (Appendix E) and the Student Self-Assessment Profile (Appendix J). To maintain the highest control in administering the test, the primary investigator remained in the room while the Written Test was taken, verifying that neither a piano nor cello was referred to in completing the test. The Written Test evaluation form is provided in Appendix F.

Rationale for the Assessment Methodology

Saunders and Holahan (1997) have observed that traditional rating instruments used by adjudicators to determine a level of quality and character (e.g., outstanding, good, average, below average, or poor) provide little diagnostic feedback. Such rating systems, including commonly used Likert scales, cause adjudicators to fall back on their own subjective opinions without providing a means to interpret the results of the examination in new ways. Furthermore,

due to their design, these rating scales are incapable of providing much in the way of interpretive response. As Saunders and Holahan observe, knowing "the relative degree to which a judge agrees or disagrees that 'rhythms were accurate,' however, does not provide a specific indication of performance capability. It is an evaluation of a judge's magnitude of agreement in reference to a nonspecific and indeterminate performance standard and not a precise indication of particular performance attainment" (p. 260).

Criteria-specific rating scales are capable of providing greater levels of diagnostic feedback because they contain written descriptors of specific levels of performance capability. A five-point criteria-specific rating scale was developed for this study to allow for greater diagnostic input from judges. Aspects of left-hand and bowing technique were evaluated using both continuous (sequentially more exacting criteria) and additive (nonsequential performance criteria) scales. Both continuous and additive scales require a judge to choose which of the several written criteria most closely describe a student's performance. The additive scale was chosen when a particular technique (such as playing scalar passages) has a number of nonsequential features to be evaluated, such as evenness, good bow distribution, clean string crossings, and smooth connections of positions.

Along with the five-point criteria-specific rating scale, the Playing Test evaluation form (Appendix H) provided judges with an option of writing additional observations or comments about each technique evaluated. While these data are not quantifiable for measurement purposes, recording the judges' immediate reactions to a student's performance in their own words may capture an insight into some aspect of performance that the written criteria overlook. Because the primary purpose of this test is diagnostic, allowing room for commentary is important.
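The scoring mechanics of the two scale types described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names and the sample ratings are invented, while the point values (two points per achievement level or checked criterion, a 2-10 range per item, and a possible 0 on an additive item) follow the description in this chapter.

```python
def score_continuous(level):
    """Continuous scale: the judge circles one of five achievement levels;
    each level is worth two points (range 2-10)."""
    if not 1 <= level <= 5:
        raise ValueError("level must be between 1 and 5")
    return 2 * level

def score_additive(checked):
    """Additive scale: two points per criterion box checked (up to five boxes).
    A performance matching none of the descriptors scores 0."""
    return 2 * sum(bool(c) for c in checked)

# A student's overall achievement is the sum across all 17 items
# (7 continuous items and 10 additive items); ratings below are invented.
continuous_scores = [score_continuous(l) for l in [4, 3, 5, 4, 3, 4, 5]]
additive_scores = [score_additive(c) for c in [[1, 1, 1, 0, 1]] * 10]
total = sum(continuous_scores) + sum(additive_scores)
```

With seven continuous and ten additive items at ten points each, a flawless performance would earn the 170-point maximum reported for the Playing Test.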

Interjudge Reliability

Two adjudicators were recruited to determine interjudge reliability of the Playing Test. Both judges were professional cellists who teach at the college level. To decrease selection bias as a threat to external validity, the adjudicators were chosen from two different geographical regions and teaching institutions. An introductory DVD was provided, explaining how to use the Playing Test evaluation form in assessing student performances. Each judge viewed and listened to DVDs of five separate student performances of the Playing Test, and rated the performances using the Playing Test evaluation form (Appendix H). Judges were asked to return the results by a specified date, using a self-addressed stamped envelope provided. The combined judges' evaluations of ten individual students were correlated with the primary investigator's evaluation results for these same students.

Data Analyses

Data analyses included item analysis for both the Written and the Playing Test. The distribution of total scores was described using means and standard deviations. Item difficulty, expressed as the proportion of students who answered an item correctly, was determined. Item discrimination analysis was conducted using the point-biserial correlation to reveal the strength and direction of the relationship between success on a particular item and success on the total test. Qualitative data from the Observations/Comments portion of the Playing Test were examined and compared with individual scores. The content of the Student Self-Assessment Profile was evaluated and correlated with the data from other sections of the test. Relationships were studied between students' scores on the Written and Playing Test and: a) year in college, b) major/minor distinction, c) years of study, d) piano experience, e) extent and content of repertoire, f) degree of interest in performance

areas, g) personal goals for studying the cello, h) expressed area of technique needing improvement, and i) short-term and long-term goals in music.

Content Validity

The techniques that were assessed in this study are believed to be essential aspects of left-hand and bowing technique for a college-level student. The choice of categories for left-hand and bowing technique was based on the frequency with which these techniques are found in the repertoire for cello, as well as the discussion of them in the following sources: The Ivan Galamian Scale System for Violoncello, arranged and edited by H. J. Jensen; The Four Great Families of Bowings by H. J. Jensen (unpublished paper); Cello Playing of Today by M. Eisenberg; Cello Exercises: A Comprehensive Survey of Essential Cello Technique by F. Magg; and Dictionary of Bowing and Pizzicato Terms by J. Berman, B. Jackson, and K. Sarch.

A validation study was conducted to determine to what extent teachers felt this test measured a student's technique (Mutschlecner, 2005). Cello teachers (N = 9) on the college and college-preparatory level agreed to participate in this validity study by reading all sections of the diagnostic test and then responding to questions in an evaluation form. The results of this study are provided in Appendix B.
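The two item statistics named under Data Analyses above, item difficulty as a proportion correct and the point-biserial correlation between item success and total score, can be computed as in the sketch below. The data are invented for illustration; this is not the study's scoring code.

```python
import math

def item_difficulty(responses):
    """Proportion of students answering the item correctly (1 = correct, 0 = wrong)."""
    return sum(responses) / len(responses)

def point_biserial(item, totals):
    """Point-biserial correlation between a dichotomous item and total test score:
    r_pbs = (M_correct - M_total) / SD_total * sqrt(p / q)."""
    n = len(item)
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    correct = [t for x, t in zip(item, totals) if x == 1]
    p = len(correct) / n
    q = 1 - p
    mean_correct = sum(correct) / len(correct)
    return (mean_correct - mean_t) / sd_t * math.sqrt(p / q)

# Invented example: an item most students answered correctly, and their totals.
item = [1, 1, 1, 0, 0]
totals = [10, 9, 8, 5, 4]
p = item_difficulty(item)            # 0.6
r_pbs = point_biserial(item, totals)
```

A strongly positive r_pbs, as here, indicates that students who answered the item correctly also tended to earn higher total scores.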

CHAPTER 4
RESULTS

This chapter describes the procedures used to analyze the data collected and presents the results of these analyses. Data from the Written Test, the Playing Test, and the Student Self-Assessment were collected from 30 participants in accordance with the procedures outlined in Chapter 3. The dependent variables of this study were the Written and Playing Test scores. Independent variables were (a) year in school, (b) major/minor distinction, (c) years of cello study, and (d) piano experience.

Data Analysis

Descriptive data for the scores were tabulated and disaggregated by independent variable. Data were explored using t-tests, regressions, and correlations. Regressions were used to determine the effect of the independent variables on the obtained test scores. The independent variables of major/minor distinction, year in school, and piano experience are categorical, and dummy codes were used to represent these variables in the regression analyses. Item difficulty, item discrimination, and point-biserial correlations were calculated for the Written Test. Cronbach's alpha (α) was used to estimate the reliability of individual items on the Playing Test. The Spearman rank-order correlation was used as a measure of the Playing Test's validity. Interjudge reliability was calculated using Pearson's r.

Questions on the Written Test were dichotomous, and tests were scored and yielded continuous data. The Playing Test performances were evaluated using the criteria-specific rating scale that was revised following the pilot test (see Appendix A for the Pilot Study report). Two external reliability researchers viewed and evaluated videotapes of 33% (N = 10) of the Playing Tests. These data were then correlated with the primary investigator's scores of these same student performances as a measure of interjudge reliability. The participants' cello teachers rank-ordered their students by level of technical skill based on their assessment of the students' playing technique. These rankings were correlated with those based on the Playing Test results as a measure of validity. The data analysis was designed to explore the following research questions:

1. To what extent can a test of cello playing measure a student's technique?

2. To what extent can a criteria-specific rating scale provide indications of specific strengths and weaknesses in a student's playing?

3. Can a written test demonstrate a student's understanding of fingerboard geography and the ability to apply music theory to the cello?

Participants

Written and Playing Test scores, and student answers to questions in the Student Self-Assessment Profile, were obtained (N = 30). Participants were undergraduate music majors and minors studying cello at three private and three public universities (N = 6) in the southeastern region of the United States.

Part One: The Written Test

Scoring the Written Test

The Evaluation Form used to tabulate the scores for the Written Test is provided in Appendix F. Items on the Written Test were assigned points using the following system:

(1) Fingerboard Geography: 11 points (44 pitch locations to identify, divided by 4).
(2) Interval Identification: 8 points.
(3) Pitch Location and Fingering: 32 points (a single point was assigned for correctly identifying both pitch and fingering).
(4) Single Position Fingering: 32 points.
(5) Bass, Treble, and Tenor Clef Note Identification: 12 points.

The total possible score for the combined sections of the Written Test was 95 points.
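Two internal-consistency estimates appear in this chapter: the Kuder-Richardson formula (KR-20) for the dichotomously scored Written Test items, and Cronbach's alpha for the rating-scale Playing Test items. A minimal sketch of both, using small invented score matrices rather than the study's data:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a score matrix: one row per student, one column per item."""
    k = len(rows[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_var_sum = sum(var([row[j] for row in rows]) for j in range(k))
    total_var = var([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var_sum / total_var)

def kr20(rows):
    """KR-20 for dichotomous (0/1) items: each item's variance reduces to p*(1-p),
    so on 0/1 data KR-20 equals Cronbach's alpha."""
    k = len(rows[0])
    n = len(rows)
    totals = [sum(row) for row in rows]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in rows) / n
        sum_pq += p * (1 - p)
    return k / (k - 1) * (1 - sum_pq / var_t)
```

Both formulas rise toward 1.0 as item scores covary, which is why the narrow, strongly related item scores reported in this chapter yield high coefficients.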

Results from the Written Test

Table K-1 (Appendix K) presents the raw scores of the Written Test items and the composite means and standard deviations. Reliability of the Written Test was obtained using the Kuder-Richardson formula, revealing the internal consistency of test items: rKR20 = .95. This result indicates that despite the narrow range of scores, the Written Test has strong interitem consistency.

Table 4-1 presents the data from a regression analysis for year in school (freshman, sophomore, junior, and senior) and the Written, Playing, and combined Test scores. Freshman classification emerged as a significant predictor (p < .05) for the Playing Test and combined test scores. The R-squared value of .28 indicates that freshman classification accounted for 28% of the variance in the Playing Test scores. For the combined Written and Playing Test scores, the R-squared value of .265 indicates that freshman classification accounted for 27% of the variance. With the exception of these findings, year in school does not appear to bear a relationship to technical level, as measured by the Written and Playing Test.

Exploring the relationship of test scores and students' degree programs was complicated, as there was a mixture of music performance majors, double majors, music education majors, music therapy majors, and music minors. One school did not allow freshmen to declare music performance as a major until their sophomore year, insisting they enter the studios initially as music education majors. If one classifies double majors in the music performance category, then there were 21 music performance majors and nine students in the other category. A regression analysis was conducted with major/minor distinction as a predictor of the written, playing, and total scores. No effect of major/minor distinction was found for the Written Test (R2 = .001). Results were nearly significant for the Playing Test (p = .08) and not significant for the

combined Written and Playing Tests (p = .15). A student's choice to major in cello does not appear to be an indication of his or her technical level according to this test.

The 30 cellists participating in this research had studied the cello between five and sixteen years (Table 4-2). A regression was conducted with years of cello study as a predictor of the scores. For the Written Test (B = .037, SE B = .069, t = .53) and the Playing Test (B = .044, SE B = .024, t = 1.82), years of cello playing was not found to be a significant predictor (p = .60; p = .08). A lack of relationship between years of cello playing and scores may reflect the wide range of students' innate ability and developmental rate. The relatively small sample size also means that outliers may have skewed the results. Efficient use of practice time is an acquired skill; it is possible for students with fewer years of experience to surpass those who, while having played longer, are ineffective in their practice.

Though no data on actual numbers of years of piano experience were collected, exactly one-half of the participants reported having piano experience, and one-half reported having no piano experience (ns = 15). A t-test of the means for Written and Playing Test scores was conducted based on the participants' self-reported piano experience. Both tests were significant. Students reporting piano experience scored significantly higher on the Playing Test (M = 91.93, SD = 3.08), t(30) = 115.55, p = .000, than those without piano experience (M = 78.47, SD = 12.71), t(30) = 23.92, p = .000. Students reporting piano experience also scored significantly higher on the Written Test (M = 129.73, SD = 20.63), t(30) = 24.35, p = .000, than those without piano experience (M = 116.93, SD = 28.28), t(30) = 16.01, p = .000.
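The chapter does not specify which t-test variant was used to compare the piano and no-piano groups; as one possible sketch, Welch's unequal-variance form of the two-sample t statistic can be written as follows (the sample lists are invented):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic, which does not assume equal group variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of group b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Invented example: two small groups with clearly separated means.
t_stat = welch_t([10, 12, 14], [4, 6, 8])
```

With equal group sizes of 15, as in this study, Welch's form and the pooled-variance form give similar results when the group variances are comparable.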
Because significant differences were found in these groups based on reported piano experience, a regression was conducted with piano experience as a predictor of the scores. For the Written Test (B = -2.00, SE B = 4.21, t = -.48), piano experience was not found to be a

significant predictor. In the Playing Test (B = -19.20, SE B = 8.63, t = -2.23), piano experience emerged as a significant predictor (p < .05). The R2 value of .15 indicates that piano experience accounted for 15% of the variance in the Playing Test scores. Results are shown in Table 4-3.

Regression Analysis of Written Test Items

In the Interval Identification section of the Written Test, the mean score for those with piano experience was 7.07 out of 8 possible points, as compared with a mean of 5.73 for those without experience. Through regression analysis, piano experience was shown to be a significant predictor (p = .002) of the Interval Identification scores (B = 1.56, SE B = .41, t = 3.81). The R2 value of .528 indicates that piano experience accounted for 53% of the variance in the Interval Identification scores. This is a highly significant figure. Students with piano experience clearly are better at thinking intervallically on the cello.

For the Pitch Location and Fingering section of the test, the means were 31.13 out of 32 possible points for those with piano experience, compared with 22.26 for those without. Regression analysis revealed that piano experience was nearly significant as a predictor of these scores (p = .061). Piano experience again emerged as a significant predictor (p = .002) of the Single-Position Fingering scores (B = 1.80, SE B = .47, t = 3.83). The R2 value of .53 indicates that piano experience accounted for 53% of the variance in the Single-Position Fingering scores. This section required students to look at notes vertically through a series of arpeggios and arrive at a fingering, something that pianists are frequently required to do.

Item difficulty, item discrimination, and point-biserial correlations were calculated for the Written Test. Results are presented in Table 4-4. The Interval Identification section had the highest average difficulty level (.80) of any section of the Written Test.
Items on the Bass, Treble, and Tenor Clef Note Identification section were found to be the least difficult. Item 23

(rpbs = 0.80) and item 31 (rpbs = 0.82) of the Pitch Location and Fingering section had the two highest correlations to the total test score. The range of difficulty levels (1.0-.80) indicates that the Written Test is not at an appropriate level of difficulty for undergraduate cellists.

Using Pearson's r, a low positive correlation was obtained between student scores on the Written and Playing Test (r2 = .16). This suggests little relationship between scores on these tests, and that the cognitive knowledge required to do well on the Written Test may be distinct from the psychomotor ability needed to demonstrate the techniques found in the Playing Test.

Part Two: The Playing Test

Scoring the Playing Test

A discussion of the criteria-specific rating scale used to score the Playing Test is found in Chapter Three. Ten techniques were evaluated using an additive rating scale, which ranged from 0 to 10 points per item. Seven techniques were evaluated using a continuous rating scale with a range of 2 to 10 points possible. A zero score resulted from none of the criteria being demonstrated for an additive item. The total possible score for the combined sections of the Playing Test was 170.

Results from the Playing Test

Reliability was estimated by using Cronbach's alpha to find the relationship between individual items on the Playing Test. The results (α = .92) indicate high internal consistency of test items; this suggests that the means of assessing each technique are well matched.

Table K-3 (Appendix K) presents the raw scores of the Playing Test items and the composite means and standard deviations. Table 4-5 lists these items from highest to lowest based on their mean scores. These data reveal that students scored highest on the détaché bowing

stroke (M = 8.46) and lowest on pizzicato (M = 6.06). Discussion of the significance of these mean scores is found in Chapter Five.

Comparison of Left Hand Technique and Bowing Stroke Scores

The total mean scores were calculated for the two sections of the Playing Test: Left Hand Technique (M = 7.21) and Bowing Strokes (M = 7.31). Students performed at a very similar level for both sections and performed uniformly; i.e., higher-scoring students did well on both sections and lower-scoring students did less well on both sections.

Comparison of Playing Test Scores and Teacher-Ranking

To determine the predictive validity of the Playing Test, teachers from the six music schools participating in this research were asked to rank their students from lowest to highest in terms of their level of technique. Five of the six teachers responded to this request. These rankings were compared to the rank order based on the Playing Test scores. The results are shown in Table 4-6. Two teachers (Schools A and B) ranked their students in exactly the same order as the Playing Test ranking (r2 = 1.0). Using the Spearman rank-order correlation, the correlations for the other three schools that responded were positive and strong (r2 = 0.65, 0.84, and 0.76, respectively). Results indicate that students' performance on the Playing Test closely corresponds to the level of their technique as perceived by their teachers. The Playing Test is criterion-referenced and not designed to be used as a norm-referenced test. However, the strong positive correlations of the teachers' rank order of their students to the rank order of the scores on the Playing Test suggest that this measure is a valid means of determining undergraduate cello students' technical ability.
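The Spearman rank-order correlation used for these comparisons can be sketched as below. This is the minimal no-ties form; tied Playing Test scores, such as the two 140s at School B in Table 4-6, would need the tie-corrected version.

```python
def spearman_rho(x, y):
    """Spearman rank-order correlation, no-ties formula:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is the rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# School A (Table 4-6): teacher ranks against Playing Test scores.
# The two orderings are identical, so rho = 1.0, matching the reported value.
rho = spearman_rho([1, 2, 3, 4, 5], [76, 102, 136, 152, 156])
```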

Interjudge Reliability of the Playing Test

Two judges were recruited to evaluate five different student performances each of the Playing Test, as described in Chapter Three. Interjudge reliabilities were calculated using Pearson's r. Correlations were as follows: Judge A and the primary investigator (r = 0.92); Judge B and the primary investigator (r = 0.95). These results are presented in Table 4-7. The students observed by Judges A and B represented 33% of the total number of students participating. These data, with their highly significant correlations, appear to confirm the effectiveness of the criteria-specific rating scale used in this study as a means of recording information about specific strengths and weaknesses in a student's playing.

Part Three: The Student Self-Assessment Profile

The Student Self-Assessment Profile (SSAP) was created as another means to gather diagnostic information about students. Many teachers have developed questionnaires to better understand the performance background of their students. The self-assessment used in this study served this function, as well as providing additional information about areas of performance interest and personal goals. In addition, the SSAP allows students to comment on what aspects of technique they feel they need to improve. Twenty-nine of the thirty students participating in this study completed the Student Self-Assessment Profile. The following subheadings represent sections of the Student Self-Assessment Profile.

Repertoire Previously Studied

Students listed many of the standard method and etude collections for the cello: Cossman, Dotzauer, Duport, Feuillard, Franchomme, Piatti, Popper, Sevcik, Starker, and Suzuki. Pieces from the standard literature for cello were also listed. For a teacher, such information shows

the extent and breadth of a new student's experience and may indicate appropriate directions for further study.

How interested are you in each of these areas of performance: solo, chamber, and orchestral?

Table 4-8 lists students' responses to this question. Eighty-three percent of the students stated they either agreed or strongly agreed to having interest in solo and orchestral performance, and ninety-three percent expressed the same for chamber music. Noting responses to this section could be a means for teachers to initiate discussion with students about their plan of study. If a student's greatest interest was in playing chamber music, his teacher might help to facilitate this desire. Knowing that a student's primary goal was to win an orchestral audition would dictate in part the choice of repertoire studied.

Other areas of performance interest?

Students listed the following areas of performing interest: jazz (n = 2), conducting (n = 1), piano accompanying (n = 1), choir (n = 1), improvisation (n = 1), bluegrass (n = 1), praise bands (n = 1), and contemporary performance (n = 2). Teachers provided with this information might choose to direct students to nontraditional sources of study, such as improvisation methods, learning to read chord charts, or playing by ear.

What are your personal goals for studying the cello?

Responses to this question are provided in Table 4-9. Five of the twenty-nine students (17%) listed teaching privately as a goal for study. The second most frequently mentioned goal was orchestral performance (10%). If this study were conducted at the highest-ranking music conservatories in the United States, the researcher suspects that solo performance might be frequently mentioned as well.

What areas of cello technique do you feel you need the most work on?

Answers to this question are presented in Table 4-10. Bow stroke was mentioned by ten students as needing the greatest attention. Nine students discussed the need to work on relaxation as they played, specifically referring to left and right hand, shoulder, and back tension. Many of the techniques assessed in the Playing Test were alluded to, such as spiccato bowing or thumb position. The specificity of many of the areas of technique mentioned may have been due to the students filling out the SSAP after having taken the Playing Test. The difficulty students had with playing certain passages caused them to list these techniques as ones to work on. This appears to be anecdotal evidence that the Playing Test can cause students to be more self-aware.

Summarize your goals in music and what you need to accomplish these goals.

In answering this question, students described their broad musical objectives, often discussing career goals. The goals in music were to be written for six-month, one-, two-, four-, and ten-year intervals, but not all students completed each subcategory. Table 4-11 presents the responses to this section in the students' own words. Many of the goals implied an understanding between the teacher and the student, such as a two-year goal of memorizing a full concerto. Acquiring advanced degrees was a goal for two of the students. One student's six-month goal was to "practice more musically than technically." Without agreement between the teacher and student on such a goal, conflicts could arise: what if the teacher felt the next six months were best spent drilling technique? One student's four-year goal was "To get past the prelims in an orchestral audition." The Student Self-Assessment Profile would help to assure that the teacher was privy to this information. One music major's long-term goal was to "play recreationally, not as a career."
This belies the assumption that every music major is planning on a career in music.

Access to this kind of information could prevent misunderstandings from developing between a teacher and student that result from conflicting goals.

Summary of Results

The following summarizes the results obtained in these analyses:

1. The Written Test was found to be too easy for most undergraduate cellists. Lower scores in the Interval Identification section indicate that some students have difficulty applying their understanding of intervals to the cello.

2. Strong interitem consistency was found for the Playing Test, indicating high reliability for this section of the test.

3. Year in school was a significant predictor of Playing Test scores and combined scores for freshman students.

4. Music performance majors' scores did not differ significantly from scores earned by students in other degree programs.

5. The number of years a student had played the cello was not found to be a significant predictor of the Written or Playing Test scores.

6. Piano experience was found to be a significant predictor of Playing Test scores, and of scores on two sections of the Written Test.

7. Playing Test scores were a significant predictor of how teachers would rank their students in terms of level of technique.

8. The criteria-specific rating scale developed for this study appears to be a highly reliable measurement tool, based on interjudge reliability.

Table 4-1. Summary of Regression Analysis for Year in School as a Predictor of Written, Playing, and Total Test Scores (N = 30)

Score                   B       SE B       t
______________________________________________________
Written Test
  Freshmen (n = 11)    .0069   .0079      .88
  Sophomore (n = 8)    .0060   .0074      .82
  Junior (n = 5)       .0085   .0060    -1.40
  Senior (n = 6)       .0031   .0067    -0.47
Playing Test
  Freshmen             .010    .003      3.30*
  Sophomore            .0058   .0032    -1.83
  Junior               .0032   .0028    -1.16
  Senior               .0014   .0030    -0.46
Total Score
  Freshmen             .009    .0027     3.18*
  Sophomore            .0038   .0029    -1.32
  Junior               .0040   .0024    -1.67
  Senior               .0009   .0027    -0.34
______________________________________________________
Note. Written Test Scores: R2 = .027 Freshmen; R2 = .023 Sophomore; R2 = .065 Junior; R2 = .008 Senior. Playing Test Scores: R2 = .280 Freshmen; R2 = .107 Sophomore; R2 = .046 Junior; R2 = .008 Senior. Total Test Scores: R2 = .265 Freshmen; R2 = .058 Sophomore; R2 = .091 Junior; R2 = .004 Senior.
*p < .05
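The regressions summarized in Tables 4-1 and 4-3 represent categorical predictors (year in school, piano experience) with dummy codes. A minimal sketch of ordinary least squares for a single 0/1 dummy predictor, with invented scores rather than the study's data:

```python
# Hypothetical data: 1 = piano experience, 0 = none, with playing-test scores.
piano = [1, 1, 1, 0, 0, 0]
scores = [92.0, 90.0, 94.0, 78.0, 80.0, 76.0]

n = len(scores)
mx = sum(piano) / n
my = sum(scores) / n

# OLS for one predictor: b = cov(x, y) / var(x).
sxy = sum((x - mx) * (y - my) for x, y in zip(piano, scores))
sxx = sum((x - mx) ** 2 for x in piano)
slope = sxy / sxx            # with a 0/1 dummy: the difference between group means
intercept = my - slope * mx  # the mean of the reference (coded-0) group

# R^2: proportion of score variance accounted for by the dummy predictor.
pred = [intercept + slope * x for x in piano]
ss_res = sum((y - p) ** 2 for y, p in zip(scores, pred))
ss_tot = sum((y - my) ** 2 for y in scores)
r2 = 1 - ss_res / ss_tot
```

With a single dummy predictor, the slope is simply the difference between the two group means, which is why these regressions and the t-tests reported earlier address the same comparison.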

Table 4-2. Years of Study, Frequency, and Test Scores

Years of Study   Frequency   Written Test Mean Score   Playing Test Mean Score
________________________________________________________________________
      5              1                91                      144
      6              1                91                      114
      7              6                83                      108
      8              4                75.5                    101.5
      9              2                93                      126
      9.5            1                95                      142
     10              2                93.5                    140
     11              7                87.71                   141.1
     11.5            1                68                      140
     12              3                81.31                   108.7
     13              1                93                      156
     16              1                87                      104

Table 4-3. Summary of Regression Analysis for Piano Experience as a Predictor of Written, Playing, and Total Test Scores (N = 30)

Test Section              B        SE B       t
_____________________________________________________
Written Test Scores      -2.00     4.21      -.48
Playing Test Scores     -19.20     8.63     -2.23*
Total Combined Score    -21.20    11.74     -1.81
_____________________________________________________
Note. R2 = .008 for Written Test Scores; R2 = .15 for Playing Test Scores; R2 = .10 for Total Test Scores.
*p < .05

Table 4-4. Item Difficulty, Discrimination, and Point-Biserial Correlation for the Written Test

Category                  Item      Item         Item            Point-Biserial
                          Number    Difficulty   Discrimination  Correlation
_____________________________________________________________________________________
Fingerboard Geography       1         .97          .13
Interval Identification     1         .77          .25             0.15
                            2         .87          .38             0.26
                            3         .80          .50             0.37
                            4         .77          .13             0.06
                            5         .77          .25             0.08
                            6         .70          .38             0.06
                            7         .90          .25             0.49
                            8         .83          .50             0.40
Pitch Location              1         .93          .25             0.63
and Fingering               2         .93          .25             0.63
                            3         .90          .25             0.52
                            4         .90          .25             0.49
                            5         .87          .38             0.41
                            6         .93          .25             0.47
                            7         .90          .38             0.57
                            8         .80          .50             0.50
                            9         .97          .13             0.78
                           10         .90          .25             0.63
                           11         .77          .38             0.75
                           12         .87          .38             0.40
                           13         .90          .38             0.63
                           14         .83          .50             0.63
                           15         .93          .13             0.33
                           16         .87          .38             0.63
                           17         .83          .50             0.70
                           18         .83          .50             0.70
                           19         .87          .38             0.71
                           20         .87          .38             0.70
                           21         .90          .38             0.75
                           22         .90          .38             0.70
                           23         .90          .38             0.80
                           24         .90          .38             0.65
                           25         .90          .38             0.71

Table 4-4. (continued)

Category                  Item      Item         Item            Point-Biserial
                          Number    Difficulty   Discrimination  Correlation
_____________________________________________________________________________________
Pitch Location             26         .80          .50             0.71
and Fingering              27         .83          .50             0.73
                           28         .80          .50             0.73
                           29         .83          .50             0.74
                           30         .83          .50             0.74
                           31         .83          .50             0.82
                           32         .80          .50             0.76
Single Position Fingering   1         .97          .13             0.07
                            2         .97          .13             0.07
                            3         .97          .13             0.07
                            4         .97          .13             0.07
                            5        1.0           0.0             N/A
                            6        1.0           0.0             N/A
                            7        1.0           0.0             N/A
                            8        1.0           0.0             N/A
                            9         .97          .13             0.43
                           10         .97          .13             0.43
                           11         .83          .38             0.16
                           12         .80          .38             0.15
                           13         .93          .13             0.23
                           14         .90          .25             0.16
                           15         .97          .13             0.36
                           16         .97          .13             0.36
                           17         .83          .13             0.06
                           18         .83          .25             0.32
                           19         .90          .25             0.40
                           20         .87          .25             0.35
                           21         .93          .13             0.23
                           22         .93          .13             0.23
                           23         .93          .13             0.23
                           24         .93          .13             0.23
                           25         .90          .25             0.23
                           26         .87          .13             0.12

Table 4-4. (concluded)

Category                  Item      Item         Item            Point-Biserial
                          Number    Difficulty   Discrimination  Correlation
_____________________________________________________________________________________
Single Position Fingering  27         .93          .25             0.31
                           28         .87          .25             0.18
                           29         .93          .13             0.23
                           30         .93          .13             0.23
                           31         .97          .13             0.36
                           32         .97          .13             0.36
Bass, Treble, and Tenor
Clef Note Identification    1        1.0           0.0             N/A
                            2         .97          .13             0.46
                            3         .97          .13             0.43
                            4        1.0           0.0             N/A
                            5        1.0           0.0             N/A
                            6         .97          .13             0.02
                            7         .97          .13             0.0
                            8        1.0           0.0            -0.04
                            9         .93          .13             0.08
                           10         .93          .13             0.27
                           11        1.0           0.0            -0.05
                           12         .90          .13             0.01
_____________________________________________________________________________________
Note. Point-biserial correlations were not found for the Fingerboard Geography items, as 97% of the students had perfect scores on this section.

Table 4-5. Mean Scores of Playing Test Items in Rank Order

Item                 Rank Order   Mean Score
____________________________________________________
Détaché                   1          8.47
Slurred Legato            2          8.23
Arpeggios                 3          8.13
Staccato                  4          7.93
Vibrato                   5          7.93
Portato                   6          7.67
Position Changes          7          7.67
Scales                    8          7.60
Arp. Chords               9          7.20
Sautillé                 10          7.13
Thumb Position           11          7.00
Broken Thirds            12          6.80
Martelé                  13          6.67
Double Stops             14          6.40
Spiccato                 15          6.30
Intonation               16          6.20
Pizzicato                17          6.00
____________________________________________________
Note. Ratings ranged from 2 through 10.

Table 4-6. Comparison of Teacher-Ranking to Playing Test-Ranking

           Teacher Ranking   Playing Test Score   Playing Test Ranking    r2
_____________________________________________________________________________________
School A         1                   76                    1             1.0
                 2                  102                    2
                 3                  136                    3
                 4                  152                    4
                 5                  156                    5
School B         1                  124                    1             1.0
                 2                  140                    2
                 3                  140                    3
                 4                  148                    4
                 5                  152                    5
School C         1                  100                    1             0.65
                 2                  142                    4
                 3                  134                    2
                 4                  134                    3
School D         1                   92                    1             0.84
                 2                  116                    2
                 3                  116                    3
                 4                  128                    4
                 5                  146                    6
                 6                  152                    7
                 7                  132                    5
School E         1                   76                    1             0.76
                 2                   86                    3
                 3                  114                    4
                 4                  120                    5
                 5                   82                    2
                 6                  144                    7
                 7                  140                    6

Table 4-7. Comparison of Researcher's and Independent Judges' Scoring of Student Performances of the Playing Test

Student No.   Primary Investigator   Judge A
__________________________________________________
 1                  152                162
 2                  136                142
 3                  156                158
 4                  144                142
 5                  134                136
__________________________________________________
M                   144.4              148
SD                    9.63              11.31
r = 0.92
__________________________________________________

Student No.   Primary Investigator   Judge B
__________________________________________________
 6                  146                138
 7                  152                152
 8                  128                104
 9                  116                 98
10                  152                134
__________________________________________________
M                   138.8              125.2
SD                   16.09               8.67
r = 0.95
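The interjudge correlations in this table follow from the standard Pearson product-moment formula; the sketch below applies it to the Judge A panel above.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Table 4-7, Judge A panel: primary investigator vs. Judge A.
investigator = [152, 136, 156, 144, 134]
judge_a = [162, 142, 158, 142, 136]
r = pearson_r(investigator, judge_a)  # rounds to the reported r = .92
```

Running the same function on the Judge B panel reproduces the reported r = .95.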

Table 4-8. Numbers of Students Expressing Interest in Solo, Chamber, and Orchestral Performance (N = 29)

Category         Strongly Agree   Agree   Disagree   Strongly Disagree
______________________________________________________________________________
Solo                   10           14        5               0
Chamber Music          20            7        2               0
Orchestral             16            8        4               1
______________________________________________________________________________
Note. Students could indicate interest in multiple categories, resulting in totals exceeding the number of students completing the form (N = 29).

Table 4-9. Personal Goals for Studying the Cello

Specified Goal                                               Frequency Mentioned (N = 29)
________________________________________________________________________
Teaching privately                                                    5
Orchestral performance                                                3
Chamber music performance                                             2
Expand repertoire                                                     2
Lifelong hobby, personal enjoyment                                    2
College-level teaching                                                1
Obtain advanced degrees with the goal of college teaching             1
Improve concentration                                                 1
Become a fluid improviser                                             1
Work as a studio musician                                             1
Ability to convey interpretation of music to others                   1


Table 4-10. Student Perception of Priorities for Technical Study

Technique                                                        Frequency Mentioned
Bow stroke                                                              10
Relaxation, including right and left hand, shoulders, and back           9
Vibrato                                                                  4
Vibrato in upper positions                                               2
Thumb position                                                           3
Musicality                                                               3
Sound production/tone                                                    2
Double stops                                                             2
Sautillé                                                                 2
Sight-reading                                                            1
Reading in different clefs                                               1
Rhythm                                                                   1
Coordination between right and left hand                                 1
Proper employment of left hand position and whole arm movement           1
Extensions                                                               1
Shifting                                                                 1
Spiccato                                                                 1


Table 4-11. Goals in Music and Means of Accomplishing Them

Six Months:
  Catch up to my peers.
  To shift easily.
  Work strictly on technique, not worrying about pieces or recitals.
  Practice more musically than technically.
  Have lessons with other teachers.
  Improve jazz vocabulary.

One Year:
  Keep my scholarships.
  To have perfect intonation.
  Become an effective music educator (lifelong).
  Resolve all tension issues; slow, loose practice, making it a habit.
  Increase in difficulty of music.
  Work on awareness of bowing choices.
  Practice.

Two Years:
  To be able to support myself solely through playing and teaching.
  I hope to memorize a full concerto and feel comfortable performing.
  Much practice; memorization and performance practice will be needed.
  Graduate, and find a graduate school with a fabulous teacher.

Four Years:
  To get past the prelims in an orchestral audition.
  To graduate, get a job as a music therapist, and join the community of a professional orchestra.
  Play recreationally, not as a career.

Ten Years:
  To be a guest artist at a major music festival.
  Be teaching at a university with a Ph.D. in music.
  Be employed in a high school as a music teacher, but still make time to perform and possibly give private lessons.
  Able to teach other cellists.
  Gigging professionally.
  Be a financially stable musician.


CHAPTER 5
DISCUSSION AND CONCLUSIONS

This chapter presents a discussion of the results of administering the Diagnostic Test of Cello Technique. Following a review of the purposes and procedures of this study, the findings are addressed in light of (a) the research questions posed, (b) a comparison of results with similar studies, and (c) implications for string education. The chapter closes with conclusions and recommended directions for future research.

Overview of the Study

The purpose of this study was to design, validate, and administer a diagnostic test of cello technique for use with college-level students. Written and playing tests were designed and pilot tested, and a validity study was undertaken. Thirty students from six different universities in the southeastern United States were recruited to participate in this research. Each student completed a written test, a playing test, and a self-assessment profile. A criterion-based rating scale was developed to evaluate the Playing Test performances. Two university-level teachers were recruited to judge ten videotaped performances of students taking the Playing Test. Evaluations from those judges were correlated with the primary researcher's to determine interjudge reliability.

Review of Results

The independent variables in this study were (a) year in school, (b) major/minor distinction, (c) years of cello study, and (d) piano experience. Freshman classification emerged as a significant predictor of Playing Test scores (p = .003) and total scores (p = .004). No effect of major/minor distinction was found for the Written Test (R2 = .001). Results were nearly significant for the Playing Test (R2 = .104) and not significant for the combined Written and Playing Tests (R2 = .072). Years of cello study were not significant predictors of test results.


Piano experience was shown to have a significant effect on Playing Test scores (p = .034). Students with piano experience scored 14% higher on the Written Test and 7% higher on the Playing Test than those without piano experience. The reliability of the Playing Test was high, as shown by coefficient alpha (rtt = 0.92). Correlation coefficients obtained between the primary researcher and the two independent judges were positive and strong (Judge A, r = 0.92; Judge B, r = 0.95), suggesting that the criteria-specific rating scale designed for this study was effective.

Observations from the Results of Administering the Diagnostic Test of Cello Technique

The Written Test

Future versions of the Written Test designed for college students should eliminate the Fingerboard Geography section, as only one student made errors in filling out this section. This section should be included in a high school version of the test; the likelihood is that not all students at this level would be clear about the location of pitches on the fingerboard.

The Interval Identification section as a whole had the highest average difficulty level of any section of the Written Test based on item analysis. In this section, item six (a major sixth across two strings) had the highest difficulty level of any item on the test (.70). This item, however, did not discriminate well between high-scoring and low-scoring students (.38). On this item students most likely erred by not keeping in mind that on the cello, the interval formed by two notes lying directly across from each other on adjacent strings is always a perfect fifth. Adding a whole step to a perfect fifth results in the interval of a major sixth. This is an example of something universally known by undergraduate cello students but not necessarily visualized by them on the fingerboard.
This suggests that some students were either unclear about interval designations or do not think intervallically when playing the cello. It is the researcher's


opinion that an awareness of intervals while playing represents a higher order of thinking than simply playing note-by-note. Additional research is needed to determine to what extent intervallic thinking while playing the cello is a distinguishing characteristic of advanced performers.

In the Interval Identification section of the Written Test, the mean score for those with piano experience was 7.07 out of 8 possible points, as compared with a mean of 5.73 for those without experience. Piano experience was found to be a significant predictor for this item (p = .002). Students who play piano are able to identify intervals more easily on a representation of a cello fingerboard than those without piano experience. In the Single-Position Fingering section, piano experience again was found to be a significant predictor of a student's score (p = .002). This suggests that students with piano experience may think more clearly about vertical pitch relationships. String instrument teachers would likely concur, observing that their students who play piano tend to (1) be better sight readers, (2) have a clearer sense of pitch and intervals, and (3) have better rhythmic accuracy. Additional evidence of the positive effect of piano experience on cello performance would be gained through studies that compared students' length of time studying both instruments to their performance on the Playing Test.

The Single Position Fingering section may be unclear in its directions. Several students thought they were being asked for fingerings that would allow the notes to be played as a simultaneous chord, which wasn't possible for some items. The final section (Note Identification in Three Clefs) had several very low point biserial correlations (0.07). Errors in this section were almost certainly caused by carelessness and did not reflect a student's ability in note reading.
The single exception was a student who missed all the tenor clef items but got all the other note identification items right. Complete fluency in note reading is an essential


prerequisite for sight-reading ability. As a result, this section should be included in future versions of this test.

The Written Test needs to be revised for undergraduate students in terms of difficulty level. A greater range of scores would likely result if the present version of the test were administered to high school students. In future versions, using actual passages from the cello repertoire to evaluate a student's understanding of intervals, fingering, and fingerboard geography would be in keeping with the testing philosophy of using situated cognition.

The Playing Test

Left Hand Technique (nine items) and Basic Bowing Strokes (eight items) were evenly dispersed within the range of lowest to highest mean scores (Table 4-5). The choice in this study to divide technique into left hand techniques and bowing techniques does not reflect how integrated these two areas are in reality. This study's design did not isolate bow techniques from the musical context in which they are found. If such a study were conducted, it might reveal that some students excel in bowing techniques and others in left hand technique. These two areas of technique are so intermeshed that it would be difficult to isolate them; bowing serves literally to amplify what the left hand does. Development of bowing skill through practice on open strings, without using the left hand, is limited and is usually, though not always, confined to initial lessons.

The Playing Test's mean scores revealed that students scored highest on the détaché bowing stroke (M = 8.46), followed by legato bowing (M = 8.33) and arpeggios (M = 8.13). Détaché is the most commonly used bow stroke; legato playing is also ubiquitous. One might have expected to find Scales, Broken Thirds, and Arpeggios grouped together in the same difficulty category. These three areas of technique are considered the core left hand


techniques; indeed, most music is comprised of fragments of scales, arpeggios, broken thirds, and sequential passages. The excerpts used in the Playing Test to evaluate scales may have been more challenging to perform than the arpeggios; this may partially explain why scales were not among the easier items. Another explanation may be that scales are the first item on the test. Initial nervousness or stage fright may have affected this item more than subsequent ones. The researcher noted that most students seemed to become free of nervousness shortly after commencing the test, but these initial jitters may have had a deleterious effect on their performance of the first item.

In the Pilot Study (Appendix A), broken thirds were the fourth most difficult item. It was conjectured that broken thirds are under-assigned by teachers and, as a result, not practiced much. In the present study broken thirds again were found to be difficult for students to demonstrate. The ability to play (and especially to sight read) broken thirds requires two skills: (1) the capacity to quickly discern whether a written third is major or minor, and (2) an accurate sense of interval distances on a given string. The correlation of students' scores on broken thirds to their total Playing Test scores was strong (r = .81), suggesting that students' ability to perform well in this area may be a good indicator of their overall level of technique.

The difficulty of demonstrating a given technique through excerpts varies. Spiccato bowing, the third lowest score (M = 6.3), requires a succession of separately bowed notes played rapidly enough that the bow bounces off the string almost of its own accord. This is not a technique that is easily demonstrated unless the player is very familiar with the notes.
Sautillé bowing, another bounced-bow stroke (M = 7.13), appears to be slightly easier than spiccato. Though sautillé bowing requires a faster bow speed than spiccato, the repetition of pitches meant the speed of fingering changes was actually slower for these passages, making them easier to play.


The relatively low score for martelé bowing is likely due to a lack of understanding as to what constitutes this bow stroke. The two excerpts used for this item were moderately easy to play. A large number of students, however, did not demonstrate the heavy, accented articulation and stopping of the bow on the string that characterize this stroke. While many method books include a description of martelé bowing, students are unlikely to have a clear grasp of how to execute this bowing unless it is demonstrated by a teacher.

The item with the lowest score was pizzicato (M = 6.06). The excerpts chosen featured three separate techniques: (a) arpeggiated chords using the thumb (Elgar), (b) notes with a strong vibrant tone (Brahms), and (c) a clear ringing sound in the upper register (Kabalevsky). These excerpts were not easy to sight read for students who were ill-prepared. This was the final section in a series of excerpts requiring great concentration; mental and/or physical fatigue may have been a factor. It is also possible that the study of pizzicato is neglected in lessons.

Intonation was the second lowest score (M = 6.20). Judge B assigned the only perfect score given to a student. It is axiomatic that string players must be constantly vigilant about playing in tune. Not allowing students to become tolerant of playing out of tune is one of the essential roles of the teacher. Pablo Casals' words on this subject are timeless: "Intonation," Casals told a student, "is a question of conscience. You hear when a note is false the same way you feel when you do something wrong in life. We must not continue to do the wrong thing" (Blume, 1977, p. 102). Five students (15%) mentioned intonation when asked, "What areas of cello technique do you feel you need the most work on?" (see Chapter 4, p. 63). From this study it appears the Playing Test may help make students more aware of the importance of work on intonation.


The Student Self-Assessment Profile

The premise for designing the Student Self-Assessment Profile is that better information about a student's background, interests, and goals for study can result in more effective teaching. Its value as a diagnostic tool is in revealing a student's years of study, previous repertoire studied, and playing experience. The emphasis on identifying personal goals for studying the cello, as well as overall goals in music, opens a window into a student's self-awareness. Communication of these goals to a teacher can affect the course of study. Allowing students' goals to influence their education may result in their feeling more invested in the learning process. The outcome may be more effective, goal-directed practice. Students are more likely to be motivated by goals that they perceive as being self-initiated. Awareness of these goals is not necessarily derived from conventional teaching methods; it comes from a dialogue between the teacher and student. The Student Self-Assessment Profile can act as a catalyst for such a dialogue.

The personal goal for studying the cello most often mentioned was teaching privately (Table 4-9). When a teacher knows that a student wants to teach the cello as a vocation, the teacher's role becomes more that of a mentor, exemplifying for the student the art of teaching. A greater role for discussion during the lesson may ensue as the need for various approaches to problems becomes apparent. Perhaps the most important thing a teacher can provide a student aspiring to teach is to help them become reflective about their own playing: asking themselves why they do something a certain way. Questions that ask why rather than how take precedence. Two students mentioned college-level teaching as one of their personal goals. Providing student-teaching opportunities for these students, as well as opportunities to observe experienced teachers at work, would be invaluable.


Goals such as orchestral or chamber music performance could have a direct effect on the program of study if the teacher agreed that these objectives were appropriate and attainable. A student who has expressed a sincere goal of playing professionally in a major orchestra deserves to know both the playing standards required and the fierce competition involved. A serious attempt to address some of the personal goals mentioned here would challenge even the most veteran of teachers. How do you help a student improve concentration? Become a fluid improviser? Convey their interpretation of music to others? Addressing these goals as a teacher means taking risks, varying one's approach, and being flexible.

Over one third of the students who filled out the Student Self-Assessment Profile listed bow stroke as a priority for technical study (Table 4-10). They are in good company; string musicians agree that true artistry lies in a player's mastery of the bow. Musical issues such as phrasing, dynamics, and timing are the bow's domain. A cellist's approach to bowing largely determines their tone and articulation. These qualities, along with vibrato, are the distinguishing characteristics of an individual cellist's sound.

After bow stroke, the most commonly noted area of technique was relaxation, or lowering body tension. This is an aspect of technique that musicians have in common with athletes. Gordon Epperson summarized the observations of many teachers: "What is the chief impediment to beauty of sound, secure intonation, and technical dexterity? I should answer, without hesitation, excess tension ... Sometimes tension alone is blamed; but surely, we can't make a move without some degree of tension. It's the excess we must watch out for" (Epperson, 2004, p. 8). Excessive tension may not always be readily apparent; teachers may not realize students are struggling with this area unless the issue is raised.
Students who mention excessive tension while playing as a major concern should be directed to a specialist in Alexander Technique.


Work on vibrato, either in general or in upper positions, was mentioned by six students. Despite sounding like an oxymoron, it is true that an effortless-sounding vibrato is very difficult to make. Dorothy Delay, of the Juilliard School of Music, assigned the first hour of practice to be spent on articulation, shifting, and vibrato exercises for the left hand, and various bow strokes for the right (Sand, 2000). Students who express a desire to develop their vibrato should be guided with appropriate exercises, etudes, and solos.

Other areas of technique are far more easily addressed. A student who mentions sight-reading or reading in different clefs can easily be directed to materials for study. Applying oneself to the exercises in Rhythmic Training by Robert Starer will benefit any student who felt deficient in rhythm (Starer, 1969). There are materials to address virtually every technical need, as long as the need is made apparent to the teacher.

The final question of the SSAP asks, "Summarize your goals in music and what you need to do to accomplish these goals." The words with underlined emphasis were added based on input from the Validity Study (Appendix B). This phrase is meant to suggest a student's personal responsibility to follow through with their stated goals. Table 4-11 is a transcription of student responses to this question in their own words.

Six-month goals are short term and reflect a student's semester-long objectives. "Work strictly on technique, not worrying about pieces or recitals," is one example. Some one-year goals seem naive: "To have perfect intonation." Goals are the driving forces behind one's outward acts; playing with perfect intonation may not be attainable, but that doesn't mean it isn't a valid aspiration. One student has shown they understand the need to make some aspects of playing virtually automatic through repetition: "Resolve all tension issues; slow, loose practice, making it a habit."
Music and athletics have in common the need for drilling desired actions.


As Aristotle noted, "We are what we repeatedly do. Excellence, then, is not an act, but a habit" (Aristotle, trans. 1967).

Goal setting is most effective when it is measurable, as with a student's two-year goal of memorizing a full concerto. Academic ambitions, such as pursuing graduate studies, are important to share with one's teacher and can dictate a student's course of study. Teachers may occasionally be surprised in reviewing their students' long-term goals: one performance major stated that her goal as a cellist was to play recreationally, not as a career. However, most four-year and ten-year goals were career-oriented. There is value in having students express these goals concretely; through this activity, students visualize doing something they are presently not able to do. Goal setting requires a leap of faith.

Discussion of Research Questions

In this section the original research questions are reexamined in light of the results. These questions are restated below, with discussion following.

To what extent can a test of cello playing measure a student's technique?

The extent to which the Playing Test is able to measure an individual cello student's technique depends on the way a teacher uses it. If students are strongly encouraged by their teacher to practice the excerpts and are expected to play from them in the lesson, testing error resulting from unfamiliarity with the music and sight-reading mistakes can be minimized. The results can come much closer to a true diagnosis of a student's technical level. The comparison of teacher-rankings to Playing Test rankings (Table 4-6) revealed a high correlation and tended to confirm the test's validity. It is possible that, in some cases, ranking differences occurred due to a teacher's bias based on his or her estimation of a student's potential. As one teacher noted in discussing a student's rating: It pains me to make this assessment, as I confirm that (student)


has far underperformed both her stated aspirations and potential the last several years (personal correspondence, May 2007). One of the primary purposes of this test was to provide a tool that allows greater diagnostic objectivity, thereby providing a counterbalance to the subjective impressions that a teacher receives about each student.

Each technique is represented by several excerpts of increasing difficulty. On those items using an additive scale, the listener can find a descriptive statement that corresponds to the performance level a given student has demonstrated. In thirty minutes of concentrated listening, the teacher/evaluator is able to come to definite conclusions about a student's ability to demonstrate seventeen essential areas of technique. As the Playing Test is made up of excerpts from the standard repertoire for cellists, the teacher is given insight into what pieces are appropriate for study.

To what extent can a criteria-specific rating scale provide indications of specific strengths and weaknesses in a student's playing?

Interjudge reliability was positive and strong (Judge A, r = 0.92; Judge B, r = 0.95), suggesting that the criteria-specific rating scale designed for this study was an effective means of directing the evaluator to listen and watch for specific aspects of technique. A factor analysis of the descriptive statements generated for the Playing Test evaluation form is recommended. Statements that were found to have low factor loadings could be replaced, and the reliability of this measure could be increased. One example where improvement might be made is in the criteria choices provided for Vibrato. There were students who did not really match any of the descriptors provided for this item; their vibrato was neither tense nor too fast but, on the contrary, unfocused and too slow.


Can a written test demonstrate a student's understanding of fingerboard geography and the ability to apply music theory to the cello?

The answer to this research question is a provisional yes, noting that the results of such a test do not necessarily predict how well a student plays. Additional research is needed to determine to what degree intervallic understanding or fingerboard visualization is part of the practical core knowledge of an advanced cellist.

While scores on the Written Test ranged from 62% to 100% correctly answered, the difficulty level for all items was found to be low. However, it is good that the Fingerboard Geography section was filled out flawlessly by 29 out of the 30 students. Any real confusion here would be a signal that something was seriously lacking in a student's understanding of half steps, the chromatic scale, or the relationship of strings tuned a fifth apart. The Written Test may be seen as a kind of barrier examination; if students score below 90%, review of these content domains is indicated. Item difficulty could be increased by more challenging interval identification and pitch location items.

Perhaps a means to achieve more authentic assessment of fingering skills would be to have students provide fingerings for passages from actual orchestral, chamber, or solo music written for the cello. The challenge in this would be the number of acceptable choices. Nevertheless, a teacher might gain more insight about a student's situated cognition, that is, the thinking process at the cello, by using this approach. Ensuing results could become the basis for discussion about why one fingering might be better than another.

The point biserial correlations from the Interval Identification section indicate that some students, who otherwise had high scores, were less successful on this section.
However, seven of the nine students who made perfect scores in this section were also the top scorers on the whole test. Of the nine students who correctly identified all the intervals, eight had piano experience.
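The point biserial correlation used in this item analysis is, by definition, a Pearson correlation between a dichotomous item score (0 = missed, 1 = correct) and each examinee's total test score. A minimal sketch with hypothetical response data (the item and total scores below are invented for illustration, not taken from this study):

```python
from math import sqrt

def point_biserial(item, totals):
    """Point-biserial correlation: Pearson r between a 0/1 item
    vector and the examinees' total scores."""
    n = len(item)
    mi, mt = sum(item) / n, sum(totals) / n
    cov = sum((a - mi) * (b - mt) for a, b in zip(item, totals))
    return cov / sqrt(sum((a - mi) ** 2 for a in item) *
                      sum((b - mt) ** 2 for b in totals))

# Hypothetical data: 1 = answered the item correctly, 0 = missed it
item_scores  = [1, 1, 1, 0, 0]
total_scores = [9, 8, 7, 6, 5]
print(round(point_biserial(item_scores, total_scores), 2))  # 0.87
```

A high coefficient means the item discriminates well (high scorers tend to answer it correctly); a value near zero, like the 0.07 noted for the Note Identification items, means success on the item is nearly unrelated to overall ability.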


Piano experience emerged as a significant effect on students' scores on the Interval Identification section through regression analysis (p = .002). It is suspected that discussions of intervals rarely occur in the teaching of string instruments. A student's understanding of intervals derived from music theory classes may not automatically transfer to the cello fingerboard and cello music.

The use of fingerboard representations to test interval understanding may have favored visual learners. This test does not extend beyond mere interval identification to the more important skill of seeing a written interval and being able to imagine how it will sound. This skill, traditionally tested in sight-singing classes, is very valuable to instrumentalists but is often underdeveloped. Future versions of the test might include having students play passages based on a series of intervals rather than given pitches.

Students' Written Test scores do not have a strong correlation to their Playing Test scores (r2 = 0.16). The Written Test may measure a theoretical understanding that, while valuable, does not directly influence a student's demonstration of the techniques found in the Playing Test. A comparison of students' scores on the Written Test and a sight reading test such as the Farnum String Scale (Farnum, 1969) might be found to have a higher correlation. Pitch Location and Fingering, as well as the Single Position Fingering section, require students to demonstrate a skill that is required for effective sight reading, namely, coming up with efficient fingerings.

Additional research is needed to explore to what extent an understanding of fingerboard geography and music theory, as applied to the cello, affects a student's playing. It can be hypothesized that there is a cognitive skill set that accompanies the psychomotor skills of string playing.
A better understanding of the kind of situated cognition required to think well on a string instrument would be valuable to teachers and students.


Observations on the Playing Test from Participating Teachers

After the research for this study was complete, participating teachers were asked to comment on the value of the test as a diagnostic tool. In one particular case, a teacher had his students play from the Playing Test during lessons at the beginning of the semester. He comments on the beneficial aspects of using the excerpts within his studio:

In terms of using the playing test as a studio project, it was helpful in several ways. First, it was great to have a community project that I could get everyone involved in working on. Secondly, it was useful to have excerpts that were shorter than any etude I might assign (I do sometimes assign etude excerpts, however) but focused on a small sub-set of technical problems. For some students, certain excerpts were a lot harder than others (though they all struggled on the double-stop section of the Dvořák concerto!), which meant it was also a process of self-discovery. Finally, in some cases I later assigned repertoire included in the excerpts, and students were able to build upon the work they'd already done, learning some of the trickier parts. (Personal communication, May 2, 2007)

The reference to self-discovery corroborates evidence gathered through the Student Self-Assessment Profile (SSAP) that the Playing Test can result in greater student self-awareness of their playing. The number of comments found in the SSAP referring back to techniques encountered in the Playing Test suggests that the test can indeed make students more self-aware of their strengths and weaknesses. That the test could influence the choice of repertoire assigned to students was also demonstrated. The positive value the test had in uniting the studio in a community project was unexpected. If students worked on this common repertoire and played it for each other in cello class, the test could function as a means for members of a studio to connect and learn from each other.
The completeness of the Playing Test's content and its capacity to quickly assess a student's skill level was noted by another teacher:

I found the test to be a very thorough and comprehensive survey of all of the basic issues in cello technique, using excerpts drawn mostly from the standard repertoire, so that at least some of them should already be familiar to any cello student. By asking an


intermediate- to advanced-level student to play through these excerpts (or even just one or two excerpts under each technical element), with or without prior preparation, the teacher should be able to quickly (in about thirty minutes or less) identify the student's strengths and weaknesses in any of the essential aspects of cello technique. (Personal communication, May 23, 2007)

Another participating teacher confirmed the diagnostic worth of the test and its usefulness in setting goals:

I feel the diagnostic test designed by Tim Mutschlecner is a valuable tool for evaluating students and charting their course of study. Students come to a teacher's studio with such a wide diversity of skill and backgrounds that any aid in assessing their abilities is welcome. Thank you for your original and worthwhile test. (Personal communication, May 10, 2007)

This teacher addresses what the test results have shown: students enter college with a wide range of experience and preexisting abilities. One of the student participants, a freshman, scored higher on the Playing Test than five out of six seniors. This exemplifies why the test has questionable value as a norm-referenced measure. When ranking students, one teacher observed that comparing students was like comparing apples and oranges. The Playing Test provides a set of criteria that can supplement a teacher's performance standards and expectations.

Comparative Findings

The Farnum String Scale

When discussing the Farnum String Scale (FSS) in Chapter Two, it was observed that the test requires the student to play a series of musical examples that increase in difficulty. This approach was adopted in the Diagnostic Test of Cello Technique (DTCT). Unlike the FSS, musical examples were taken directly from actual music written for the cello.
The rationale for this was that using real music increased the test's capacity for authentic assessment: students would be playing the actual passages where the techniques in question would be found. The downside to this was the potential for distracters, aspects of the excerpts that would mask a student's real ability with a given technique. In some cases, for example the double-stop excerpt from the Dvořák concerto, other challenges in playing the passage may have adversely affected a student's ability to demonstrate the technique. However, after administering the test and receiving positive feedback from students as well as teachers, it is felt that the benefits of using real music far outweigh the disadvantages. Students liked the fact that they were playing from standard works for cello, ones that they would quite possibly study someday if they had not already.

This illustrates a weakness of the DTCT if it is used normatively. Unlike the FSS passages, which would be unfamiliar to all test takers, students approach the DTCT with varying degrees of familiarity with the excerpts. It would be unfair and ill-advised to use this test as a means to compare students among themselves or to assign grades. Each student's performance on the test must be judged solely on the criteria defined in the evaluation form.

One university professor declined to have his students participate in this study because the bowings and fingerings were not always the ones that he taught. Although he was alone in this objection, it does demonstrate a dilemma that this kind of test design faces: if the test-maker provides ample fingerings and bowings, there will be students who have learned these passages differently and will be thrown off; if few or none are provided, it will create much more work for the average student to play these excerpts. The best compromise may be to seek the bowings and fingerings that are most commonly used, while instructing students that they are free to develop their own choices.

Zdzinski and Barnes

The design of this study owes much to the string performance rating scale of Zdzinski and Barnes (2002). The success they found in using a criteria-specific rating scale was validated in this research.
High interjudge reliability correlations (Judge A r = .92, Judge B r = .95) indicate that drawing a judge's attention to specific aspects of the playing is an effective way to increase consistency in evaluating music performances. Additive rating scales, as used by Saunders and Holahan (1997), eliminate the use of unspecific numerical ratings such as those commonly used in Likert scales. By requiring a judge to listen for specific evaluative criteria, rather than trusting in general impressions of a music performance, reliability is increased.

Conclusions

The following conclusions can be drawn from the results of this study.

1. Results from the Interval Identification section of the Written Test indicate that not all students recognize intervals confidently on the cello.

2. The excerpts used in the Playing Test are a valid and reliable way to measure an undergraduate cellist's technique.

3. Piano experience improves how well students perform on the Playing Test.

4. The Playing Test is a good predictor of teacher-rankings of their students in terms of technique.

5. The criteria-specific rating scale used in this study is a reliable instrument for measuring a student's technique.

6. A student's year in school, degree program, and years of cello study are not strong indicators of playing ability.

Recommendations for future research in the area of string instrument teaching and assessment are:

1. A high school version of this test should be developed for use in diagnostic evaluation and teaching.


2. This test can be used as a model for violin, viola, and bass diagnostic tests of technique.

3. Future studies should explore the relationship of theoretical knowledge and performance ability on the cello.

As testing increasingly becomes a major focal point in discussions on improving education, questions regarding the value and purpose of assessment will continue to be raised. Diagnostic evaluation, because of its capacity to inform teaching, is an important component of music education, including applied music. Tools like the Diagnostic Test of Cello Technique help clarify for both teachers and students what needs to be learned. Along with existing approaches to evaluation, music educators will continue to seek better objective means to assess musical behavior. Normative assessment has limited value in the arts; students come from such diverse backgrounds and experiences that their work must be judged by established criteria, not by comparison. The effectiveness of instrumental teaching depends on how clearly performance objectives are communicated to the student. Well-defined performance criteria result in clear, objective goals. In music, as in life, when the target is clear, it is easier to hit the mark.
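The interjudge reliability reported in this chapter comes from correlating each judge's independent Playing Test ratings with the researcher's, using Pearson's r. As a minimal sketch of that calculation (the score lists below are invented for illustration, not data from the study):

```python
# Sketch of the interjudge reliability calculation (Pearson's r).
# The study correlated each judge's Playing Test totals with the
# researcher's evaluations; the numbers here are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

researcher = [88, 72, 95, 60, 81, 77]   # hypothetical Playing Test totals
judge_a    = [85, 70, 93, 64, 80, 75]   # hypothetical independent ratings

print(round(pearson_r(researcher, judge_a), 2))
```

A correlation near 1.0 indicates that the two raters rank and score students almost identically, which is what the criteria-specific rating scale is designed to encourage.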


APPENDIX A
PILOT STUDY

A pilot study was carried out (Mutschlecner, 2004) which provided indications of ways to improve an initial form of the Diagnostic Test of Cello Technique. Five undergraduate music majors studying at a school of music located in the southeastern region of the United States volunteered to participate in the pilot study. Four of the five students were cello performance majors; one was a music education major. These students were met with individually at the school of music in a studio space reserved for this use.

The students were first given the Self-Assessment Profile to fill out. Following this, students were given the Written Examination, which took between ten and fifteen minutes to complete. The Written Examination used in the pilot study was shorter than the one developed for the present study. It included a fingerboard chart, horizontal and linear (on one string) interval identification, note identification in three clefs, and single-position fingering exercises.

In the pilot study students were not given the Playing Examination ahead of time but were required, essentially, to sight-read the excerpts. However, students were advised that this was not a sight-reading test per se, but rather a test to assess their ability to demonstrate specific technical skills and bowings. The students were encouraged to take some time to visually familiarize themselves with the excerpts, and were told they could repeat an excerpt if they felt they could play it better a second time, an option chosen only twice. The students took between thirty and forty-five minutes to complete the playing portion of the test. The pilot study's version of the Playing Examination was shorter than that of the present study, measuring fewer categories of left hand and bowing technique and using fewer excerpts for each technique.


Results of the Written Examination showed that the students had no difficulty with the questions asked; what errors there were amounted to careless mistakes. This suggests that the Written Examination did not discriminate well for cello students at this level. These results led the researcher to increase the difficulty level for the present study.

The rating instrument used for the Playing Examination was a five-point Likert scale which included brief descriptions of what each performance level represented. Student performances on the Playing Examination ranged between 74.7% and 93.3% of a perfect score. The student with the weakest score was a music education major. Students in general did slightly better in the Basic Bowing Strokes section of the exam than in the Left Hand Technique section (91% compared to 86%). This was not surprising: the musical excerpts used to demonstrate left hand technique were of necessity more difficult, and less easy to sight-read.

The lowest combined score was for the portato bowing. This was defined in the Playing Examination as: "A series of broad strokes played in one bow with a smooth, slightly separated sound between each note. The bow does not stop as in the slurred staccato. Each note is to be clearly enunciated with a slight pressure or nudge from the index finger and upper arm." Despite this extended definition, students were unable to consistently demonstrate this bowing. The evidence suggested that this stroke is not being taught or discussed to the same extent as other bowings.

The next three lowest combined scores after portato bowing were for position changes, string crossings, and broken thirds. Well-performed position changes and string crossings may be part of the identifying characteristics of an advanced player. The researcher suspects that broken thirds are not practiced much and not emphasized by teachers, thus explaining the lower scores in this area. Results from the Playing Examination indicate the need to increase the difficulty level.

The results of the Student Self-Assessment Profile included the following responses:

How interested are you in each of these areas of performance?

I am interested in solo performance.
1 Strongly agree, 1 Agree, 3 Disagree, 0 Strongly disagree

I am interested in chamber music performance.
4 Strongly agree, 1 Agree, 0 Disagree, 0 Strongly disagree

I am interested in orchestral performance.
3 Strongly agree, 2 Agree, 0 Disagree, 0 Strongly disagree

What was most unexpected was the number of students who chose "disagree" for the statement, "I am interested in solo performance." One would have expected performance majors to at least agree with this statement, if not strongly agree. They may have been influenced by the choice of the word "performance," and were thinking about whether they enjoyed the experience of solo playing, which, by connotation, meant auditions, juries, and degree recitals. These students may have been reading "solo" as meaning "solo career," and responded accordingly.

In the Student Self-Assessment Profile, students responded to the question, "What are your personal goals for studying the cello?" in a variety of ways, such as:

(a) Would like to have a chamber group and coach chamber groups.
(b) To play anything that is set before me; I don't want to have limits in terms of technique. To be able to convey to the audience what I feel when I play.
(c) Perfect intonation before I graduate; attempt to win the concerto competition.
(d) To get an orchestra gig, have a quartet/quintet, and teach students on the side.
(e) I want to be able to use the cello in all sorts of ways including orchestral, chamber, rock & roll, and studio recording.

These answers are very specific and focused. A teacher, informed about these goals, could modify teaching to address some of these goals.
For example, students who have expressed an interest in teaching would find discussions of how one might teach a particular skill very valuable. If a student expresses the desire to be able to play "anything set before me," they would be likely to respond enthusiastically to a rapid, intense survey of a wide variety of cello literature. For the student who specifically mentions perfecting intonation as a goal, there are studies and approaches that would be recommended.

The question, "What areas of technique do you feel you need the most work on?" elicited even more specific responses, such as shifting, general knowledge of higher positions, fluid bow arm, relaxing while playing, exploring musical phrasing, etc. These responses help give the teacher a window into the student's self-awareness. They could become excellent starting points for examining technique and would go far in helping technical study be goal-directed rather than a mechanical process.

The final section of the Student Self-Assessment Profile had the students summarize their goals for six-month, one-, two-, four-, and ten-year periods. Responses showed students had clear ideas about what they wanted to do after school, such as orchestral auditions or graduate school. One revision made for the present study was to ask students what they needed to do to accomplish their goals. A personal commitment to the plan of study is essential for ensuring the student's motivation to accomplish the goals formulated by both the teacher and the student. For example, if a student seriously wants to compete for an orchestral job, preparation must begin long before the position opening is announced, through study of orchestral excerpts, a concerto, and the Suites for Unaccompanied Cello by J.S. Bach. It is incumbent upon the teacher to discuss these kinds of issues with students who express ambitions to play professionally in an orchestra.


APPENDIX B
VALIDITY STUDY

A validity study was conducted following the pilot study to determine to what extent teachers felt this test measured a student's technique (Mutschlecner, 2005). Cello teachers (N = 9) on the college and college preparatory level agreed to participate in this validity study by reading all sections of the diagnostic test and then responding to questions in an evaluation form (Appendix C).

In answer to the question, "To what extent does this test measure a student's technique?" responses ranged from "Very extensively" and "Rather completely" to "The written part tests knowledge, not technique." Fifty-six percent of the teachers felt the test measured a student's technique in a significant way. Sixty-seven percent of the respondents suggested that sight-reading difficulties might mask or obscure an accurate demonstration of a student's technical ability. As one teacher said, playing the excerpts "shows if they have worked on this repertoire. If they are reading it, it shows their reading ability." Two teachers came up with the same solution: provide the Playing Test to students early enough for them to develop familiarity with the passages they are asked to play. This would not eliminate the inherent advantage of students who had studied the piece from which an excerpt was derived, but it could mitigate some effects, such as anxiety or poor sight-reading skill, which adversely affect performance. These suggestions were implemented in the present study.

Criticism of the Written Examination included the concern that "some fine high school students ready for college might not know intervals yet." In response to this, a new section of the Written Examination was developed (Pitch Location and Fingering) that measures a student's capacity to locate pitches on a fingerboard representation without the use of intervallic terminology. The Interval Identification and Single Position Fingering sections of the pilot test were extended to provide greater accuracy in measurement of these skills.

Forty-four percent of respondents agreed that the excerpts chosen for the Playing Examination were a valid way of determining a student's competence in left hand and bowing technique. Several teachers suggested the addition of specific excerpts to reveal other aspects of a student's technique, such as pizzicato and passages with a greater variety of double stops (playing simultaneously on two strings). These suggestions were implemented in the present study: Part Two of the Playing Examination (Basic Bowing Strokes) was expanded to include Accented Détaché, Flying Spiccato, and Pizzicato.

Reaction to the choice of excerpts used in the Playing Examination included the suggestion that a better assessment of a student's abilities would be to arrange the material in progressive order from easiest to hardest and then see at what point the student began to have difficulty. Ordering and expanding the range of difficulty of the excerpts would provide useful information about the student's playing level, so that repertoire of an appropriate difficulty level could be assigned. The present study applied these recommendations by finding additional excerpts and ordering them to be sequentially more demanding. An effort was made to find excerpts in each category that could be played by undergraduate cellists.

Seventy-eight percent of the teachers responded positively to the Student Self-Assessment Profile. Comments included, "I really like the Student Self-Assessment page. I think that it is not just valuable to the teacher but important that the students examine their own situations as well." One teacher remarked, "It seems the profile would be a useful tool to gauge the goals and general level of a new student."
A teacher proposed having some more open-ended questions as well, noting that "there is music beyond solo, chamber and orchestral." As a result, a line asking for other areas of performance interest was added. The study indicated that teachers are either using a similar tool in their studios or would consider doing so.

The responses from teachers who participated in the validity study support the premise that the Diagnostic Test of Cello Technique is a legitimate way to gather information about a student's technical playing ability. The recommendations of these teachers were taken into account in developing the present test.


APPENDIX C
VALIDATION STUDY EVALUATION FORM

The Diagnostic Test of Cello Technique: Validation Study Evaluation Form

Instructions: Please read all parts of the test before responding to these questions.

1. To what extent does this test measure a student's technique?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

2. What changes to the test construction do you feel would make the test more valid?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

3. What changes in content do you feel would make the test more valid?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

4. To what extent does the content of the Written Examination (i.e., Fingerboard Geography, Horizontal Intervals, Linear Intervals, Clef Identification, and Single Position Fingering) demonstrate a basic essential knowledge of music theory as applied to the cello?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

5.
Would you consider using the Written Examination as a means of assessing a new student's knowledge of music theory as applied to the cello? Why or why not?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________


6. Are the excerpts chosen for the Playing Examination a valid way of determining a student's competence in:

a) Left hand technique?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

b) Bowing technique?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

7. If you feel a particular excerpt is not a good predictor of a student's ability, what alternative passage do you recommend using?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

8. Would you consider using the Playing Examination as a means of assessing a new student's technique? Why or why not?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

9. How would you use information gathered from the Student Self-Assessment and Goal Setting Profile in working with your students?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

10. To what extent would you be willing to participate in future field testing of this test through administering it to a portion of the students in your studio?
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________

Please include any additional comments here:
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________
______________________________________________________________________________


APPENDIX D
INFORMED CONSENT

[The informed consent documents appear in the original as scanned pages 103 through 111 and are not reproduced here.]

APPENDIX F
THE WRITTEN TEST EVALUATION FORM

Student's Name ____________________________  Adjudicator's Code _____________
Grade Level _______________________________
Degree Program ____________________________
Audition Day _________  Audition Time _________

Test Section                                     Total Points   Student's Score
Fingerboard Geography (divide total by 4)        11 points      ________
Interval Identification                           8 points      ________
Pitch Location and Fingering                     32 points      ________
Single Position Fingering                        32 points      ________
Bass, Treble, and Tenor Clef Note Identification 12 points      ________

Total Possible Score: 95   Total Student's Score and %: ____________
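The section totals on this form combine into a single score out of 95 possible points. As a minimal sketch of that arithmetic (the section maxima come from the form above; the student scores entered below are hypothetical):

```python
# Combine Written Test section scores into a total and a percentage.
# Maximum points per section are taken from the evaluation form;
# the individual student scores here are invented for illustration.

MAX_POINTS = {
    "Fingerboard Geography": 11,
    "Interval Identification": 8,
    "Pitch Location and Fingering": 32,
    "Single Position Fingering": 32,
    "Clef Note Identification": 12,
}

def written_test_total(scores):
    """Return (total points, percent of the 95 possible points)."""
    total = sum(scores.values())
    possible = sum(MAX_POINTS.values())   # 95 points
    return total, round(100 * total / possible, 1)

scores = {
    "Fingerboard Geography": 9,
    "Interval Identification": 7,
    "Pitch Location and Fingering": 28,
    "Single Position Fingering": 30,
    "Clef Note Identification": 11,
}
total, pct = written_test_total(scores)
print(total, pct)   # prints: 85 89.5
```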


APPENDIX G
THE PLAYING TEST

[The Playing Test appears in the original as notated musical excerpts on pages 114 through 143 and is not reproduced here.]

APPENDIX H
THE PLAYING TEST EVALUATION FORM

Student's Name ___________________________________  Adjudicator's Code ______
Grade Level _________________________________
Degree Program ______________________________
Audition Day _________  Audition Time _________

Part One: Left Hand Technique

Scales
The student's playing of scales exhibits (check all that apply, worth 2 points each):
___ 95% accurate whole and half steps.
___ evenly divided bow distribution.
___ steady tempo.
___ effortless position changes.
___ smooth string crossings.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Arpeggios
The student's playing of arpeggios demonstrates (check all that apply, worth 2 points each):
___ mostly accurate intonation.
___ smooth connections of positions.
___ little audible sliding between notes.
___ clean string crossings.
___ a steady and consistent tempo.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Broken Thirds
The student's playing of broken thirds (check one only):
10 demonstrates the highest level of competency.
8 shows a high degree of experience, with only minor performance flaws.
6 indicates a moderate degree of competence or experience.
4 is tentative and faltering, with some pitch and/or intonation errors.
2 is undeveloped and results in many inaccurate pitches and out-of-tune notes.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________


Double Stops
The student's playing of double stops features (check all that apply, worth 2 points each):
___ consistently good intonation with all intervals.
___ a clear, unscratchy tone.
___ the clean setting and releasing of fingers when playing double stops.
___ even bow-weight distribution on two strings.
___ the ability to vibrate on two strings simultaneously.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Position Changes
The student's technique of changing positions (check one only):
10 demonstrates well-prepared, smooth shifting between notes, without interruption of the melodic line or a break between notes.
8 shows smooth shifting and an uninterrupted melodic line, but includes excessive audible slides.
6 indicates experience with position changes, but includes some sudden, jerky motions when shifting and several audible slides.
4 indicates some experience with shifting, but position changes are often jerky, unprepared, or filled with audible slides.
2 exhibits unprepared and inaccurate shifting. Sliding between notes is often heard and hand/arm motions are jerky.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Arpeggiated Chords
The student's playing of arpeggiated chords exhibits (check all that apply, worth 2 points each):
___ coordinated action between the left hand and bow arm.
___ even string crossings, with steady rhythm.
___ an ease in preparing chordal fingering patterns.
___ clear tone on all strings.
___ graceful, fluid motion.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Thumb Position
The student's playing of thumb position reveals that (check all that apply, worth 2 points each):
___ the thumb rests on two strings and remains perpendicular to the strings.
___ the fingers stay curved and don't collapse while playing.
___ correct finger spacing is consistently used.
___ there is an ease of changing from string to string.
___ the arm and wrist support the thumb and fingers rather than resting on the side of the cello.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________


Vibrato
The student's vibrato (check one only):
10 is full, rich, even, and continuous. It is used consistently throughout the fingerboard.
8 is full and rich, but occasionally interrupted due to fingering/position changes.
6 is mostly utilized, but is irregular in its width or speed and lacks continuity throughout the fingerboard. Excessive tension is apparent in the vibrato.
4 is demonstrated, but in a tense, irregular way. It is not used consistently by all fingers in all positions. Vibrato width/speed may be inappropriate.
2 is demonstrated marginally, with a tense, uneven application. Vibrato is inconsistently used and lacks appropriate width/speed.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Intonation
The student's intonation (check one only):
10 is accurate throughout, on all strings and in all positions.
8 is accurate, demonstrating minimal intonation difficulties, with occasional lack of pitch correction.
6 is mostly accurate, but includes out-of-tune notes resulting from half-step inaccuracies, inaccurate shifting, or incorrect spacing of fingers.
4 exhibits a basic sense of intonation, yet has frequent errors of pitch accuracy and often doesn't find the pitch center.
2 is not accurate. The student plays out of tune the majority of the time.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Part Two: Basic Bowing Strokes

Slurred Legato
The student's legato bow stroke (check one only):
10 is smoothly connected, with no perceptible interruption between notes.
8 is smooth, but has some breaks within phrases.
6 includes some disconnected notes and detached bowing.
4 shows breaks within phrases and is often not smoothly connected.
2 exhibits little skill of smooth bowing. Bowing has many interruptions between notes.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________


Détaché/Accentuated Détaché
The student's détaché bow stroke is (check one only):
10 vigorous and active, played on the string. Accentuated détaché features greater accented attacks.
8 vigorous and active, but occasionally lacking articulation or bow control.
6 moderately active, but lacking articulation or suffering from too much accentuation.
4 not making sufficient contact with the string, or else producing a scratchy sound.
2 undeveloped, and lacking the control to produce a consistent, vigorous sound.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Martelé
The student's playing of martelé bowing features (check all that apply, worth 2 points each):
___ a fast, sharply accentuated bow stroke.
___ a heavy, separate stroke resembling a sforzando.
___ bow pressure being applied before the bow is set in motion.
___ the bow being stopped after each note.
___ great initial speed and pressure with a quick reduction of both.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Portato
The student's use of portato bowing demonstrates (check all that apply, worth 2 points each):
___ a slightly separated legato bow stroke.
___ the pressure of the index finger being applied to pulse each note within a slur.
___ an enunciation of each note through a slight change of bow pressure/speed.
___ the bow not stopping between notes.
___ notes being articulated without lifting the bow from the string.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Staccato/Slurred Staccato
The student's playing of staccato (check one only):
10 is crisp and well-articulated, with the bow stopping after each note.
8 demonstrates a high level of mastery, with minor flaws in execution.
6 shows a moderate level of attainment.
4 reveals only a limited amount of bow control.
2 does not demonstrate the ability to execute these strokes.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________


Spiccato/Flying Spiccato
The student's playing of spiccato indicates: (Check All that Apply, worth 2 points each)
a bounced bow stroke with good control of the bow's rebound off the string.
good tone production through control of bow pressure and speed.
the bow springs lightly from the string.
notes are individually activated.
even use of bow distribution (Flying Spiccato excerpts).
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Sautillé
The student's use of sautillé bowing demonstrates: (Check All that Apply, worth 2 points each)
a rapid, natural rebounding of the bow.
a primary movement initiated from the wrist and hand, using a light bow hold.
the bow's contact with the string is centered around the balance point of the bow.
the tempo is fast enough for the bow to continue to bounce of its own momentum.
the resilience of the bow stick is used to allow the bow to spring off the string.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________

Pizzicato
The student's playing of pizzicato illustrates: (Check All that Apply, worth 2 points each)
confidently played arpeggiated chords, using the thumb.
strong, vibrant tone (as demonstrated in the Brahms excerpt).
clear ringing sound in the upper register (as in the Kabalevsky excerpt).
an absence of snapping sounds caused by pulling the string at too steep an angle.
an absence of buzzing or dull, thudding tones due to inadequate setting of the left-hand fingers.
Observations/Comments: ________________________________________________________________
_____________________________________________________________________________________


APPENDIX I
REPERTOIRE USED IN THE PLAYING TEST

Composer                    Piece                                          Technique/Bow Stroke

Bach, J.S.                  Arioso (from Cantata 156)                      Intonation
                            Sonata in G Minor, No. 3, 3rd mvt.             Staccato
                            Suite No. 1 in G Major, Allemande              Slurred Legato
                            Suite No. 3 in C Major, Allemande              Double Stops
                            Suite No. 5 in C Minor, Sarabande              Intonation
Boccherini, L./Grützmacher  Concerto in Bb Major, 1st mvt.                 Scales
                            Concerto in Bb Major, 1st mvt.                 Arpeggiated Chords
                            Concerto in Bb Major, 1st mvt.                 Thumb Position
                            Concerto in Bb Major, 3rd mvt.                 Spiccato
Beethoven, L. van           Sonata in G Minor, Op. 5, No. 2, 3rd mvt.      Spiccato
                            Sonata in A Major, Op. 69, 1st mvt.            Scales
                            Sonata in A Major, Op. 69, 3rd mvt.            Thumb Position
                            Sonata in C Major, Op. 102, No. 1, 3rd mvt.    Accentuated Détaché
Brahms, J.                  Sonata No. 1 in E Minor, Op. 38, 1st mvt.      Position Changes
                            Sonata No. 1 in E Minor, Op. 38, 1st mvt.      Portato
                            Sonata No. 1 in E Minor, Op. 38, 2nd mvt.      Slurred Staccato
                            Sonata No. 2 in F Major, Op. 99, 2nd mvt.      Pizzicato
Breval, J.B.                Concerto No. 2 in D Major, Rondo               Thumb Position


Debussy, C.                 Sonata in D Minor, Prologue                    Portato
Dotzauer                    Etude Op. 20, No. 13                           Arpeggios
Dvořák, A.                  Concerto in B Minor, Op. 104, 1st mvt.         Vibrato
                            Concerto in B Minor, Op. 104, 2nd mvt.         Double Stops
Eccles, H.                  Sonata in G Minor, 1st mvt.                    Vibrato
                            Sonata in G Minor, 2nd mvt.                    Staccato
Elgar, E.                   Concerto in E Minor, Op. 85, 2nd mvt.          Pizzicato
                            Concerto in E Minor, Op. 85, 2nd mvt.          Sautillé
                            Concerto in E Minor, Op. 85, 4th mvt.          Arpeggiated Chords
Fauré, G.                   Élégie, Op. 24                                 Scales
                            Élégie, Op. 24                                 Vibrato
                            Élégie, Op. 24                                 Intonation
Franck, C.                  Sonata in A Major, 1st mvt.                    Slurred Legato
Frescobaldi, G.             Toccata                                        Martelé
Goens, D. van               Scherzo, Op. 12                                Sautillé
                            Scherzo, Op. 12                                Thumb Position
Golterman, G.               Concerto in G Major, Op. 65, No. 4, 3rd mvt.   Position Changes
                            Concerto in G Major, Op. 65, No. 4, 3rd mvt.   Arpeggiated Chords
Haydn, J.                   Concerto in C Major, Hob. VIIb:1, 3rd mvt.     Double Stops
                            Concerto in D Major, Op. 101, 1st mvt.         Broken Thirds
Jensen, H.J.                The Ivan Galamian Scale System for             Broken Thirds
                            Violoncello


Kabalevsky, D.B.            Concerto in G Minor, Op. 49, 1st mvt.          Pizzicato
Lalo, E.                    Concerto in D Minor, 2nd mvt.                  Slurred Legato
Locatelli, P.               Sonata in D Major, 1st mvt.                    Flying Spiccato
Marcello, B.                Sonata in E Minor, Op. 1, No. 2, 2nd mvt.      Détaché
                            Sonata in E Minor, Op. 1, No. 2, 4th mvt.      Slurred Staccato
Popper, D.                  Gavotte in D Major                             Flying Spiccato
                            Hungarian Rhapsody, Op. 68                     Sautillé
Rimsky-Korsakov, N.         Shéhérazade, Op. 35, 1st mvt.                  Arpeggiated Chords
Saint-Saëns, C.             Allegro Appassionato, Op. 43                   Flying Spiccato
                            The Swan                                       Position Changes
Sammartini, G.B.            Sonata in G Major, 1st mvt.                    Arpeggios
                            Sonata in G Major, 1st mvt.                    Double Stops
Schröder, C.                Etude, Op. 44, No. 5                           Sautillé
Shostakovich, D.            Sonata in D Minor, Op. 40, 1st mvt.            Intonation
Squire, W.H.                Danse Rustique, Op. 20, No. 5                  Scales
Starker, J.                 An Organized Method of String Playing (p. 33)  Position Changes
Schumann, R.                Fantasy Pieces, Op. 73, 1st mvt.               Arpeggios
Tchaikovsky, P.I.           Chanson Triste, Op. 40, No. 2                  Vibrato
Vivaldi, A.                 Concerto in G Minor for 2 Cellos, RV 531,      Scales
                            1st mvt.
                            Sonata in E Minor, No. 5, 2nd mvt.             Martelé


APPENDIX J
THE STUDENT SELF-ASSESSMENT PROFILE

Name_______________________________  Status (year/college)________________
Major__________  Minor___________
Years of study on the cello_____  Other instrument(s) played ___________________

Repertoire previously studied:
Methods/Etudes_________________________________________________________________
________________________________________________________________________
Solo Literature__________________________________________________________
________________________________________________________________________
________________________________________________________________________
Orchestral Experience:
________________________________________________________________________
________________________________________________________________________

How interested are you in each of these areas of performance?
I am interested in solo performance.
Strongly agree   Agree   Disagree   Strongly disagree
I am interested in chamber music performance.
Strongly agree   Agree   Disagree   Strongly disagree
I am interested in orchestral performance.
Strongly agree   Agree   Disagree   Strongly disagree

Other areas of performance interest? _______________________________________
________________________________________________________________________
What are your personal goals for studying the cello?___________________________
________________________________________________________________________
________________________________________________________________________
What areas of cello technique do you feel you need the most work on?
________________________________________________________________________
________________________________________________________________________


Summarize your goals in music and what you need to do to accomplish these goals.
6 months:_______________________________________________________________
________________________________________________________________________
1 year:__________________________________________________________________
________________________________________________________________________
2 years:_________________________________________________________________
________________________________________________________________________
4 years:_________________________________________________________________
________________________________________________________________________
10 years:________________________________________________________________
________________________________________________________________________


APPENDIX K
DESCRIPTIVE STATISTICS FOR RAW DATA

Table K-1. Raw Scores of the Written Test Items, and Composite Means and Standard Deviations

Student  Fingerboard  Interval  Pitch     Single-Pos.  Note   Total
         Geography    Id.       Location  Fingering    Id.    Score
1        11           7         30        32           12     92
2        11           7         30        28           11     87
3        11           7         32        31           12     93
4        11           7         31        30           12     91
5        11           8         32        32           12     95
6        11           7         0         32           12     59
7        5            3         16        25           10     59
8        11           6         32        32           12     93
9        11           6         29        32           12     90
10       11           8         29        32           12     92
11       11           5         31        32           12     91
12       11           4         12        30           11     68
13       11           8         22        12           12     65
14       11           5         31        32           11     90
15       11           7         29        27           12     86
16       11           3         2         32           11     59
17       11           7         32        30           8      88
18       11           8         32        31           12     94
19       11           8         32        32           12     95
20       11           8         31        30           12     92
21       11           3         30        30           11     85
22       11           6         14        32           10     73
23       11           5         28        32           12     86
24       11           6         32        26           12     87
25       11           5         27        31           12     86
26       11           8         32        30           12     93
27       11           8         31        32           12     94
28       11           8         32        32           12     95
29       11           6         29        23           12     81
30       11           8         31        32           12     94
M        10.80        6.40      26.70     29.80        11.57  85.20
SD       1.10         1.63      8.82      4.11         0.90   11.38


Table K-2. Raw Score, Percent Score, Frequency Distribution, Z Score, and Percentile Rank of Written Test Scores

Raw    Percent  Frequency  Z      Percentile
Score  Score               Score  Rank
59     62.00    2          -2.30  1.67
62     66.00    1          -2.04  8.33
65     68.00    1          -1.78  11.67
68     72.00    1          -1.51  15.00
73     77.00    1          -1.07  18.33
81     86.00    1          -0.37  21.67
85     89.00    1          -0.02  25.00
86     91.00    3          0.07   28.33
87     92.00    2          0.16   38.33
88     92.00    1          0.25   45.00
90     95.00    2          0.42   48.33
91     96.00    2          0.51   55.00
92     97.00    3          0.60   61.67
93     98.00    3          0.69   71.67
94     99.00    3          0.77   81.67
95     100.00   3          0.86   91.67


Table K-3. Raw Scores of the Playing Test Items, Composite Means, and Standard Deviations

Student  Scales  Arpeggios  Broken  Double  Position  Arpeggiated
                            Thirds  Stops   Changes   Chords
1        10      10         8       10      10        10
2        10      10         8       8       6         6
3        10      10         8       8       10        10
4        10      8          10      8       8         10
5        8       8          6       6       8         10
6        8       10         10      8       8         8
7        10      10         8       8       10        8
8        8       10         8       6       8         8
9        8       10         8       4       8         6
10       8       10         8       8       10        8
11       6       8          8       8       10        4
12       8       10         8       6       10        10
13       8       8          6       8       6         8
14       6       6          6       4       6         6
15       6       6          6       4       6         8
16       6       4          4       4       6         6
17       6       6          6       8       6         2
18       8       8          4       6       4         4
19       8       8          8       8       8         10
20       6       10         6       4       8         8
21       4       6          6       6       10        8
22       6       6          4       4       6         4
23       0       2          2       2       6         2
24       6       6          4       4       4         4
25       8       10         6       4       6         2
26       8       8          6       8       8         8
27       8       8          8       8       10        10
28       10      10         8       6       8         10
29       10      8          8       8       8         10
30       10      10         8       8       8         8
M        7.6     8.13       6.8     6.4     7.67      7.2
SD       2.19    2.10       1.86    1.99    1.83      2.66


Table K-3. (continued)

Student  Thumb     Vibrato  Intonation  Slurred  Détaché  Martelé
         Position                       Legato
1        8         10       8           8        8        8
2        6         8        8           8        8        10
3        10        10       6           10       10       10
4        10        8        8           10       8        4
5        10        10       8           10       8        10
6        8         8        6           10       8        8
7        6         10       6           10       10       10
8        4         8        6           10       8        6
9        8         4        6           4        10       6
10       6         10       8           10       10       10
11       6         8        6           8        10       8
12       6         10       8           8        8        8
13       8         10       6           8        10       2
14       6         4        4           8        8        4
15       6         8        4           6        6        8
16       4         6        2           8        8        2
17       6         4        4           8        8        2
18       4         6        8           8        8        4
19       6         8        8           8        10       8
20       10        8        6           9        10       10
21       2         8        6           6        2        2
22       8         6        6           4        8        6
23       6         8        4           8        8        4
24       8         10       4           8        10       4
25       6         4        4           4        8        4
26       8         8        6           10       6        6
27       8         10       8           8        10       10
28       10        8        8           10       10       8
29       10        8        6           10       10       8
30       6         10       8           10       8        10
M        7.0       7.93     6.2         8.23     8.47     6.67
SD       2.08      2.00     1.69        1.85     1.72     2.84


Table K-3. (concluded)

Student  Portato  Staccato  Spiccato  Sautillé  Pizzicato  Total
                                                           Score
1        10       8         8         10        8          152
2        8        10        8         4         10         136
3        8        8         10        10        8          156
4        8        8         6         10        10         144
5        8        8         8         4         4          134
6        10       10        8         8         10         146
7        10       10        10        8         8          152
8        10       10        8         6         4          128
9        4        6         8         10        6          116
10       10       10        8         10        8          152
11       6        4         4         4         6          114
12       10       8         8         8         6          140
13       10       6         2         8         6          120
14       10       8         2         4         4          96
15       10       8         8         10        6          116
16       4        6         2         2         2          76
17       10       8         6         8         4          102
18       4        10        6         6         2          100
19       8        10        8         10        10         142
20       10       10        3         8         8          134
21       6        6         2         4         2          86
22       0        6         2         2         4          82
23       8        8         4         0         4          76
24       4        8         8         8         4          104
25       0        6         6         10        4          92
26       10       6         4         10        4          124
27       10       8         10        10        8          152
28       8        8         8         4         6          140
29       10       8         6         10        10         148
30       6        8         8         8         6          140
M        7.67     7.93      6.3       7.13      6.07       123.33
SD       2.97     1.62      2.61      3.00      2.55       25.18


LIST OF REFERENCES

Abeles, H. F. (1973). Development and validation of a clarinet performance adjudication scale. Journal of Research in Music Education, 21, 246-255.

Aristotle. (1943). Politics (1340b24) (B. Jowett, Trans.). New York: Random House.

Aristotle. Nicomachean Ethics, Bk. 2 (1103a26-1103b2), as paraphrased by Durant, W. (1967). The story of philosophy. New York: Simon and Schuster.

Asmus, E. P., & Radocy, R. E. (2006). Quantitative analysis. In R. Colwell (Ed.), MENC handbook of research methodologies (pp. 95-175). New York: Oxford University Press.

Bergee, M. J. (1987). An application of the facet-factorial approach to scale construction in the development of a rating scale for euphonium and tuba music performance. Doctoral dissertation, University of Kansas.

Berman, J., Jackson, B., & Sarch, K. (1999). Dictionary of bowing and pizzicato terms. Bloomington, IN: Tichenor Publishing.

Blum, D. (1997). Casals and the art of interpretation. Berkeley and Los Angeles, CA: University of California Press.

Boyle, J. (1970). The effect of prescribed rhythmical movements on the ability to read music at sight. Journal of Research in Music Education, 18, 307-308.

Boyle, J. (1992). Evaluation of music ability. In D. Boyle (Ed.), Handbook of research on music teaching and learning (pp. 247-265). New York: Schirmer Books.

Boyle, J., & Radocy, R. E. (1987). Measurement and evaluation of musical experiences. New York: Schirmer Books.

Brophy, T. S. (2000). Assessing the developing child musician: A guide for general music teachers. Chicago: GIA Publications.

Colwell, R. (2006). Assessment's potential in music education. In R. Colwell (Ed.), MENC handbook of research methodologies (pp. 199-269). New York: Oxford University Press.

Colwell, R., & Goolsby, T. (1992). The teaching of instrumental music. Englewood Cliffs, NJ: Prentice Hall.

Eisenberg, M. (1966). Cello playing of today. London: Lavender Publications.


Ekstrom, R., French, J., Harman, H., & Dermen, D. (1976). Kit of factor-referenced cognitive tests. Princeton: Educational Testing Service.

Elliott, D. J. (1995). Music matters: A new philosophy of music education. New York: Oxford University Press.

Epperson, G. (2004). The art of string teaching. Fairfax, VA: American String Teachers Association with National School Orchestra Association.

Farnum, S. E. (1969). The Farnum string scale. Winona, MN: Hal Leonard.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Gillespie, R. (1997). Rating of violin and viola vibrato performance in audio-only and audiovisual presentations. Journal of Research in Music Education, 45, 212-220.

Gromko, J. E. (2004). Predictors of music sight-reading ability in high school wind players. Journal of Research in Music Education, 52, 6-15.

Hoover, H., Dunbar, S., Frisbie, D., Oberley, K., Bray, G., Naylor, R., Lewis, J., Ordman, V., & Qualls, A. (2003). The Iowa tests. Itasca, IL: Riverside.

Jensen, H. J. (1994). The Ivan Galamian scale system for violoncello. Boston, MA: ECS Publishing.

Jensen, H. J. (1985). The four great families of bowings. Unpublished manuscript, Northwestern University.

Katz, M. (1973). Selecting an achievement test: Principles and procedures. Princeton: Educational Testing Service.

Kidd, R. L. (1975). The construction and validation of a scale of trombone performance skills. Doctoral dissertation, University of Illinois at Urbana-Champaign.

Lehman, P. B. (2000). The power of the national standards for music education. In B. Reimer (Ed.), Performing with understanding: The challenge of the national standards for music education (pp. 3-9). Reston, VA: MENC.

Magg, F. (1978). Cello exercises: A comprehensive survey of essential cello technique. Hillsdale, NY: Mobart Music.

Mooney, R. (1997). Position pieces. Miami, FL: Summy-Birchard Music.

Mutschlecner, T. (2004). The Mutschlecner diagnostic test of cello technique: Pilot study. Unpublished manuscript, University of Florida.


Mutschlecner, T. (2005). Development and validation of a diagnostic test of cello technique. Unpublished manuscript, University of Florida.

Reimer, B. (1989). A philosophy of music education (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.

Reimer, B. (2003). A philosophy of music education: Advancing the vision (3rd ed.). Upper Saddle River, NJ: Pearson Education.

Renwick, J. M., & McPherson, G. E. (2002). Interest and choice: Student-selected repertoire and its effect on practicing behavior. British Journal of Music Education, 19(2), 173-188.

Sand, B. L. (2000). Teaching genius: Dorothy DeLay and the making of a musician. Portland, OR: Amadeus Press.

Saunders, T. C., & Holahan, J. M. (1997). Criteria-specific rating scales in the evaluation of high-school instrumental performance. Journal of Research in Music Education, 45, 259-272.

Spiro, R. J., Vispoel, W. P., Schmitz, J. G., Samarapungavan, A., & Boeger, A. E. (1987). Knowledge acquisition for application: Cognitive flexibility and transfer in complex content domains. In B. K. Britten & S. M. Glynn (Eds.), Executive control processes in reading (pp. 177-199). Hillsdale, NJ: Lawrence Erlbaum Associates.

Starer, R. (1969). Rhythmic training. New York: MCA Music Publishing.

Starker, J. (1965). An organized method of string playing: Violoncello exercises for the left hand. New York: Peer Southern Concert Music.

Watkins, J., & Farnum, S. (1954). The Watkins-Farnum performance scale. Milwaukee, WI: Hal Leonard.

Warren, G. E. (1980). Measurement and evaluation of musical behavior. In D. Hodges (Ed.), Handbook of music psychology (pp. 291-392). Lawrence, KS: National Association for Music Therapy.

Zdzinski, S. F. (1991). Measurement of solo instrumental music performance: A review of literature. Bulletin of the Council for Research in Music Education, 109, 47-58.

Zdzinski, S. F., & Barnes, G. V. (2002). Development and validation of a string performance rating scale. Journal of Research in Music Education, 50, 245-255.


BIOGRAPHICAL SKETCH

Timothy Miles Mutschlecner was born on November 17, 1960, in Ann Arbor, Michigan. A middle child with an older brother and younger sister, he grew up mostly in Bloomington, Indiana, but finished high school in Los Alamos, New Mexico, graduating in 1979. He earned his Bachelor of Music from Indiana University in 1983, where he studied cello with Fritz Magg. In 1992 Tim graduated from the Cleveland Institute of Music with a master's degree in Performance and Suzuki Pedagogy. He taught in the Preparatory Department at the Cleveland Institute of Music from 1992 to 1995 before accepting the position of director of the cello program at the Suzuki School of Music in Johnson City, Tennessee. In Tennessee, Tim taught a large cello studio and played in two regional orchestras. He also taught students through Milligan College and East Tennessee State University. Along with giving recitals and playing in the Meadowlark Trio, Tim was a featured soloist with the Johnson City Symphony.

In 2003, Tim began work on his Ph.D. in Music Education at the University of Florida in Gainesville. During the next four years he taught university cello students as a graduate assistant while completing his degree. Tim remained an active performer while studying at the University of Florida, serving as principal cellist in the Gainesville Chamber Orchestra from 2003 to 2007 and performing with the Music School's New Music Ensemble. He maintained a private studio with students of all ages and levels.

Upon the completion of his Ph.D. program, Tim will begin teaching at the University of Wisconsin-Stevens Point in the Aber Suzuki Center. This position will provide opportunities to work with beginning and intermediate level cello students, as well as to offer cello pedagogy for university-level students. He anticipates continuing to do research in the field of string music


education, particularly in the areas of string pedagogy and assessment. Tim has been married to Sarah Caton Mutschlecner, a nurse practitioner, for 18 years. They have three daughters: Audrey, age 16; Megan, age 14; and Eleanor, age 10.