Citation
Testing a model of prosody production in brain-intact individuals using the dual task paradigm

Material Information

Title:
Testing a model of prosody production in brain-intact individuals using the dual task paradigm
Creator:
Haak, Nancy Jeanne, 1957-
Publisher:
[s.n.]
Publication Date:
1987
Language:
English
Physical Description:
vii, 93 leaves : ill. ; 28 cm.

Subjects

Subjects / Keywords:
Comprehension ( jstor )
Emotional states ( jstor )
Fingers ( jstor )
Hands ( jstor )
Hemispheres ( jstor )
Histological shadowing ( jstor )
Language ( jstor )
Linguistics ( jstor )
Paradigms ( jstor )
Spoken communication ( jstor )
Dissertations, Academic -- Speech -- UF
Language and languages -- Rhythm -- Effect of laterality on ( lcsh )
Prosodic analysis (Linguistics) ( lcsh )
Speech thesis Ph. D
Genre:
bibliography ( marcgt )
non-fiction ( marcgt )

Notes

Thesis:
Thesis (Ph. D.)--University of Florida, 1987.
Bibliography:
Bibliography: leaves 86-91.
General Note:
Typescript.
General Note:
Vita.
Statement of Responsibility:
by Nancy Jeanne Haak.

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
The University of Florida George A. Smathers Libraries respect the intellectual property rights of others and do not claim any copyright interest in this item. This item may be protected by copyright but is made available here under a claim of fair use (17 U.S.C. §107) for non-profit research and educational purposes. Users of this work have responsibility for determining copyright status prior to reusing, publishing or reproducing this item for purposes other than what is allowed by fair use or other copyright exemptions. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder. The Smathers Libraries would like to learn more about this item and invite individuals or organizations to contact the RDS coordinator (ufdissertations@uflib.ufl.edu) with any additional information they can provide.
Resource Identifier:
021589520 ( ALEPH )
18130871 ( OCLC )

TESTING A MODEL OF PROSODY PRODUCTION
IN BRAIN-INTACT INDIVIDUALS USING THE DUAL TASK PARADIGM








BY

NANCY JEANNE HAAK


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY






UNIVERSITY OF FLORIDA


1987


Copyright 1987 by

Nancy Jeanne Haak


TO MY FATHER FOR HIS BIRTHDAY


ACKNOWLEDGMENTS


The constant love, support, and encouragement given to me by my parents made the completion of this project possible. A special thank you is given to my dear friend and colleague, Janet Harrison, who devoted much of her time and effort to help me through the dark days and to celebrate the victories. Many thanks are also given to my committee members, Dr. Fennell, Dr. Gonzalez-Rothi, Dr. Lombardino, and Dr. Rothman. They have been there from the beginning, always willing to guide, advise, and encourage. Words of gratitude seem so inadequate when it comes to acknowledging the efforts of my chairman, Michael Crary. He has been my friend and made me feel welcome in his home and with his family. He has been my colleague and allowed me to learn along with him. He has been my professor and shown me how to mature as a clinician and as a researcher. For these and the countless times he has had more than 'a minute', I offer my warmest thanks.


TABLE OF CONTENTS

                                                              Page

ACKNOWLEDGMENTS . . . . . . . . . . . . . . . . . . . . . . .  iv

ABSTRACT  . . . . . . . . . . . . . . . . . . . . . . . . . . vii

CHAPTERS

I    INTRODUCTION . . . . . . . . . . . . . . . . . . . . . .   1

II   REVIEW OF THE LITERATURE . . . . . . . . . . . . . . . .   5

         Prosody  . . . . . . . . . . . . . . . . . . . . . .   5
         Comprehension of Prosody in Brain-Damaged Persons  .   8
         Production of Prosody in Brain-Damaged Persons . . .   9
         Comprehension of Prosody in Brain-Intact Persons . .  13
         Production of Prosody in Brain-Intact Persons  . . .  14
         Dual Task Paradigm . . . . . . . . . . . . . . . . .  15
         Finger Tapping . . . . . . . . . . . . . . . . . . .  20
         Speech Shadowing . . . . . . . . . . . . . . . . . .  22
         Statement of the Problem . . . . . . . . . . . . . .  27
         Research Hypotheses  . . . . . . . . . . . . . . . .  28

III  METHODS  . . . . . . . . . . . . . . . . . . . . . . . .  31

         Subjects . . . . . . . . . . . . . . . . . . . . . .  31
         Apparatus  . . . . . . . . . . . . . . . . . . . . .  31
             Finger Tapping . . . . . . . . . . . . . . . . .  31
             Speech Stimuli . . . . . . . . . . . . . . . . .  33
         General Procedure  . . . . . . . . . . . . . . . . .  36
         Data Analysis  . . . . . . . . . . . . . . . . . . .  38
         Statistical Analysis . . . . . . . . . . . . . . . .  41

IV   RESULTS  . . . . . . . . . . . . . . . . . . . . . . . .  43

         Finger Tapping Data  . . . . . . . . . . . . . . . .  43
         Speech Data  . . . . . . . . . . . . . . . . . . . .  46
         Summary  . . . . . . . . . . . . . . . . . . . . . .  65

V    DISCUSSION . . . . . . . . . . . . . . . . . . . . . . .  69

         Procedural Explanations  . . . . . . . . . . . . . .  69
         Prosody Model Explanations . . . . . . . . . . . . .  72
         Dual Task Paradigm Explanations  . . . . . . . . . .  77
         Concluding Remarks . . . . . . . . . . . . . . . . .  79
         Implications for Future Research . . . . . . . . . .  80

APPENDICES

A    QUESTIONNAIRE  . . . . . . . . . . . . . . . . . . . . .  82

B    INSTRUCTIONS . . . . . . . . . . . . . . . . . . . . . .  83

C    SAS STATISTICAL PROGRAM  . . . . . . . . . . . . . . . .  84

REFERENCES  . . . . . . . . . . . . . . . . . . . . . . . . .  86

BIOGRAPHICAL SKETCH . . . . . . . . . . . . . . . . . . . . .  92

Abstract of Dissertation Presented to the Graduate
School of the University of Florida in Partial
Fulfillment of the Requirements for the Degree of Doctor of Philosophy


TESTING A MODEL OF PROSODY PRODUCTION
IN BRAIN-INTACT INDIVIDUALS USING THE DUAL TASK PARADIGM

By

Nancy Jeanne Haak

August, 1987

Chairman: Michael A. Crary, Ph.D.
Major Department: Speech

A current model of prosody production suggests that the right cerebral hemisphere processes emotional prosody and that both hemispheres may be responsible for the production of linguistic prosody. The primary basis for this model has involved brain-damaged patients. The current investigation was conducted to assess this model in brain-intact persons. A dual task paradigm was used to designate hemispheric laterality through interference effects in the performance of competing tasks. The tasks selected were single index finger tapping and speech shadowing.

Twenty right-handed males with no history of neurological problems participated in the study. Four prosodic contours representing linguistic and emotional prosody were imitated in the shadowing task. Tapping was performed as rapidly and consistently as possible.

Analyses of the concurrent performances and their comparison to baseline were made for both tasks. Ten dependent variables were analyzed, three describing tapping and seven describing acoustical characteristics of prosody. No significant interactions between tapping and shadowing were found. However, descriptive analysis revealed a bimanual and symmetrical effect of speech upon finger tapping. Tapping did not have an effect upon the speech (prosody production) characteristics.

The model of prosody production under investigation in this study suggested that both hemispheres contributed to the production of linguistic prosody but one hemisphere was responsible for the production of emotional prosody. The results of this study did not support this notion of a hemispheric split in prosodic production. Two possible explanations are discussed. First, the production of prosody may be a bihemispheric event, and second, linguistic and emotional contexts may influence the hemisphericity of the productive control of prosody.


CHAPTER I
INTRODUCTION



If we want to know how the normal brain works, the best subjects may be normal human beings.
(Heilman, 1983, p.2)


The neuropsychological models for the normal production

of prosodic features have been based primarily on brain-damaged populations. From such lesion data the traditional model has allocated the control of linguistic prosody to the left hemisphere and the control of emotional prosody to the right hemisphere. Recent models, however, suggest that the right hemisphere may play a more global role in the

production of emotional and linguistic prosody (Shapiro and Danly, 1985).

Studies examining the neurological basis of prosody in

brain-intact individuals have focused upon comprehension. Laterality for the perception of prosody has been tested using dichotic listening paradigms. Recent results (ShipleyBrown & Dingwall, 1986) revealed a left ear superiority effect for both emotional and linguistic prosody. The left ear advantage was less strong for the linguistically based prosodic contours. From these and similar data, ShipleyBrown and Dingwall (1986) have proposed the "Attraction"


1








2

model. This model places the right hemisphere in the dominant role for processing both emotional and linguistic prosody. The left hemisphere is placed in a supporting role,

as it "attracts" processing of the more segmental features signaling linguistic prosody. This model supports the majority of the literature on comprehension of prosody in brain-intact populations (and in large part the comprehension of prosody in studies of brain-damaged persons

as well). The question remains, can the laterality effects of comprehension be equally applied to the laterality of production? Some studies, involving brain-damaged

populations, have reported laterality effects in production to be similar to those found in comprehension. Data from brain-intact subjects are needed to clarify such a model for normal functioning.

This investigation was designed to meet this need. A dual task paradigm was used as the method by which to collect data from normal subjects. Based on the principle of

interference or "something's gotta give" (Bryden, 1982, p. 112), this paradigm required the concurrent performance of two tasks. A decrement in performance on at least one of the

tasks was expected if the tasks required the same brain space. Bryden (1982) has described the effects of this paradigm as being analogous to the creation of "a 'temporary

lesion' in a perfectly normal subject" (p. 112). Such interference effects therefore, permited testing of a








3

neuropsychological model of prosodic feature production in brain-intact subjects.

A current model for the production of prosody based on

brain-damaged subjects suggests that the right hemisphere dominates the production of emotional prosodic features. The right and the left hemispheres are viewed as being more equally involved in their contribution to the production of linguistic prosodic features. The purpose of this

investigation was to test this neuropsychological model in brain-intact subjects using a dual task paradigm.

Two tasks, one verbal and one motor, were performed concurrently and individually for comparison of interference effects. Speech shadowing of a short paragraph was the verbal task. A target sentence with a specific prosodic contour was embedded in an otherwise neutral paragraph. The prosodic features of the target sentence shadowed by the subject were acoustically analyzed. Each target sentence represented one of four main prosodic contours. Two of the contours represented emotional prosodic features (happy and sad), and two of the contours represented linguistic prosodic features (statements and questions). Each hand's performance of single index finger tapping was the motor task. The temporal characteristics of the finger tapping were assessed. These included tapping rate and variability. The tapping characteristics in the dual condition were
measured for the exact same time period as the shadowed target sentence.

Interference effects were expected when the "alone" trials were compared with the "dual" trials. The differences

expected were decrements in hand tapping performance and/or alteration in prosodic shadowing. For example, according to

the suggested model, the greatest difference in performance would have been observed when shadowing of emotional prosodic features (an hypothesized right hemisphere task) was paired with left hand finger tapping (a right hemisphere

task). A bidirectional analysis of the interference effects was necessary since prediction of a unidirectional effect would limit observation of the data to either deficits or facilitations in performance. The direction of the effect would have been difficult to predict a priori.














CHAPTER II
REVIEW OF THE LITERATURE




Prosody

The term "prosody" was first used by Monrad-Krohn in 1947. He defined prosodic qualities as including "correct inflection of words, correct placing of stress upon syllables and words in sentences; natural rhythm, pauses and rate of speaking, natural shifting of pitch from syllable to

syllable" (1947, p. 406). This definition can be condensed into three key perceptual features, pitch, rate and syllabic stress. These perceptual features correspond to the acoustic

measures of fundamental frequency, duration and intensity. The measurement of such acoustic parameters allows objective

and quantifiable assessment of the prosodic contours of speech. This instrumental or experimental approach to the study of prosody has been described by Cutler and Ladd (1983) as the "concrete" approach. They contrast this with the more descriptive and theoretical studies that have taken

the "abstract" approach. The literature from the concrete approach is germane to the purpose of this proposal.

Prosody may convey linguistic information and/or

emotional information. It has traditionally been divided
into these two information types. In this respect,

linguistic or propositional prosody signals syntactic distinctions, as in distinguishing questions from statements. Emotional or affective prosody signals mood, as in happy or sad. Particular patterns of acoustic features have been identified which serve to differentiate the subtypes of prosodic contours within linguistic and emotional prosody.

Linguistic patterns of prosody have been described in terms of fundamental frequency declination (topline slope),

sentence-final segmental lengthening and sentence-terminal intonation contour (Danly and Shapiro, 1982; Danly, Cooper and Shapiro, 1983). The terminal intonation contour has been

identified as "the most important feature in distinguishing simple declarative statements from yes/no questions in a number of languages" (Vaissiere, 1983). In English, a rising

terminal contour signals a question and a falling terminal contour signals a statement. Emotional patterns of prosody have been described in some studies as those acoustical properties of intensity and duration that are typically perceived as "stress" variation (Tucker, Watson and Heilman,

1977; Heilman, Bowers, Speedie and Coslett, 1984). Williams and Stevens (1972), however, have found that the contour of fundamental frequency versus duration most clearly indicates the emotionality of an utterance. For example, happy

sentences are characterized as being more highly variable, higher in pitch and of greater intonational range than sad. Sad sentences are characterized by more frequent pauses, longer durations of long vowels and consonants and more restricted intonational range (Williams and Stevens, 1972).

For the most part, fundamental frequency and duration measures have been preferred in the acoustical descriptions

of both linguistic and emotional prosody (Rice, Abroms and Saxman, 1969; Williams and Stevens, 1972; Olive, 1975; Cooper and Sorensen, 1981; Eady and Cooper, 1986).

When considering those structures within the brain crucial to the processing of prosodic information, Monrad-Krohn (1947) concluded that, "co-operation of the whole brain is probably needed" (p. 415). Prosody has been connected with right-hemisphere processing by its

association with the pragmatic aspects of language (Cutler and Ladd, 1983). Ross (1981) has asserted that prosody is an

element of language in and of itself and as such is a dominant linguistic feature of the right hemisphere. Not all laterality studies have shown such exclusive support for the lateralization of prosody to the right hemisphere. There are

studies, for example, that have identified disruption of prosodic abilities in aphasic patients as a result of left-hemisphere damage (i.e., Danly and Shapiro, 1982; Ryalls, 1982) and in dysarthric patients (i.e., Kent and Rosenbek, 1982).

These studies of brain-damaged patients can be separated into assessment of comprehension and assessment of production of prosody.










Comprehension of Prosody in Brain-Damaged Persons

In general the literature on the comprehension of prosody among brain-damaged patients supports the dominance of the right hemisphere (Tucker et al., 1977; Weintraub et al., 1981; Lonie and Lesser, 1983; Denes, Caldognetto, Semenza, Vagges and Zettin, 1984; Tompkins and Mateer, 1985; Tompkins and Flowers, 1985). Most of the studies of prosodic comprehension assessed emotional prosody only or did not specify a division of prosody into its emotional and linguistic components. The Heilman et al. (1984) study did separate comprehension of affective/emotional from

linguistic prosody and found the right hemisphere dominant for affective and the left and right hemispheres to have equal difficulty with the linguistic prosody. Hartje, Willmes, and Weniger (1985) also separated emotional and linguistic prosody and tested their brain-damaged subjects using CV syllables in a dichotic task. The results of their

first experiment suggested that the left hemisphere could process intonation if required to do so without relying exclusively upon right hemisphere processing. The results of

their second experiment revealed that the right hemisphere will consult the left hemisphere whenever it is uncertain about the phonetic information contained in a stimulus; however, "there seems to be no significant tendency of the left hemisphere to delegate the processing of intonation to

the right hemisphere when phonetic [information] and
intonational information are combined. . . " (p. 98). This would suggest that while the right hemisphere may be implicated in many contexts, the left hemisphere has some capability in processing intonation and may "attract" such processing when phonetic information is involved.

The comprehension studies of brain-damaged subjects have generally supported the dominance of the right hemisphere for emotional prosody, and they have acknowledged

the participation of the left hemisphere in the processing of more linguistic aspects. Hartje et al. (1985) summarized this viewpoint well when they concluded that, while the right hemisphere may lead the left hemisphere in intonation processing, "it cannot be excluded that in normal conditions

of auditory language perception the entire processing is done by the left hemisphere" (p. 97).

Production of Prosody in Brain-Damaged Persons

The traditional model of prosody production was

proposed by Hughlings Jackson (1932, cited in Heilman, Bowers, Speedie, and Coslett, 1984). Jackson suggested that the left hemisphere was dominant in mediating linguistic prosody and the right hemisphere was dominant in mediating emotional prosody. The following studies have supported the traditional model.

In 1977, Tucker, Watson and Heilman studied the production of affective prosody in eleven patients with right temporoparietal lesions. In the production task an








10

emotionally indifferent sentence was provided and the subjects were asked to repeat the sentence with one of the selected affective tones. Patients with such righthemisphere lesions were found to be deficient in their ability to produce affective speech. Tucker et al. noted that these patients "would often use propositional speech to express an emotion" (1977, p. 950).

Weintraub, Mesulam and Kramer (1981) looked at the effect of right-hemisphere damage on prosody in spontaneous production tasks. No clear differentiation between emotional

and linguistic features was made. They found their nine right-hemisphere-damaged patients to have had significantly more difficulty than their control group; however, no left-hemisphere-damaged patients were included for comparison in the study.

Ross (1981) observed ten patients with focal right-hemisphere lesions and described their disruption in the production of prosody as "aprosodia." He suggested that these deficits mirrored the aphasias of the left hemisphere. However, no objective measures were used in the analyses.

Studies of the disruption of prosody in left-hemisphere-damaged patients with aphasia have supported the traditional model. Ryalls (1982) compared the production of prosody in Broca's aphasia patients with normal controls and

found a significantly restricted intonational range among the Broca's aphasia patients. Danly and Shapiro (1982) also assessed the production of prosody in five Broca's aphasia patients and found abnormalities in the acoustical characteristics of linguistic prosody. Findings similar to those for the Broca's patients were revealed by Danly, Cooper and Shapiro (1983) in their study of five patients with Wernicke's aphasia.

Cooper, Soares, Nicol, Michelow and Goloski (1984) assessed the acoustical features of fundamental frequency and speech timing in four patients with unilateral left-hemisphere damage, four patients with unilateral right-hemisphere damage and four brain-intact controls. The subjects read aloud a series of "non-emotive" sentences. While abnormalities were observed in both left- and right-brain-damaged subjects, the left-hemisphere patients exhibited a greater impairment in both the speech timing and the fundamental frequency.

Crary and Haak (1986) have assessed the acoustical features of linguistic prosody production in eight patients with left- versus right-hemisphere brain damage. Several of their patients were aphasic. Their data suggested that a basic deficit in prosodic production resulted from either right- or left-hemisphere damage. The linguistic aspects of prosodic production, however, were impaired only for those subjects with aphasia.

Some investigators have offered the argument that left-hemisphere damage results in a distortion of prosodic features or a "dysprosodia" while right-hemisphere damage results in a loss of prosodic features or "aprosodia". The results of the Kent and Rosenbek (1982) study support this argument. They examined the prosodic contours of dysarthric patients with lesions of the right hemisphere, left

hemisphere, cerebellum and basal ganglia. They described aprosodias in the patients with right-hemisphere and basal ganglia lesions and dysprosodias in the patients with left-hemisphere and cerebellar lesions.

In contrast, the results of Shapiro and Danly (1985) challenge the traditional model of prosody production. They compared the production of emotional (happy and sad) and linguistic prosody (questions and statements) in patients with anterior right-hemisphere damage, posterior right-hemisphere damage and posterior left-hemisphere damage with normal controls. The results of their study suggested that right-hemisphere damage alone "may result in a primary disturbance of speech prosody that may be independent of the

disturbances in affect often noted in right-brain-damaged populations" (1985, p. 19). The prosody of the patients with left-posterior brain damage was found to be similar to that

of the normal controls. The study did not include left-frontal lesions, and therefore, while the results implicate a more global role for the right hemisphere in both emotional and linguistic prosody, the participation of the left hemisphere in linguistic prosody cannot be dismissed.









In summary, production of emotional prosody in brain-damaged patients suggests a neuropsychological model of right-hemisphere dominance. This finding is consistent with the traditional model and is similar to the literature on comprehension. More equivalent participation of the left and

the right hemispheres in the production of linguistic prosody has been suggested recently and differs from the traditional viewpoint of left dominance.

Comprehension of Prosody in Brain-Intact Persons

Studies of comprehension of prosody in brain-intact subjects have most often employed the use of a dichotic listening paradigm.

Right-hemisphere superiority for comprehension was found in studies of normal subjects for prosody in general (Blumstein and Cooper, 1974); for emotional prosody (Bryden and Ley, 1983 [adults and children]; Papanicolaou, Levin, Eisenberg and Moore, 1983); and for emotional and linguistic prosody (Shipley-Brown and Dingwall, 1986).

In the Shipley-Brown and Dingwall (1986) dichotic study both types of prosody elicited a left ear (right hemisphere)

superiority effect. The investigators noted that the advantage was greater for the emotional prosody than for the

linguistic prosody. Shipley-Brown and Dingwall advanced a model which seemed to account for these results. The 'Attraction Model' hypothesizes that the more non-segmental the feature (e.g., intonation) the more it is lateralized to the right hemisphere (nondominant). Conversely, the more segmental the prosodic feature (e.g., tone) the more lateralized the processing is to the left hemisphere (dominant). They suggest that pitch, when produced in isolation or used to signal emotion, produces strong right-hemisphere laterality effects. However, pitch used to signal

linguistic prosody is drawn to the left hemisphere where most sequential processing takes place. In this respect, the right hemisphere is viewed as processing both emotional and

linguistic prosody but the left hemisphere processes linguistic prosody only.

Production of Prosody in Brain-Intact Persons

Shipley-Brown and Dingwall (1986) add that this

Attraction Model accounts well for the results of their experiments and for the majority of the experiments that deal with comprehension of prosody. However, they conclude that the production of prosodic features is more problematic. They suggest that the comprehension and production of affective prosody appear to be mediated by the right hemisphere. It is plausible, they note, that the right hemisphere also directs the production of linguistic

prosody. Shipley-Brown and Dingwall cite as support that left-hemisphere damage does not eliminate prosody but only changes its characteristics, whereas right-hemisphere damage has been noted to produce aprosodia.









Neuropsychologically based models of prosody production from brain-intact populations are virtually nonexistent. In order to achieve a valid model of normal function such data should be collected. The current study has been designed to

collect such data using a dual task paradigm, and the remainder of the review of the literature will define and delineate the use of this procedure.

Dual Task Paradigm

"Dual," "concurrent" and "interference" are terms used to define a task design which was popularized in the "1960's and 1970's" psychology literature. Its underlying premise is defined as follows:

When two or more discriminations are processed together, either they proceed in parallel with the same efficiency as when either is processed alone,
or they may interfere with one another, and share the total available processing capability.
(Taylor, Lindsay & Forbes, 1967, p. 227)


Kinsbourne and Cook (1971) are often credited with being the first to use this method to assess behavioral asymmetries in brain function (Green & Vaid, 1986). Since its first use, the laterality-based dual task paradigm has incorporated a variety of verbal tasks. These have included

spontaneous speech, sentence, phrase and word repetition, nursery rhymes, tongue twisters, verbal routines (i.e.,

counting), reading and object naming. Nonverbal tasks have included humming, viewing pictures of faces and construction of block designs. Manual tasks have typically included dowel
balancing in the early studies, followed by finger-tapping sequences and single-finger tapping. The data generated have been complex and variable. Analyses of performance deficits have focused on left- versus right-hand performance. In the case of tapping, errors have been evaluated in terms of the

errors in sequence or in rate of tapping or number of taps per trial. Analysis of the speech task was rarely completed in the early studies. When the speech was assessed, it was in terms of the number of words produced or the number of omitted or incorrectly produced (i.e., misarticulated) words. Recent studies more frequently address the analysis of speech but with little more precision in analysis; the task of interest has continued to be the effect on tapping.

Green & Vaid (1986) advance support for the dual paradigm over dichotic and/or tachistoscopic methods for the

following reasons: 1) it considers the lateralization of function mainly at the output level (i.e., laterality of speech production may be assessed in addition to, or instead of, speech perception); 2) it is not subject to such brief stimuli and thus may allow more naturalistic linguistic tasks to be examined; and 3) the dual task design supports the view of the brain as an "integrated neural network where activation of one region may influence normal functioning of another region" (p. 466).









Bryden (1982) suggests that the procedure is complex and requires careful control observations, judicious selection of tasks, and some theoretical notion as to the nature of the interference (i.e., structural or capacitive).

Critical variables must be controlled in a dual task paradigm. A minimum of four dependent measures is required: the speed and/or accuracy of both tasks alone and both tasks

together. These measures are crucial for the determination of how adequately the subject is maintaining performance in

the dual condition. Otherwise the subject could be dividing attention/processing resources between tasks and this would lead to an underestimate of the amount of interference (Bryden, 1982).
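
To make the logic of these four measures concrete, the following hypothetical fragment (not a procedure from Bryden or from the present study) expresses each task's dual-condition performance as a signed change from its single-task baseline, so that both decrements and facilitations remain visible; all values are invented for illustration.

def percent_change(dual, alone):
    """Signed change in performance relative to the single-task baseline."""
    return 100.0 * (dual - alone) / alone

# The four minimum dependent measures: each task alone and both tasks together
# (invented values; units are taps per second and proportion of words
# shadowed correctly).
tapping_alone, tapping_dual = 5.8, 5.3
shadowing_alone, shadowing_dual = 0.97, 0.95

print("tapping change from baseline:   %.1f%%" % percent_change(tapping_dual, tapping_alone))
print("shadowing change from baseline: %.1f%%" % percent_change(shadowing_dual, shadowing_alone))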

In terms of task selection, the notion of "structural interference" implies that two different tasks will interfere with each other only to the degree that they require similar brain structure. Therefore, two different tasks requiring nonoverlapping brain structures should cause no interference.

Attempts to explain the effects of the dual task procedure have been described in terms of models. The "Kinsbourne Model," also called the "Functional Cerebral Distance Model" (Hellige, 1985) or the "Intrahemispheric Competition Model" (Bowers, Heilman, Satz and Altman, 1978), is thought to be a neuropsychologically based model. It
has been contrasted with the cognitive psychology models of Resource Allocation.

Hellige (1985) reviewed two of the more current forms of these models, a Multiple Resource Model of Friedman and Polson (1981) and the Functional Cerebral Distance Model of Kinsbourne and colleagues. In discussing the resource model,

Hellige describes the position taken by Navon and Gopher (1979). They were said to have suggested that the originally proposed single resource pool was inadequate in its

explanation of recent data. In its stead the multiple resource models have been proposed. Friedman and Polson's model has two separate pools which are hemisphere specific.

They propose that two tasks will interfere with each other to the extent that they require resources from the same limited-capacity pool. In addition, each pool is completely

independent and within a pool for a single hemisphere the resources are completely undifferentiated. Such resource allocation models would appear to oppose the current concept

of many neuropsychologically described systems (including prosody) that propose more interactive systems.

The Functional Cerebral Distance Model states that disruption occurs to the extent that two
activities place overlapping demands on spatially adjunct neural systems. The amount of functional overlap is presumably inversely related to the actual distance between areas (either within or between hemispheres) responsible for mediating
these activities. (Bowers et al., 1978, p. 540)









According to Hellige (1985), such an interaction could be facilitative (produce a priming effect) if the two tasks

are compatible or to the extent that the two tasks involve cortical regions which are sparsely interconnected. It could, on the other hand, be inhibiting (produce interference effects) if the two tasks are incompatible. In other words, interference occurs to the extent that there is "maladaptive cross-talk". Hellige likens this to what Navon and Gopher have termed 'concurrence costs'. These may be described as a summation effect whereby the dual condition requires more resources than the sum of the separate task resources. It is therefore the concurrence costs, rather than the multiple resources, which are responsible for the lateralized interference effects in this model.

Bowers et al. (1978) note three aspects of ambiguity in

the Kinsbourne model: 1) the underlying mechanisms which account for disruption are unclear; 2) the concept of neural

overflow rather loosely accounts for the facilitation as well as the disruption of performance (they suggest that the

experimental procedures of two target tasks versus one target task may account for this); and 3) prediction of one-sided interference effects or mutual interference effects is

not permitted by the model. They recommend a bidirectional analysis to rectify this final point.

Evidence in the literature suggests that interference effects may be most noticeable when a simple repetitive task
requiring minimal attention is paired with a difficult task. This conclusion is pragmatically based on the reasoning that

asymmetry/interference effects may be masked if both tasks are difficult and that a "ceiling effect" may occur if both tasks are too simple (Green and Vaid, 1986).

Finger Tapping

Single finger tapping, an elegantly lateral and simple task, requires minimal attention. General support for this comes from the neuropsychological assessment literature. The

Finger Tapping Test portion of the Halstead-Reitan Battery has been widely used and has been reported to demonstrate laterality, as the tapping rate of the hand contralateral to

the lesioned hemisphere typically shows a slowed rate (Lezak, 1983). The body's inability to control the fine distal movements of the ipsilateral hand has been recounted by Bryden (1982). This has been documented in both human and

animal studies. One such investigation was conducted by Brinkman and Kuypers (1972) with split-brain monkeys. Their findings supported the contention that "distal motor control

apparently is not available to the ipsilateral hand and fingers" (p. 538). The implication here is that single finger tapping can be said with confidence to be a task which involves the contralateral hemisphere exclusively.

In comparing finger tapping with a verbal task, some a

priori account for anticipated "cerebral distance" is necessary for the proposed model. Mary Wyke (1968) examined the effects of arm-hand precision in the presence of brain lesions. She observed that "impairment in the ability to make movements of the arm and hand requiring accuracy in timing and precision" was more often associated with frontal lobe lesions than with the temporal or parietal lobes (p. 125). A

"frontal lobe" manual task would therefore be desirable as it has been well documented that motor-speech programing and control are also a frontal lobe functions.

Kimura and Vanderwolf (1970) compared hand performance on a "very simple, but demanding motor task" (p. 769). They selected individual finger movement. Their results suggested

a left-hand superiority for most right handers. This was later contradicted by a study conducted by Peters (1981) who

demonstrated that preferred hand superiority would persist even after prolonged periods of practice in tapping. Such data would dictate that baseline measurements of finger tapping rate be taken for each hand individually.

Finally, dual task paradigms employing single finger tapping have shown stronger interference/laterality effects than those using sequential finger tapping. Kee, Morris,

Bathurst & Hellige (1986) studied an alternating two-key tapping sequence and compared the results to previous single finger tapping studies. They found limited verbal laterality effects in the two-key tapping and concluded that the method of choice would be single finger tapping.










Speech Shadowing

The verbal task to be paired with the simple repetitive motor task should be complex and continuous (Green & Vaid, 1986). Individual variability would be lessened if the task

required an exact repetition of a model stimulus rather than spontaneous generation of items. Shadowing of speech would appear to meet these criteria. Shadowing requires the

subject to repeat as rapidly as possible everything they hear, as they listen to it (Shattuck and Lackner, 1975). While similar to reading in terms of providing continuity, shadowing appears to be more difficult as it does not permit the scanning preview of information to follow and relies on

the subject's constant attention when required to imitate online.

Some of the original work using the shadowing technique was conducted by Cherry and Taylor (1954) to demonstrate the

limitations of secondary recognition. Their hypothesis was that a performance decrement should be found if secondary recognition processes are required to monitor two channels instead of one.

Marslen-Wilson (1975) used shadowing to support the model of sentence processing in which the listener analyzes

incoming stimuli at all available levels of analysis (therefore information at each of the levels would have the

potential to constrain or to guide simultaneous processing at other levels).









Shattuck and Lackner (1975) used shadowing to delineate the contribution of syntactic structure to speech

production. They found evidence to support the notion that speaking a sentence involves planning farther ahead than the next word.

Allport, Antonis and Reynolds (1972) used shadowing in a dual task paradigm to disprove the single-channel

hypothesis. They found that their third-year music students could sight read music and shadow speech without compromise of either task. However, they did find that proficiency of piano playing influenced the subject's ability to answer questions regarding the content of the speech shadowed. They used this to advocate a multichannel hypothesis.

Lackner and Shattuck-Hufnagel (1982) selected the shadowing procedure to assess the long-term subtle linguistic deficits of Korean War veterans who had sustained

penetrating head injuries. They chose a task that would be "differentially sensitive to a range of syntactic and semantic factors and that would tax to the utmost speech comprehension and production skills" (p. 709). They found shadowing to be "extraordinarily sensitive in detecting residual language dysfunction" (p. 712).

From the above review it can be seen that shadowing has

the potential to be an effective task which can be used successfully in the dual paradigm. The typical focus of the dual paradigm literature has been on the interference of the
finger tapping rates and total number of taps. Studies that

have contained a bidirectional analysis have selected word distortion and omission as the parameters by which to assess

the interference effects on the speech (and have found limited effects). Perhaps assessment of the more subtle aspects of speech (which would be possible using a shadowing task) would reveal interference effects.

A need for more precise analysis of the performance patterns of both speech and tapping has been indicated by Green & Vaid (1986). In agreement with the proposed study, they suggest that one avenue of assessment would be the comparison of temporal characteristics of the tapping with those of speech.

A foundation to support such analyses can be found in a

study conducted by Peters (1977). This dual task study was designed to replicate the findings of Lomas and Kimura (1976). They found that speaking during single finger tapping produced equal tapping performance by both hands. Peters used single finger tapping (as fast as possible) as task one and recitation of a nursery rhyme (Humpty Dumpty) as task two. His results showed a small degree of bilateral

disruption. Not all subjects showed the disruption effects (2 of the 48 subjects revealed priming effects). However, further investigation (beyond that of tapping rate effects)

gave Peters the impression that the finger tapping was integrated with the speaking as the taps and the stressed
words seemed to Peters to coincide. He hypothesized that the concurrent tasks produced relatively diminished interference because of the flexibility of the rhythmic patterns

required. Two subsequent experiments were designed to test this hypothesis.

In experiment 2 the subjects were asked to tap continuously with one hand and to tap a specific rhythm with the other hand. Only 15 of his 150 subjects could

successfully execute this task. Of these fifteen subjects, the right-hand-tap, left-hand-rhythm condition was performed with more ease. All those who passed had musical

backgrounds. It was again hypothesized by Peters that the single finger continuous tapping task permitted some flexibility in rhythm.

In experiment 3 Peters further reduced the flexibility as he required the subjects to recite Humpty-Dumpty while beating the same specific rhythm with either the right or the left hand. The subjects were instructed to recite the nursery rhyme at normal speed and with the proper rhythmic intonation. They were asked not to use one rhythm to pace the other, and none of the 100 subjects could perform the task flawlessly. Peters went on to posit a working

hypothesis that "the CNS [central nervous system] in the voluntary guidance of movement can produce only one basic rhythm at a time" (p. 463). He offered five conditions under

which successful concurrent activity of two motor systems would be possible. In abbreviated version these included 1) flexible rhythms as in experiments 1 & 2, 2) one motor

system producing a rhythm and the other being based on preformed species specific patterns such as walking, 3) one rhythm following another, 4) one producing a rhythm and the other coming in on the pauses, and 5) one motor system

dominating so that the rhythms produce predictable and interlocking patterns of stresses and pauses.

The hypotheses of Peters could be reinterpreted to suggest that the rigidly controlled properties of the nursery rhyme paired with a continuous tapping condition would impose their temporal characteristics upon the more flexible, continuous single finger tapping. If the prosodic

production task selected was more complex than a nursery rhyme, temporal differences in finger tapping might result between the hands. This supposed interference in hand performance might then imply lateralizing effects of prosody

within the brain. A bidirectional effect might also be achieved if Peters's conditions were altered one step further. Having the person tap as fast and as consistently as possible might add more temporal rigidity to the finger tapping task.

Following a thorough review of methodological issues in the use of the concurrent activities paradigm, Green and Vaid (1986) make the following recommendations:

Finally, further research using the concurrent activities paradigm should be directed at exploring the temporal relationship between the manual task.
A microanalysis of the temporal relations
between speech and tapping during concurrent task performance would undoubtedly provide more precise
information about the allocation of attention and possible attentional tradeoffs than do the procedures currently adopted. . . . In order for this procedure to be informative, it would be advisable that the linguistic tasks selected involve continuous, rather than discrete, output.
(p. 473)



Statement of the Problem

The preceding literature review has demonstrated that neuropsychological models of the production of prosody have emerged from the study of brain-damaged patients.

Neuropsychological models of prosody involving brain-intact populations have excluded aspects of production, addressing the hemispheric control of comprehension instead. The purpose of this study is to investigate an existing

neuropsychological model of production of prosodic features (both emotional and linguistic) using brain-intact persons.

The literature on the dual task paradigm has been reviewed and this paradigm has been found to meet the purpose of this investigation. The basic components will involve the use of a laterality-specific motor task, single finger tapping, to be performed concurrently with a complex speech shadowing task.

The neuropsychological model of prosody production which was tested suggested that the right hemisphere dominates the control of emotional prosody, whereas the left hemisphere and the right hemisphere share control of linguistic prosody. Imposed upon this model was the dual paradigm model of Functional Cerebral Distance. The well-lateralized characteristics of single finger tapping were paired with prosodically shadowed speech and the expected interference effects formed the research hypotheses.



Research Hypotheses

The following null hypotheses were suggested:

Ho: 1. Right-hand tapping in the dual task
conditions involving emotional prosody would not differ, in rate and/or variability, from the
right-hand-tapping-alone condition.

Ho: 2. Right-hand tapping in the dual task
conditions involving linguistic prosody would not differ, in rate and/or variability, from the
right-hand-tapping-alone condition.

Ho: 3. Left-hand tapping in the dual task
conditions involving emotional prosody would not differ, in rate and/or variability, from the left-hand-tapping-alone condition.

Ho: 4. Left-hand tapping in the dual task
conditions involving linguistic prosody would not differ, in rate and/or variability, from the left-hand-tapping-alone condition.

Ho: 5. Neither the right-hand-dual nor the left-hand-dual performances would differ from their
baseline performance; therefore, the relative performances of the right and the left hands would
be equivalent.

Ho: 6. The linguistic prosodic contours for the utterance shadowed in the dual condition of right-hand tapping would not differ, in the acoustical parameters measured, from the linguistic prosodic
contours shadowed alone.










Ho: 7. The linguistic prosodic contours for the utterance shadowed in the dual condition of left-hand tapping would not differ, in the acoustical parameters measured, from the linguistic prosodic
contours shadowed alone.

Ho: 8. Neither the linguistic prosody with the left-hand tapping nor the linguistic prosody with the right-hand tapping would differ from the linguistic prosody without tapping; therefore, the
relative performances would be equivalent.

Ho: 9. The emotional prosodic contours shadowed in the dual condition of right-hand tapping would not
differ, in the acoustical parameters measured, from the emotional prosodic contours shadowed
alone.

Ho: 10. The emotional prosodic contours shadowed in the dual condition of left-hand tapping would not differ, in the acoustical parameters measured, from the emotional prosodic contours shadowed
alone.

Ho: 11. Neither the emotional prosody with the left-hand tapping nor the emotional prosody with the right-hand tapping would differ from the emotional prosody without tapping; therefore, the
relative performances would be equivalent.

As implied by the above hypotheses, a bidirectional analysis was conducted. For example, a difference in finger tapping performance from baseline was viewed as an interference effect whether it was detrimental or facilitating. There were, however, some expectations which could be predicted in terms of the model of prosody production under investigation. These were as follows:

1) Interference effects would be found for the right-hand-dual task performance with linguistic prosody that would be equivalent to the
interference effects of the left-hand-dual task
performance with linguistic prosody.








2) Interference effects would be found for the left-hand-dual task performance with emotional prosody that would be greater than the
interference effects shown with right-hand-dual task performance with emotional prosody.

3) Interference effects in linguistic prosody production would be found in the total utterance and would be equivalent for right- and left-hand-dual task conditions.

4) Interference effects in emotional prosody production would be found to a greater or more frequent degree when the dual task involved left-hand tapping than when it involved right-hand tapping.
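
The bidirectional analysis implied by these expectations can be sketched as follows. This is a hypothetical fragment only (the study's actual analysis was carried out with the SAS program listed in Appendix C): it compares dual-condition tapping with its single-task baseline using a two-tailed paired test, so that either a decrement or a facilitation from baseline registers as an interference effect. The trial values are invented.

from scipy import stats

# Taps per second across five trials of one condition (values invented).
left_hand_alone = [5.6, 5.8, 5.7, 5.9, 5.8]
left_hand_dual_emotional = [5.2, 5.4, 5.3, 5.5, 5.3]

t_value, p_value = stats.ttest_rel(left_hand_dual_emotional, left_hand_alone)  # two-tailed
direction = "decrement" if t_value < 0 else "facilitation"
print("t = %.2f, p = %.3f (%s relative to baseline)" % (t_value, p_value, direction))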














CHAPTER III
METHODOLOGY


Subjects

Twenty right-handed males between 18 and 23 years of age served as subjects. All had English as their native language and did not consider themselves to be bilingual (Green, 1986). They were University of Florida undergraduate or graduate students. Before acceptance as a subject, each person was required to complete a confidential questionnaire (See Appendix A). The questionnaire was designed to disqualify persons who had a history of neurological

problems or an identified hearing loss, or who were Morse code users, ham radio operators or musicians. The Edinburgh Handedness Inventory (Oldfield, 1971) was included as part of the questionnaire to verify right handedness. A score of 90 or better was required.

Apparatus

Finger Tapping

A solid-state device was constructed to register the subject's taps as a tone of 1000 Hz. This tone was produced each time and for as long as the subject's finger made contact with the device. The tone was recorded directly on the left channel of a reel-to-reel audio tape recorder. It
was monitored on the recorder's VU meter prior to each experimental session and adjusted to an acceptable recording level on the recorder.

The tapping apparatus used a three- by four-inch circuit board. A designated tapping zone was etched in the center of this board. This rectangular area measured 1 1/16 inches long by 1 3/4 inches wide.

The subject's thumb and middle finger remained in constant contact with the outer area of the copper circuit board (forming one electrode). The central tapping zone on the plate formed the second electrode. The central zone remained charged up to a 9-volt supply (with a high current

limit to prevent any shock) until contact was made by the subject's index finger. The contact produced a voltage drop

which was detected by the device and equated to on/off switching. This served to gate the tone production onto the

tape recorder as taps. The performance of the device was checked prior to each subject's use.

This device complied with the subtle tapping

requirements outlined by Green and Vaid (1986) as they specified that "electronic or solid state counters used to register taps are preferable to mechanical and

electromechanical counters since the latter devices have a greater time delay in registering taps" (p. 472). The microswitch has been used in some studies but Green and Vaid suggested that this device may be too sensitive as quivering of the finger may register as a tap. A computer program was designed to effectively measure the tapping rate, as well as the temporal characteristics of the tapping (i.e., variability, and peak-to-peak distance). This program was analogous to the equipment design suggested by Green & Vaid (1986) of a distance transducer attached to an oscillograph.

They noted that such computer usage would "provide additional analytical power" (p. 473).
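
The analysis program itself is not reproduced here, but its general approach can be sketched. The fragment below is a minimal illustration, assuming the gated 1000 Hz tone from the left channel has been digitized at 10 kHz into a NumPy array; the amplitude threshold, the smoothing window, and the demonstration signal are assumptions for this sketch, not the program actually used in the study.

import numpy as np

SAMPLE_RATE = 10_000  # assumed digitization rate, in samples per second

def tap_measures(tone_channel, threshold=0.1):
    """Recover tap onsets from the gated tone and report rate and variability."""
    # Smooth the rectified signal (~2-ms window) so each tone burst reads as one tap.
    envelope = np.convolve(np.abs(tone_channel), np.ones(20) / 20, mode="same")
    active = (envelope > threshold).astype(int)        # tone present = finger down
    onsets = np.flatnonzero(np.diff(np.concatenate(([0], active))) == 1) / SAMPLE_RATE
    intervals = np.diff(onsets)                        # peak-to-peak (inter-tap) distances
    return {
        "taps": len(onsets),
        "rate_per_s": len(onsets) / (len(tone_channel) / SAMPLE_RATE),
        "mean_interval_s": intervals.mean() if intervals.size else float("nan"),
        "interval_sd_s": intervals.std() if intervals.size else float("nan"),
    }

# Fabricated two-second recording: 100-ms bursts of 1000-Hz tone every 250 ms.
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
demo = ((t % 0.25) < 0.1) * np.sin(2 * np.pi * 1000.0 * t)
print(tap_measures(demo))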

Subjects were seated at a table in a quiet room. The tapping device was positioned at a comfortable distance for the subject's hands. The subjects were permitted to swivel the copper plate toward each hand as the tapping tasks were

introduced so that the counter was at a comfortable angle. Subjects were instructed to use only their index finger to tap and to tap in an up-and-down, not side-to-side manner. (See Appendix B.) They were required to keep their thumb and

middle finger in constant contact with the copper sheet and outside of the central rectangle during the tapping trials. The ring and little fingers were positioned on the copper or on the table. The wrist and forearm were in constant contact

with the table for the duration of each trial. The same tapping device was used by all subjects.

Speech Stimuli

The same target sentence "He will be here tomorrow" was embedded in a short paragraph of neutral context. Only the target sentence conveyed the differing prosodic








34
contour. In this respect the only variable was the prosodic

contour as the segmental aspects were identical across all paragraphs.

The following prosodic conditions were modeled:

He will be here tomorrow. (Declarative)

He will be here tomorrow? (Interrogative)

He will be here tomorrow. (Happy)

He will be here tomorrow. (Sad)



These were embedded in the following neutral paragraph:

Last week Rick had written that he would be back from Denmark on Saturday. Alice wondered what she
should do. When the phone rang, Alice recognized the voice of Rick's friend. He will be here tomorrow she said and they planned what they would
do.


These stimuli were patterned after the Shapiro and Danly (1985) study. They assessed these same target

sentences in a reading task. However, their paragraphs permitted the anticipation of the type of target through appropriate content and prosody, whereas the stimuli in this study were all of neutral context.

In the making of the stimulus tape, five paragraphs were read by the same adult male. These original recordings

were made using a TEAC X-7 RmkII reel-to-reel audio tape recorder. All the audio tapes used were professional quality Maxell UD 50-60, Hi-output, extended range, low noise, polyester base tapes. Each of the five original
paragraphs was coded semantically and was read with the intention of conveying a specific prosodic contour, one declarative, one interrogative, one happy, one sad and one neutral. These paragraphs were modified and shortened versions of the Danly and Shapiro (1985) stimuli. The tape was taken to a professional broadcast studio where the splicing and dubbing were completed to create the master stimulus tape. The neutral paragraph (as cited above) was dubbed four times. The target sentence "He will be here tomorrow" was carefully spliced out of each of the four dubbings of the neutral paragraph so that the same length of

tape was removed each time. The target sentence from the declarative paragraph was spliced out of that paragraph and inserted into one of the neutral paragraphs. The same procedure was conducted for interrogative, happy and sad targets. This resulted in the creation of four stimulus paragraphs, each having a different prosodically intoned target sentence embedded in an otherwise identically neutral content. These created paragraphs were then dubbed onto both

channels of a new master tape in a designated block-randomized order to permit five repetitions of each of 12 conditions requiring shadowing. A total of sixty paragraphs was recorded. The tape was played for ten listeners who were

asked to match the target sentence with the appropriate prosodic contour type. The average percent of correct contour identification was 94.3. When confusions were made,
the primary error was between sad and statement contours. This was understandable as the major difference between these two is duration of the utterance.

Each subject heard the paragraphs to be shadowed through both earphones of a light-weight NOVA 33-976 headset by Realistic. The presentation sound level was adjusted to a

comfortable listening level for each subject by adjusting the output dial on the TEAC X-7rmKII reel-to-reel tape recorder playing the master stimulus tape.

The shadowed response was recorded on the right channel

of the second TEAC X-7rmKII reel-to-reel tape recorder (tapping was recorded on the left channel of this recorder)

via a Realistic 33-1052 condenser lapel microphone. A plastic headband with an extension arm held the microphone at a fixed microphone-to-mouth distance of five inches. The recording level was adjusted for each subject prior to the experimental trials. All responses were recorded on one Maxell UD 50-60 professional quality polyester base reel-to-reel tape.

General Procedure

All tasks were performed alone and in the dual

conditions. Equal priority was assigned to the performance of both tasks in the dual condition. The subjects were instructed to tap as rapidly and as consistently as possible

and to shadow the paragraphs using exactly the same words and in exactly the same manner.









Initial pilot testing was conducted to design the protocol. Further pilot testing was done to test the final

protocol, to establish the specific instructions (see Appendix B) and to determine the time commitment for each subject. The entire protocol (including questionnaire) required one session of approximately one hour and fifteen minutes. The total series of tasks included:

1) Six 15-second trials of tapping (alternating three with the left; three with the right hand) were performed as practice. Reinstruction on hand positioning, rate, and consistency of tapping was
given as needed. The tapping device was used but
the responses were not recorded.

2) At least four practice trials of shadowing were
conducted with re-instruction and repetition of paragraphs performed as needed for compliance with the task requirements (same words in same manner).


The stimuli shadowed in the practice trials were modifications of Shapiro and Danly's (1985) paragraphs, as were the test stimuli. These practice paragraphs, however, included the target sentence "You wrote it last night". They

were not neutrally worded. Instead, they conveyed the intended prosodic contour (one declarative, one interrogative, one happy and one sad). The four paragraphs were presented via cassette recording and were read by the same male speaker who recorded the stimulus paragraphs. The subject listened to the paragraphs from a TEAC V-350C Stereo Cassette Deck via the same earphones. The microphone was not used, as the practice trials were not recorded.









Observations from the pilot testing and a review of the designs of previous tapping studies added the provision that no two successive series of tasks required the same hand to tap. Therefore, the following series of tasks was presented in a Modified Randomized Block Design; the same order of blocks was presented to each subject (one way of generating such a constrained order is sketched after the task list below). A total of seventy trials was performed, as each block contained fourteen trials, one for each condition. Repetitions were needed to ensure an adequate sample of performance from each subject, and five trials was the largest number of repetitions found in a dual task study involving speech (Dalen and Hugdahl, 1986).

3) Five 15-second trials of right-hand index-finger tapping alone.

4) Five 15-second trials of left-hand index-finger
tapping alone.

5) Twenty 15-second trials of speech shadowing alone. One for each of the prosodic conditions (statement, question, happy, sad) times five
repetitions.

6) Twenty 15-second pairings of right-hand tapping with speech shadowing. One for each of the
prosodic conditions times five repetitions.

7) Twenty 15-second pairings of left-hand tapping with speech shadowing. One for each of the
prosodic conditions times five repetitions.
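
The following short Python sketch illustrates one way such a constrained, block-randomized order could be generated. It is illustrative only: the condition labels, the reading of the hand restriction as applying to successive trials, and the re-draw rule are assumptions introduced here, not a description of how the actual fixed order was constructed.

    # Illustrative sketch (not the original protocol software).
    # Assumption: the hand constraint means no two successive trials
    # within a block use the same tapping hand.
    import random

    CONDITIONS = (
        [("tap_only", hand) for hand in ("left", "right")]
        + [("shadow_only", prosody) for prosody in ("Q", "P", "H", "S")]
        + [("tap_" + hand + "_shadow", prosody)
           for hand in ("left", "right") for prosody in ("Q", "P", "H", "S")]
    )                                  # 2 + 4 + 8 = 14 conditions per block

    def tapping_hand(condition):
        task, detail = condition
        if task == "tap_only":
            return detail              # "left" or "right"
        if task.startswith("tap_"):
            return task.split("_")[1]  # hand named inside "tap_<hand>_shadow"
        return None                    # shadowing alone: no tapping hand

    def make_block(rng):
        while True:                    # re-draw until the hand constraint holds
            order = CONDITIONS[:]
            rng.shuffle(order)
            hands = [tapping_hand(c) for c in order]
            if all(a is None or b is None or a != b
                   for a, b in zip(hands, hands[1:])):
                return order

    rng = random.Random(1987)          # fixed seed: same order for every subject
    schedule = [trial for _ in range(5) for trial in make_block(rng)]
    assert len(schedule) == 70         # five blocks of fourteen trials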

Data Analysis

The raw data for each subject were derived from the audio tape recordings of the subject's verbal shadowing of the target sentence on the right channel, with the alternate or simultaneous tapping performance on the left channel.









Data from the finger-tapping-alone conditions were obtained from the first two seconds of tapping in the fifteen-second sample. The data for the shadowing alone conditions were collected from the target sentence only. In the dual conditions, tapping and shadowing were measured in

an approximately simultaneous time frame (maximum skew of time frame was estimated to equal 1.5 milliseconds).

The analyses of both tapping and prosody were computer

generated. The tapping information was sent from the left channel of the tape recorder into an analog-to-digital converter within the hardware of an IBM XT personal

computer; the speech signal was sent through a PM Pitch Analyzer (Program 201) prior to its connection to the analog-to-digital converter. The PM Pitch Analyzer served as a filtering and initial data gathering device for the measurement of intensity and frequency information on the speech targets selected. As the tape was played for each target sentence, the PM Pitch Analyzer triggered the

computer for frequency information, and the computer then gathered the data over four channels at 8 kHz (or 2 kHz per channel). Six hundred points of data were collected for a 2-second time frame by the computer (providing three times the

resolution available from the Pitch Analyzer alone). These data were stored in memory as raw data and then converted to

a fundamental frequency trace and waveform envelope for speech data, or converted to square waves for tapping data.
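
Restated arithmetically, the acquisition figures given above imply the following (a worked restatement of the numbers already quoted, not additional specifications):

    8 kHz total / 4 channels  = 2 kHz per channel (acquisition rate)
    600 points / 2 seconds    = 300 stored points per second per trace

At three times the resolution of the Pitch Analyzer alone, this would correspond to roughly 100 points per second from the analyzer by itself; that last figure is an inference from the statement above, not a separately reported specification.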









In the data collection for the tapping alone condition the system was triggered by the input from the left channel of the tape recorder. A two-second time window of square waves was traced across the video screen. The cursors were consistently set at the beginning and end of the two-second

time window. At the final setting of the second cursor the predetermined calculations were performed on the data between the cursors and sent to the data file for that condition code. The calculations included rate of tapping, mean peak-to-peak distance between taps, and tapping variance (measured as standard deviation).
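
As an illustration of these three calculations, the following Python sketch (an assumed re-implementation for clarity, not the software that ran on the IBM XT; the tap times and units are hypothetical) computes the tapping rate, the mean inter-tap (peak-to-peak) spacing, and its standard deviation for a two-second window:

    import statistics

    def tapping_measures(tap_times, window_s=2.0):
        # tap_times: onset times (seconds) of taps detected between the cursors
        taps = sorted(t for t in tap_times if 0.0 <= t <= window_s)
        rate = len(taps) / window_s                         # taps per second
        gaps = [b - a for a, b in zip(taps, taps[1:])]      # peak-to-peak spacing
        mean_gap = statistics.mean(gaps) if gaps else float("nan")
        sd_gap = statistics.stdev(gaps) if len(gaps) > 1 else float("nan")
        return rate, mean_gap, sd_gap

    # Example: nine taps in two seconds gives a rate of 4.5 taps per second.
    print(tapping_measures([0.05, 0.27, 0.49, 0.72, 0.94, 1.17, 1.39, 1.62, 1.85]))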

In the data collection for the shadowing-alone

condition the system was triggered from the frequency data entering the PM Pitch Analyzer and two seconds of speech data were traced across the video screen as fundamental frequency trace and waveform envelope. The first cursor was placed at the initial nonzero portion of the fundamental frequency trace for the target utterance and the second cursor at the last nonzero segment of the fundamental frequency trace for the target utterance. The predetermined measures and calculations were then made by the computer. These measures included mean fundamental frequency, utterance duration, variance of the frequency (standard deviation), minimum frequency, maximum frequency, frequency range and slope of the utterance.
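
The seven speech measures can be illustrated with the following sketch (again an assumed re-implementation, not the original analysis program; in particular, slope is computed here as the least-squares slope of the fundamental frequency trace over time, which the text does not define explicitly):

    import statistics

    def prosody_measures(times_s, f0_hz):
        # Keep only the voiced (nonzero) portion between the two cursors.
        voiced = [(t, f) for t, f in zip(times_s, f0_hz) if f > 0]
        t = [pt for pt, _ in voiced]
        f = [pf for _, pf in voiced]
        duration = t[-1] - t[0]                    # utterance duration (s)
        mean_f0 = statistics.mean(f)               # mean fundamental frequency
        sd_f0 = statistics.stdev(f)                # frequency variance (as S.D.)
        f_min, f_max = min(f), max(f)
        t_bar = statistics.mean(t)
        slope = (sum((ti - t_bar) * (fi - mean_f0) for ti, fi in voiced)
                 / sum((ti - t_bar) ** 2 for ti in t))   # least-squares slope
        return mean_f0, duration, sd_f0, f_min, f_max, f_max - f_min, slope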









In the dual conditions both the fundamental frequency trace and the square-wave tapping trace were on the video screen. The cursor manipulation was made for duration of the target utterance and the corresponding tapping (between the cursors) was also measured.
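
A minimal sketch of this windowing step is shown below (hypothetical function and variable names; it simply restricts the tapping measures to taps whose onsets fall between the utterance cursors):

    import statistics

    def tapping_during_utterance(tap_times, utt_start, utt_end):
        taps = sorted(t for t in tap_times if utt_start <= t <= utt_end)
        gaps = [b - a for a, b in zip(taps, taps[1:])]
        return {
            "rate": len(taps) / (utt_end - utt_start),            # taps per second
            "mean_gap": statistics.mean(gaps) if gaps else None,  # peak-to-peak
            "sd_gap": statistics.stdev(gaps) if len(gaps) > 1 else None,
        }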

Fifteen of the expected 1400 data lines were missing as a result of unmeasurable traces. Factors that contributed to the unmeasurable traces were inability to place the cursor at a definite beginning and/or ending of the trace, omission of the target by the subject, and interference in the signal from the recorder.

Statistical Analysis

The statistical procedures were calculated using SAS PROC statements (SAS Institute Inc., 1985). See Appendix C.

First, these data were sorted by the three independent variables or factors: subject, hand, and condition. Then the raw data file was condensed into mean data, creating a new data set of 280 observations (20 subjects with 14 conditions each) with no missing cells. The three independent variables were arranged in a 20 X 3 X 5 factorial ANOVA to assess the validity of the null hypotheses in this investigation (Huck, Cormier & Bounds, 1974). The first variable had 20 levels, one for each subject. The second variable, hand, had three levels: right-, left-, and no-hand. The third variable, condition, had five levels: tapping only, questions, statements, happy, and sad. There









were two groupings of dependent variables. The dependent variables for tapping included rate of taps, mean peak-to-peak distance between taps and the variance of tapping. The

dependent variables for shadowing included mean fundamental frequency, utterance duration, variance of the frequency, minimum frequency, maximum frequency, range of frequency and slope of the utterance. Main effects for hand and conditions were obtained as well as interactions.

The hypotheses generated by this study revolved around the question of whether there would be an interaction between the levels of hand performance and the prosodic condition levels. The ANOVA statistic permitted the assessment of main effects and of this interaction of hand by condition, which was of primary interest in the design. Hypothesis testing for hand and for condition effects was conducted using hand-by-subject and condition-by-subject as the respective error terms.
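
In other words, the subject-by-factor interactions served as the error terms (as in the TEST statements of Appendix C), so the F ratios took the form

    F(Hand)      = MS(Hand)      / MS(Hand x Subject)
    F(Condition) = MS(Condition) / MS(Condition x Subject)

This is a restatement of the design just described, not an additional analysis.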

The mean data were then collapsed across subjects and sorted by hand and condition. A second ANOVA permitted separate analyses of the observations within each hand level (left-, right- and no-hand). Post-hoc analyses were

performed using Duncan's New Multiple Range Test (Kirk, 1968).














CHAPTER IV
RESULTS


The statistical results for each of the ten dependent variables will be presented in the following text. The measures of interest will include means, standard deviations, the ANOVA results for the interaction term hand-by-condition and for main effects, and the results of the post-hoc testing as indicated.

Finger Tapping Data

Means and standard deviations depicting the rate of tapping in each prosodic condition are presented in Table 1.

The means are displayed graphically in Figure 1. The right hand tapped faster than the left hand (F = 13.10, df = 1/19, PR>F = .0018). The main effect for condition was significant (F = 71.22, df = 4/76, PR>F = .0001). Post-hoc testing showed the rate of tapping to be significantly slower for all the speech conditions when compared to the baseline tapping-alone condition. The interaction term of hand-by-prosodic-condition was nonsignificant (F = 1.74, df = 4/76, PR>F = .1493). Both hands were slower during speech, with the right hand tapping faster than the left in all conditions.












Table 1.

Mean data across subjects for
dependent variable: rate as taps per second.


Condition              Left Hand            Right Hand
                       Mean     S.D.        Mean     S.D.


Grammatical:

Questions 4.17 .59 4.46 .53

Statements 4.11 .57 4.43 .57


Emotional:

Happy 4.10 .67 4.7 .41

Sad 3.99 .67 4.35 .5


Tap Alone:
5.22 .54 5.71 .63

















[Plot: mean rate of taps (taps per second) by condition (TA = tapping alone, Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand and the left hand.]

Figure 1.
Plot of mean rate of taps by condition for each hand.

Means and standard deviations for the variable peak-to-peak distance between taps in each prosodic condition are presented in Table 2. The means are displayed graphically in Figure 2. The right hand had a significantly shorter peak-to-peak distance than the left (F = 17.0, df = 1/19, PR>F = .0006). The effect for prosodic condition was significant (F = 41.13, df = 4/76, PR>F = .0001). Post-hoc testing revealed the peak-to-peak distance to be significantly longer for all the speech conditions when compared to the baseline, tapping-alone condition. No hand-by-prosodic-condition interaction was shown (F = 1.09, df = 4/76, PR>F = .3682). Both hands were slowed in the speaking conditions and the left hand was slower than the right.

Means and standard deviations representing the tapping variance in each prosody condition are presented in Table 3. The means are displayed graphically in Figure 3. No significant hand effect was shown (F = 2.14, df = 1/19, PR>F = .1602). The main effect for condition was significant (F = 5.46, df = 4/76, PR>F = .0006). Post-hoc testing indicated that the sad condition had greater variability than the rest of the prosodic conditions and the tapping-alone condition.

The interaction term, hand-by-prosodic-condition, was nonsignificant (F = .49, df = 4/76, PR>F = .7441).

Speech Data

Means and standard deviations depicting the mean fundamental frequency in each prosody condition are










Table 2.

Mean data across subjects for dependent variable: peak-to-peak
distance between taps.


Condition              Left Hand            Right Hand
                       Mean     S.D.        Mean     S.D.


Grammatical:

Questions 46.0 5.54 42.36 4.05

Statements 47.35 6.4 42.71 4.09


Emotional:

Happy 45.87 7.4 42.83 3.98

Sad 49.91 7.62 44.42 3.32


Tap Alone:
37.29 3.83 34.04 3.32








[Plot: mean peak-to-peak distance between taps by condition (TA = tapping alone, Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand and the left hand.]

Figure 2. Plot of mean peak-to-peak distance between taps by condition for each hand.


Table 3.

Mean data across subjects for
dependent variable: standard deviation of taps.


Condition              Left Hand            Right Hand
                       Mean     S.D.        Mean     S.D.


Grammatical:

Questions 2.62 1.58 2.18 1.45

Statements 2.02 .87 2.02 .73


Emotional:

Happy 2.45 1.4 2.01 1.31

Sad 3.21 1.83 2.84 1.84


Tap Alone:
1.73 .67 1.65 .93


















[Plot: mean variance of taps by condition (TA = tapping alone, Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand and the left hand.]

Figure 3.
Plot of mean variance of taps by condition for each hand.
presented in Table 4. The means are displayed graphically in Figure 4. The main effect for hand was not significant (F = .73, df = 2/38, PR>F = .4890). The main effect for prosody condition was significant (F = 43.38, df = 3/57, PR>F = .0001). Post-hoc testing for condition effects showed the following pattern in mean fundamental frequency, happy > question > sad = statement. No hand by prosody interaction was shown (F = 1.39, df = 6/114, PR>F = .2252).

Means and standard deviations representing the

utterance duration in each prosodic condition are presented in Table 5. The means are displayed graphically in Figure 5.

The main effect for hand was nonsignificant (F = .27, df = 2/38, PR>F = .7676). The main effect for prosody condition was significant (F = 178.16, df = 3/57, PR>F = .0001). Post-hoc testing showed a significantly longer duration for the sad condition when compared to the other prosody conditions. The hand-by-prosodic-condition interaction was nonsignificant (F = 1.40, df = 6/114, PR>F = .2211).

Means and standard deviations depicting the variance in frequency for each prosodic condition are presented in Table 6. The means are displayed graphically in Figure 6. The main

effect for hand was nonsignificant (F = 1.14, df = 2/38, PR>F = .3290). The main effect for prosodic condition was significant (F = 33.97, df = 3/57, PR>F = .0001). Post-hoc analysis produced the following pattern of variability, happy > question > sad = statement. No hand-by-prosodic-










Table 4.

Mean data across subjects for
dependent variable: mean fundamental frequency.


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean 110.93 111.73 113.71
S.D. 11.3 10.6 11.35

Statements
Mean 103.96 103.32 105.48
S.D. 9.08 9.66 11.39


Emotional:

Happy
Mean 121.75 122.77 122.5
S.D. 16.3 15.4 15.74

Sad
Mean 104.1 103.68 105.16
S.D. 8.81 8.0 10.21








[Plot: means for the mean fundamental frequency by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 4.
Plot of means for the mean fundamental frequency by condition for each hand.


Table 5.

Mean data across subjects for
dependent variable: utterance duration
(in seconds).


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean 1.027 1.014 1.026
S.D. .09 .08 .08

Statements
Mean 1.026 1.044 1.044
S.D. .08 .07 .09



Emotional:

Happy
Mean 0.991 1.003 1.007
S.D. .08 .08 .09

Sad
Mean 1.271 1.272 1.25
S.D. .13 .12 .13








[Plot: means for the utterance duration (in seconds) by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 5.
Plot of means for the utterance duration by condition for each hand.


Table 6.

Mean data across subjects for
dependent variable: variance of frequency
(standard deviation).


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean 15.71 17.07 17.51
S.D. 5.15 6.03 6.29

Statements
Mean 9.97 9.87 11.09
S.D. 2.88 2.54 5.45


Emotional:

Happy
Mean 20.65 21.39 22.04
S.D. 8.46 7.5 7.98

Sad
Mean 10.6 10.67 11.08
S.D. 3.55 2.96 5.18






















[Plot: means for the frequency variance by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 6. Plot of means for the frequency variance by condition for each hand.

condition interaction was revealed (F = .41, df = 6/114, PR>F = .8717).

Means and standard deviations for the variable minimum fundamental frequency for each prosodic condition are presented in Table 7. The means are displayed graphically in Figure 7. The main effect for hand was not significant (F = .45, df = 2/38, PR>F = .6419). The main effect for prosody condition was significant (F = 6.10, df = 3/57, PR>F = .0011). Post-hoc testing showed higher minimum frequencies for the happy and question conditions than for the sad and statement conditions. The interaction term for hand-by-prosodic-condition was nonsignificant (F = .35, df = 6/114, PR>F = .9057).

Means and standard deviations depicting the maximum frequency for each prosodic condition are presented in Table 8. The means are displayed graphically in Figure 8. The main effect for hand was not significant (F = .96, df = 2/38, PR>F = .3924). The main effect for prosodic condition was significant (F = 35.60, df = 2/38, PR>F = .0001). Post-hoc testing revealed the following pattern of significant differences in prosodic condition for maximum frequency, happy > questions > sad = statement. No hand-by-prosodic-condition interaction was shown (F = .33, df = 6/114, PR>F = .9207).

Means and standard deviations representing the

frequency range for each prosody condition are presented in Table 9. The means are displayed graphically in Figure 9. No










Table 7.

Mean data across subjects for
dependent variable: minimum frequency.


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean 85.81 85.95 86.02
S.D. 5.87 7.76 7.77

Statements
Mean 84.14 82.99 83.18
S.D. 6.4 6.87 7.91


Emotional:

Happy
Mean 85.67 85.01 86.36
S.D. 6.24 6.78 7.66

Sad
Mean 83.73 83.88 84.68
S.D. 6.3 5.6 7.08

















[Plot: means for the minimum frequency by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 7.
Plot of means for the minimum frequency by condition for each hand.


Table 8.

Mean data across subjects for
dependent variable: maximum frequency.


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean 158.54 162.63 162.54
S.D. 23.31 23.61 23.26

Statements
Mean 133.15 131.58 136.31
S.D. 14.27 12.33 23.28


Emotional:

Happy
Mean 173.57 177.21 184.3
S.D. 35.66 29.45 46.11

Sad
Mean 136.91 137.43 139.97
S.D. 21.0 13.33 22.92
















[Plot: means for the maximum frequency by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 8.
Plot of means for the maximum frequency by condition for each hand.


Table 9.

Mean data across subjects for
dependent variable: frequency range.


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean 72.74 76.68 76.52
S.D. 23.23 22.18 21.55

Statements
Mean 49.01 48.59 53.14
S.D. 14.74 10.72 24.04


Emotional:

Happy
Mean 87.9 92.2 97.94
S.D. 33.31 26.17 46.04

Sad
Mean 53.18 53.55 55.29
S.D. 18.3 13.54 23.69














[Plot: means for the frequency range by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 9.
Plot of means for the frequency range by condition for each hand.

main effect for hand was shown (F = .78, df = 2/38, PR>F = .4643). The main effect for prosody condition was

significant (F = 34.9, df = 3/57, PR>F = .0001). Post-hoc testing revealed the following pattern in frequency range, happy > questions > sad = statements. The interaction term for hand-by-prosodic-condition was not significant (F = .26, df = 6/114, PR>F = .9551).

Means and standard deviations depicting the utterance slope for each prosody condition are presented in Table 10.

The means are displayed graphically in Figure 10. The hand effects were nonsignificant (F = .81, df = 2/38, PR>F = .4539). The main effect for prosodic condition was

significant (F = 42.61, df = 3/57, PR>F = .0001). Post-hoc testing indicated that the happy and question contours with

positive slopes differed significantly from the sad and statement contours with negative slopes. The hand-by-prosodic-condition interaction was not significant (F =

2.02, df = 6/114, PR>F = .0688).

Summary

Analyses of the concurrent performances and their comparison to the baseline performances were made for both the finger tapping and the prosodic conditions. None of the dependent measures demonstrated a hand-by-prosodic-condition interaction. However, descriptive analysis revealed a

bimanual and symmetrical effect of speech upon the finger tapping. The tapping was slower in the speaking conditions than in the baseline conditions. Tapping did not have an










Table 10.

Mean data across subjects for
dependent variable: slope of utterance.


Condition Left Hand Right Hand No Hand


Grammatical:

Questions
Mean .0875 .122 .113
S.D. .09 .1 1.0

Statements
Mean -.067 -.071 -.056
S.D. .04 .04 .07


Emotional:

Happy
Mean .052 .038 .058
S.D. .05 .07 .08

Sad
Mean -.049 -.054 -.05
S.D. .05 .03 .05



















[Plot: means for the slope of the utterance by condition (Q = questions, P = statements, H = happy, S = sad), with separate lines for the right hand, the left hand, and no hand.]

Figure 10.
Plot of means for the slope of the utterance by condition for each hand.

effect upon the speech prosody production characteristics. These results require the acceptance of the null hypotheses

for the tapping, for the production of emotional prosody, and for the production of linguistic prosody.














CHAPTER V
DISCUSSION


The results of this investigation do not support the notion of a hemispheric split in prosodic production. The lack of an asymmetric interference effect in the dual task paradigm suggests that both hemispheres may be processing aspects of linguistic and emotional prosody. There are at least two sides to this position. The first is that the production of prosodic features is a bihemispheric

phenomenon regardless of emotional or linguistic factors. The second possible explanation is that linguistic or emotional contexts influence the hemisphericity of the productive control of prosody. However, the procedures used in this investigation did not allow for the influence of linguistic or emotional factors in the speech produced.

Procedural Explanations

Whenever the expected results are not obtained, the potential influence of the procedures must be addressed. The

literature on dual task paradigms describes a number of procedural guidelines to maximize the likelihood of creating

interference effects (Bryden, 1982; Green and Vaid, 1986). The careful planning of this investigation took many of these precautions into consideration. In contrast with the











majority of studies reviewed in the literature on dual task paradigms, the bimanual results obtained in this investigation were symmetrical.

The subjects of the study did not appear to influence the production of symmetrical results. The number of subjects did not appear to be a factor. The original work of Kinsbourne and Cook (1971) produced significant asymmetrical

results with twenty subjects. However, the motor task in their study was dowel balancing rather than finger tapping.

Kee, Bathurst and Hellige (1983) did use single finger tapping as the motor task and obtained asymmetrical results with twelve right-handed and twelve left-handed subjects. Green and Vaid (1986) recommended that handedness, sex, and language experience be controlled in subject selection. These criteria were accounted for in the questionnaire for enrollment as a subject in the current investigation.

The use of single finger tapping had been shown in the literature to produce greater verbal laterality effects than

two-key tapping (Kee, Morris, Bathurst and Hellige, 1986). Distal finger movement has been well documented as an exclusively contralateral event (e.g., Brinkman and Kuypers, 1972). Therefore, the nature of the motor task itself would not appear to be responsible for the symmetrical results. The sensitivity of the tapping device has come under close scrutiny in the literature (Green and Vaid, 1986). The device constructed for this study was designed to meet the








most current requirements as outlined in the literature (Green and Vaid, 1986).

Analysis of the tapping has been the focus of the majority of verbal-manual dual task studies. Tapping rate has typically been measured for the duration of the trial (usually 15 to 20 seconds). In this study however, only that

portion of the tapping coinciding with the target sentence was analyzed. It is conceivable that since the paragraphs were always the same, the tapping may have been altered just

prior to the target. At that point in time the subject may have been adjusting his attentional set and preparing for the analysis of the novel segment of the stimuli.

Speech shadowing was selected to meet the

recommendation that the verbal task in the dual task paradigm be complex and continuous (Green and Vaid, 1986). Shadowing permitted some control over individual variability of speech production as it required the exact repetition of a model. The difficulty level of such online imitation may have forced the subjects to attend to the acoustic/phonetic transcoding of the information rather than to the linguistic and/or emotional information. This argument will be addressed further in the discussion section on prosody models. It is also possible that the use of the same paragraph across all conditions may have served to reduce the attentional requirements of the subjects on all but the target sentence. The neutrality of the stimuli, with the









exception of the target sentence, may have been a factor. Paragraphs carrying a greater degree of prosodic information and a greater variety of contours may not only have produced

stronger attentional demands, but may have produced more asymmetrical results in response to such prosodic demands.

Prosody Model Explanations

In understanding the neurologic basis for prosody, there are two interrelated issues that should be addressed.

One deals with the question of whether the left, right or both hemispheres are dominant for speech prosody. A related

issue concerns the various functions of prosody as part of the production of speech, as a mechanism for expressing emotions or as part of the linguistic code. Prosody has been called a third component of language along with grammar and

semantics (Weintraub et al., 1981). The left hemisphere's dominant role in language has been clearly established, and linguistic aspects of prosody have been attributed to the left hemisphere. Prosody (especially affective prosody) has been linked with emotional functions of the right hemisphere

(Tucker et al., 1977). Another aspect of production of prosody is that prosody is tied to the acoustic-phonetic process of speech and is involved in changes in pitch, intensity and duration. Studies of patients with right or left hemisphere lesions have documented impairment to the segmental and prosodic aspects of speech following lesions to either hemisphere or for that matter, brainstem









structures (Kent and Rosenbek, 1982). When discussing the production of prosodic features, the role of motor speech ability and the acoustic-phonetic aspects of speech and prosody should not be ignored. Sidtis (1984) has suggested that deficits in the production of prosodic features may occur independently of emotional or linguistic content. However, he states further that

     the existence of an expressive dysprosody for emotional and paralinguistic content with the control of vocal pitch otherwise intact remains to be demonstrated. (1984, p. 110)

Obviously speech production is the mechanism by which humans express spoken language and/or emotional content. In this respect, the prosodic aspects of speech are influenced

by the context (linguistic and/or emotional) in which they are evoked. Emotional speech also has linguistic structure. Therefore, it seems that it is the strength of the relative

contributions of linguistic versus emotional content that determines the characteristics of speech prosody. The implication that prosody resides in one or the other hemisphere may reflect nothing more than the function served

by prosodic features to express specific linguistic or emotional content.

This is a similar argument to that offered by Shipley-Brown and Dingwall (1986) in advancing their Attraction Hypothesis. They suggested that the comprehension of prosody

for both emotional and linguistic content was primarily a right-hemisphere phenomenon. In contrast, they observed that








the left hemisphere also was active in comprehending linguistic but not emotional prosody. They hypothesized that the right hemisphere is dominant for prosodic comprehension;

however, the influence of linguistic content may attract certain prosodic characteristics to the left hemisphere.

One of the few studies to evaluate production aspects of linguistic and emotional prosody as a function of left- and right-hemisphere damage was conducted by Shapiro and Danly (1985). Their findings suggested dominance of the right hemisphere in the production of emotional prosody, but

bilateral contribution in the production of linguistic prosody. This is a similar position to that offered by Shipley-Brown and Dingwall in their study of prosodic comprehension.

The view, as stated above, that prosody is tied to speech and that the characteristics may be influenced by linguistic or emotional contexts, would predict different results from those obtained by Shipley-Brown and Dingwall (1986) and Shapiro and Danly (1985). Specifically, this modification of the Attraction Hypothesis, hereafter referred to as the Investment Hypothesis, would predict that the right hemisphere would have a greater investment in emotional speech prosody and that the left hemisphere would have a greater investment in linguistic speech prosody. The

investment is determined by the contextual nature of the









utterance. The prosodic feature is the outcome of the selective investment of emotional or linguistic input.

The results of the Shapiro and Danly (1985) study suggested that right hemisphere damage alone "may result in a primary disturbance of speech prosody that may be

independent of the disturbances in affect often noted in right-brain damaged populations" (1985, p. 19). Their conclusions may have been premature as they did not test patients with left-frontal lesions. Crary and Haak (1986) tested patients with right- and left-hemisphere lesions, including left-frontal lesions. Some of their patients were aphasic, while others were not. They observed that a basic deficit to speech prosody resulted from either right- or left-hemisphere frontal damage. However, linguistic aspects of prosodic production (i.e., terminal contour direction and sentence fundamental frequency declination) were impaired only in those patients demonstrating an aphasic impairment. Based on the results obtained by Crary and Haak (1986), the question

is raised as to whether the linguistic prosody deficit in the right-hemisphere-damaged patients of Shapiro and Danly (1985) indicated the presence of linguistic prosody in the right hemisphere or represented a more direct impairment in the production of segmental and prosodic aspects of speech irrespective of language content. Further support for this position was offered by Ryalls (1986), in his reply to Shapiro and Danly. He cited studies describing prosodic









deficits with left-hemisphere damage and referred to Shapiro

and Danly's (1985) claim about right hemisphere damage producing a primary deficit in prosody as "premature" (1986, p. 187).

In summarizing the first argument, it is proposed that the production of speech prosody is bihemispheric only insofar as it may be influenced by the linguistic and emotional content of spoken utterances. This position is consistent with traditional views such as those of Monrad-Krohn (1947), who, in considering the localization of prosody, espoused the view that "cooperation of the whole brain is probably needed" (p. 415).

The results of the present study did not support this Investment Hypothesis. However, the model may still be useful in explaining the present results. The suspect area of influence is the shadowing technique used in the dual task paradigm. Shadowing was selected because current literature on verbal manual dual task paradigms suggested that the verbal task should be complex and continuous (Green

and Vaid, 1986). Retrospectively, it is possible that the difficulty of the shadowing task caused the subjects to focus on the acoustic-phonetic components of segmental and prosodic speech rather than on the linguistic or emotional context. In fact, the subjects were very accurate at reproducing the respective prosodic contours and showed no confusion among expected acoustic parameters associated with









the four types of utterances. Marslen-Wilson (1975) has suggested that during shadowing there are simultaneous levels of processing. He concluded that it would be possible

for information processing at one level to constrain information processing at another level. The procedures used

in this investigation did not permit the formation of inferences regarding whether or not the subjects attended to the linguistic or affective contexts of the shadowed stimuli.

The investment hypothesis would predict an asymmetrical hand effect for emotional versus linguistic context. However, if the contextual investment of emotion or

linguistic information was weakened as a result of task demands, no asymmetry attributed to emotion or to language would be expected. The effect of speech on hand performance would be expected, however.

Dual Task Paradigm Explanations

The history of the dual task paradigm is laden with studies showing bimanual effects. These effects however, are

most often asymmetrical. One argument used to explain bilateral and symmetrical interference effects has been directed toward the specific tasks involved. When both tasks are too difficult the interference effects are thought to be masked; when both tasks are too simple, a ceiling effect is

thought to occur (Green and Vaid, 1986). The symmetrical effects in this study do not appear to be the result of









either masked interference or ceiling effects as one task was designed to be simple, the other complex. McFarland and Aston (1978) have offered an explanation which may be better

suited to the tasks used here. They studied sequential finger tapping and verbal memory load tasks. Their investigation used two levels of difficulty in the verbal task. The more difficult verbal task did not produce the differential interference effects. McFarland and Aston suggested that increased difficulty in either of the tasks (cognitive/verbal or motor) leads to a loss of asymmetrical effects. They noted that "under the increased memory-span condition, verbal-task performance may involve more diffuse neural activity . . ." (1978, p. 739). The issue in relating these findings to the current investigation is one of defining levels of difficulty. No levels of shadowing difficulty were included for comparison. It is conceivable, however, that the demands of the shadowing task were such that the differential interference effects were lost. Summers and Sharp (1979) also found bimanual and symmetrical results in their pairing of sequential finger tapping and verbal rehearsal. However, their results using single finger tapping were asymmetrical. Despite the fact that these studies used sequential finger tapping and neither used shadowing, the argument that increasing the difficulty of either task produces bilateral disruption should be considered. Previous uses of speech shadowing in dual task paradigms









have been limited. Allport, Antonis and Reynolds (1972) used

shadowing of speech and reading of music as dual tasks and did not find significant interference. It has been suggested

that this was because the participants were musicians and were proficient in their musical ability. This implied that

the tasks were too simple for the subjects and therefore produced a ceiling effect. Nonlateralizing results have also

been obtained by investigators who were attempting to replicate studies with lateralizing results (e.g., Lomas and Kimura, 1976).

The functional cerebral distance model used in this investigation suggested that no interference may be found if

both tasks are different and require brain structures that do not overlap. Single finger tapping and speech tasks have

been shown to produce interference effects. Few of the studies that have used speech as the interfering task have analyzed the speech. One of those that did was conducted by

Bowers et al. (1978). They did not find disruption of speech. They did find disruptions of the tapping. Bowers et al. suggested that this may have been because of a 'one-way

street' phenomenon which exists with language having priority over motor performance.

Concluding Remarks

Prosody may be a component of speech production. As such, prosody would appear to be influenced by the context in which it is produced. It was proposed that this influence










comes from varying degrees of investment in emotional and linguistic information. This Investment Hypothesis would predict an asymmetrical hand effect for emotional versus linguistic context. This asymmetrical effect may be weakened as a result of task demands, thereby producing no attributable asymmetry to either emotion or language. In the verbal-manual dual task paradigm, the verbal task appears to dominate the direction of the interference effects.

Implications for Future Research

The results of this study question the existing models

of prosody production. The findings by themselves are not sufficient to make an unequivocal statement about the production of prosody in brain-intact males. However, the strength of these results in their nonsignificance warrants further attention to this paradigm.

One aspect of future research would involve further analysis of the data already obtained in the current study. As suggested in the discussion of the procedures, disruption

in tapping performance may have occurred just prior to the target sentence or just after the target. Comparison of the

tapping rate before, during and after the target sentence would permit verification of attentional shifting on the part of the subject as he responds to the novel segment of the paragraph.

A second area for future research would address the assessment of the subjects' processing of the linguistic and









emotional content of their productions. The overall design would be the same as the current one with the inclusion of an additional step in the procedure. The subjects would be asked to select (from a list) the type of contour most closely resembling the one they just shadowed. This contour choice would be made after each paragraph and would allow the examiner to compare the acoustical accuracy of production with the perceptual processing of the contour.

Another potential study would address the influence of emotional or linguistic investment in the production of prosodic contours. This would require practice sessions during which each subject would be instructed in how to produce

target sentences using specific prosodic contours. Then within the context of tapping, the subject would be shown a card indicating the contour type to be produced. The subject

would then be asked to produce the target sentence with as much emotional or linguistic investment as possible.













APPENDIX A
QUESTIONNAIRE

Subject Number:                              Date:

Sex:

Age:

Handedness:
Native Language:
Education Level:
Do you have a history of any neurological problems?
Do you have an identified hearing loss?
Do you consider yourself an expert in:
     Typing (faster than 40 wpm)?
     Morse Code?
     Ham Radio Operation?
Are you a musician?















APPENDIX B
INSTRUCTIONS

Tasks
There will be seventy 15-second tasks and four paragraphs to read. I will tell you which tasks to do. Some will be tapping your finger alone, some will be repeating paragraphs alone and at times you will be tapping and repeating at the same time. When you do the two at the same time, you should try to do your best on both. --- DO THEM EQUALLY WELL.

Tapping
You will be asked to tap your index finger on this copper plate. Use the pad of your finger ---- not the very tip. You will be tapping with your right hand and with your left hand. Demonstrate position.

(Checklist)
1. Tap within the square.
2. Thumb and middle finger stay in contact with the
rest of the copper plate.
3. You may arch your fingers.
4. Arm in contact with the table.
5. Use an up-and-down motion, not a rocking from side-to-side.

Go ahead and try the hand position. Tap as FAST and at as CONSISTENT a rate as you possibly can. PRACTICE TAPPING TRIALS. ANY QUESTIONS?

Shadowing
You will be listening to a tape recording of a man reading paragraphs (one at a time). You are to repeat the paragraph
by following along behind him. Try not to get too far behind, and as you become more familiar with the information, don't get ahead of the speaker. It is important to repeat EXACTLY WHAT the man says and in EXACTLY THE SAME WAY the man says it. PRACTICE SHADOWING TRIALS. ANY QUESTIONS?

Review
Remember: Say the Same words in the Same way as the speaker.
Remember: Tap as FAST and at as CONSISTENT a rate as you possibly can. ANY QUESTIONS? We will pause to take 2 short breaks. You may request to stop at any time.

















APPENDIX C
SAS STATISTICAL PROGRAM

DATA DISS;
  INPUT HAND $ 1 COND $ 3-4 SENT 5-6 SUBJ 8-9
        TAPSEC TAPMN TAPMAX TAPSD MEANHZ TESTDUR SDHZ MINHZ MAXHZ CHANGEHZ SLOPE;
  IF COND = ' '  THEN COND = 0;
  IF COND = 'GQ' THEN COND = 1;
  IF COND = 'GP' THEN COND = 2;
  IF COND = 'EH' THEN COND = 3;
  IF COND = 'ES' THEN COND = 4;
  IF HAND = 'N' THEN DO; TAPSEC=.; TAPMN=.; TAPSD=.; END;
  IF COND = 0 THEN DO; MEANHZ=.; TESTDUR=.; SDHZ=.; MINHZ=.; MAXHZ=.; SLOPE=.; END;
  IF SENT = -2 THEN DELETE;
CARDS;
LDGP-1 1 3.62 55.0 2.0 105 1.105 7.30 92 124 32 -0.082660
NAES 1 0.00 0.0 0.0 107 0.985 9.92 89 142 53 -0.102510

PROC SORT;
  BY SUBJ HAND COND;
PROC MEANS;
  BY SUBJ HAND COND;
  VAR TAPSEC TAPMN TAPSD MEANHZ TESTDUR SDHZ MINHZ MAXHZ CHANGEHZ SLOPE;
  OUTPUT OUT = AVES MEAN = TAP1 TAP2 TAP3 SP1 SP2 SP3 SP4 SP5 SP6 SP7;
PROC ANOVA;
  CLASSES SUBJ HAND COND;
  MODEL TAP1-TAP3 SP1-SP7 = SUBJ HAND SUBJ*HAND COND COND*SUBJ HAND*COND;
  TEST H = HAND E = HAND*SUBJ;














  TEST H = COND E = COND*SUBJ;
  MEANS HAND*COND;
PROC SORT;
  BY HAND COND;
PROC MEANS;
  BY HAND COND;
  VAR TAP1-TAP3 SP1-SP7;
  OUTPUT OUT = AVES MEAN = MTAP1 MTAP2 MTAP3 MTAP5 MSP1 MSP2 MSP3 MSP4 MSP5 MSP6 MSP7;
PROC PLOT;
  PLOT (MTAP1--MSP7)*COND = HAND;
PROC ANOVA DATA = AVES;
  BY HAND;
  CLASSES SUBJ COND;
  MODEL TAP1--SP7 = SUBJ COND;
  MEANS COND/DUNCAN;
/*EOJ















REFERENCES


Allport, D.A., Antonis, B., & Reynolds, P. (1972). On the
      division of attention: A disproof of the single channel
      hypothesis. Quarterly Journal of Experimental
      Psychology, 24, 225-235.

Blumstein, S., & Cooper, W.E. (1974). Hemispheric processing
      of intonation contours. Cortex, 10, 391-404.

Bowers, D., Heilman, K.M., Satz, P., & Altman, A. (1978).
      Performance on verbal, nonverbal and motor tasks by
      right-handed adults. Cortex, 14, 540-556.

Brinkman, J., & Kuypers, H.G.J.M. (1972). Split-brain
      monkeys: Cerebral control of ipsilateral and
      contralateral arm, hand, and finger movements. Science,
      176, 536-539.

Bryden, M.P. (1982). Laterality: Functional asymmetry in the
      intact brain. New York: Academic Press.

Bryden, M.P., & Ley, R.G. (1983). Right-hemispheric
      involvement in the perception and expression of emotion
      in normal humans. In K.M. Heilman & P. Satz (Eds.),
      Neuropsychology of Human Emotion (pp. 6-44). New York:
      Guilford Press.

Cherry, E.C., & Taylor, W.K. (1954). Some further
      experiments on the recognition of speech with one and
      two ears. Journal of the Acoustical Society of America,
      26, 554-559.

Cooper, W.E., Soares, C., Nicol, J., Michelow, D., &
      Goloski, S. (1984). Clausal intonation after
      unilateral brain damage. Language and Speech, 27, 17-24.












Crary, M.A., & Haak, N.J. (1986). A neurolinguistic basis
      for propositional prosody. A poster presented at the
      annual meeting of the International Neuropsychological
      Society in Denver, Colorado.

Cutler, A., & Ladd, D.R. (1983). Language and Communication:
      Vol. 14. Prosody: Models and Measurements. New York:
      Springer-Verlag.

Dalen, K., & Hugdahl, K. (1986). Inhibitory versus
      facilitatory interference for finger-tapping to verbal
      and nonverbal, motor, and sensory tasks. Journal of
      Clinical and Experimental Neuropsychology, 5, 627-636.

Danly, M., Cooper, W.E., & Shapiro, B.E. (1983). Fundamental
      frequency, language processing and linguistic structure
      in Wernicke's aphasia. Brain and Language, 19, 1-24.

Danly, M., & Shapiro, B.E. (1982). Speech prosody in Broca's
      aphasia. Brain and Language, 16, 171-190.

Denes, G., Caldognetto, E.M., Semenza, C., Vagges, K., &
      Zettin, M. (1984). Discrimination and identification of
      emotions in human voice by brain-damaged subjects.
      Acta Neurologica Scandinavica, 69, 154-162.

Eady, S.J., & Cooper, W.E. (1986). Speech intonation and
      focus location in matched statements and questions.
      Journal of the Acoustical Society of America, 80,
      402-415.

Friedman, A., & Polson, M.C. (1981). Hemispheres as
      independent resource systems: Limited-capacity
      processing and cerebral specialization. Journal of
      Experimental Psychology: Human Perception and
      Performance, 7, 1031-1058.

Green, A. (1986). A time sharing cross-sectional study of
      monolinguals and bilinguals at different levels of
      second language acquisition. Brain and Cognition, 5,
      477-497.

Green, A., & Vaid, J. (1986). Methodological issues in the
      use of the concurrent activities paradigm. Brain and
      Cognition, 5, 465-476.

Hartje, W., Willmes, K., & Weniger, D. (1985). Is there
      parallel and independent hemispheric processing of
      intonational and phonetic components of dichotic speech
      stimuli? Brain and Language, 24, 83-99.










Heilman, K.M. (1983). Introduction. In K.M. Heilman & P.
      Satz (Eds.), Neuropsychology of Human Emotion. New
      York: Guilford Press.

Heilman, K.M., Bowers, D., Speedie, L., & Coslett, H.B.
      (1984). Comprehension of affective and nonaffective
      prosody. Neurology, 34, 917-921.

Hellige, J.B. (1985). Hemisphere-specific priming and
      interference: Issues in conceptualization. Paper
      presented at the Annual Convention of the International
      Neuropsychological Society, San Diego, California.

Hellige, J.B., & Longstreth, L.E. (1981). Effects of
      concurrent hemisphere-specific activity on unimanual
      tapping rate. Neuropsychologia, 19, 395-405.

Huck, S., Cormier, W.H., & Bounds, W.G. (1974). Reading
      Statistics and Research. New York: Harper and Row.

Kee, D.W., Morris, K., Bathurst, K., & Hellige, J.B. (1986).
      Lateralized interference in finger tapping: Comparisons
      of rate and variability measures under speed and
      consistency tapping instructions. Brain and Cognition,
      5, 268-279.

Kent, R.D., & Rosenbek, J.C. (1982). Prosodic disturbance
      and neurologic lesion. Brain and Language, 15, 259-291.

Kimura, D., & Vanderwolf, C.H. (1970). The relation between
      hand performance and the performance of individual
      finger movements by left and right hands. Brain, 93,
      769-774.

Kinsbourne, M., & Cook, J. (1971). Generalized and
      lateralized effects of concurrent verbalization on a
      unimanual skill. Quarterly Journal of Experimental
      Psychology, 23, 341-345.

Kirk, R.E. (1968). Experimental Design: Procedures for the
      Behavioral Sciences. Belmont, California: Wadsworth
      Publishing Co., Inc.

Lackner, J.R., & Shattuck-Hufnagel, S.R. (1982). Note:
      Alterations in speech shadowing ability after cerebral
      injury in man. Neuropsychologia, 20, 709-714.

Lezak, M.D. (1983). Neuropsychological Assessment. New York:
      Oxford University Press.










Lomas, J., & Kimura, D. (1976). Intrahemispheric
      interaction between speaking and sequential manual
      activity. Neuropsychologia, 14, 23-33.

Lonie, J., & Lesser, R. (1983). Intonation as a cue to
      speech act identification in aphasic and other brain-
      damaged patients. Research News: International Journal
      of Rehabilitation Research, 6, 512-513.

Marslen-Wilson, W.D. (1975). Sentence perception as an
      interactive parallel process. Science, 189, 226-228.

McFarland, K., & Aston, R. (1978). The influence of
      concurrent task difficulty on manual performance.
      Neuropsychologia, 16, 735-741.

Monrad-Krohn, G.H. (1947). Dysprosody or altered "melody of
      language." Brain, 70, 405-423.

Navon, D., & Gopher, D. (1979). On the economy of the human
      information processing system. Psychological Review,
      86, 214-225.

Oldfield, R.C. (1971). The assessment and analysis of
      handedness: The Edinburgh inventory. Neuropsychologia,
      9, 97-113.

Olive, J.P. (1975). Fundamental frequency rules for
      synthesis of simple declarative English sentences.
      Journal of the Acoustical Society of America, 57,
      476-482.

Papanicolaou, A.C., Levin, H.S., Eisenberg, H.M., & Moore,
      B.D. (1983). Note: Evoked potential indices of
      selective hemisphere engagement in affective and
      phonetic tasks. Neuropsychologia, 21, 401-405.

Peters, M. (1977). Note: Simultaneous performance of two
      motor activities: The factor of timing.
      Neuropsychologia, 15, 461-465.

Peters, M. (1981). Note: Handedness: Effect of prolonged
      practice on between hand performance differences.
      Neuropsychologia, 19, 587-590.

Rice, D.G., Abroms, G.M., & Saxman, J.H. (1969). Speech and
      physiological correlates of "flat" affect. Archives of
      General Psychiatry, 20, 566-572.










Ross, E.D. (1981). The aprosodias: Functional-anatomic
      organization of the affective components of language in
      the right hemisphere. Archives of Neurology, 38,
      561-569.

Ross, E.D., & Mesulam, M. (1979). Dominant language
      functions of the right hemisphere? Prosody and
      emotional gesturing. Archives of Neurology, 36,
      144-148.

Ryalls, J.H. (1982). Intonation in Broca's aphasia.
      Neuropsychologia, 20, 355-360.

Ryalls, J.H. (1986). Reply to Shapiro and Danly. Brain and
      Language, U, 183-187.

SAS Institute Inc. (1985). SAS User's Guide: Statistics,
      Version 5 Edition. Cary, NC: SAS Institute Inc.

Shapiro, B.E., & Danly, M. (1985). The role of the right
      hemisphere in the control of speech prosody in
      propositional and affective contexts. Brain and
      Language, 25, 19-36.

Shattuck, S.R., & Lackner, J.R. (1975). Speech production:
      Contribution of syntactic structure. Perceptual and
      Motor Skills, 40, 931-936.

Shipley-Brown, F., & Dingwall, W.O. (1986). Affective and
      linguistic prosody: A dichotic listening test of their
      processing in normals. Paper presented at the Annual
      Convention of the International Neuropsychological
      Society, Denver, Colorado.

Sidtis, J.J. (1984). Music, pitch perception, and the
      mechanisms of cortical hearing. In M.S. Gazzaniga
      (Ed.), Handbook of Cognitive Neuroscience. New York:
      Plenum Press.

Summers, J.J., & Sharp, C.A. (1979). Bilateral effects of
      concurrent verbal and spatial rehearsal on complex
      motor sequencing. Neuropsychologia, 17, 331-343.

Taylor, M.M., Lindsay, P.H., & Forbes, S.M. (1967).
      Quantification of shared capacity processing in
      auditory and visual discrimination. Acta Psychologica,
      27, 223-229.

Tompkins, C.A., & Flowers, C.R. (1985). Perception of
      emotional intonation by brain-damaged adults: The
      influence of task processing levels. Journal of Speech
      and Hearing Research, 28, 526-538.










Tompkins, C.A., & Mateer, C.A. (1985). Right-hemisphere
      appreciation of prosodic and linguistic indications of
      implicit attitude. Brain and Language, 24, 185-203.

Tucker, D.M., Watson, R.T., & Heilman, K.M. (1977).
      Discrimination and evocation of affectively intoned
      speech in patients with right parietal disease.
      Neurology, 27, 947-950.

Vaissiere, P. (1983). Language-independent prosodic
      features. In A. Cutler and D.R. Ladd (Eds.),
      Language and Communication: Vol. 14. Prosody: Models
      and Measurements. New York: Springer-Verlag.

Weintraub, S., Mesulam, M., & Kramer, L. (1981).
      Disturbances in prosody: A right hemisphere
      contribution to language. Archives of Neurology, 38,
      742-744.

Williams, C.E., & Stevens, K.N. (1972). Emotions and
      speech: Some acoustical correlates. Journal of the
      Acoustical Society of America, 52, 1238-1250.

Wyke, M. (1968). The effect of brain lesions in the
      performance of an arm-hand precision task.
      Neuropsychologia, 6, 125-134.













BIOGRAPHICAL SKETCH


Nancy Jeanne Haak, daughter of Dr. Edward D. Haak and Mrs. Jeanne Brainard Haak, was born in LaGrange, Georgia, on

September 8, 1957. The fourth of five children, she spent the first seventeen years of her life in Warm Springs, Georgia. Her father practiced his medical specialty there at

the Georgia Warm Springs Polio Foundation. From her family and the Foundation, Nancy Jeanne developed an interest in the health related professions and has carried with her a model of how rehabilitation should be done.

Her teachers at Warm Springs Elementary School fostered in her a love of teaching. Nancy Jeanne was an honors graduate of her 1975 high school class in Manchester, Georgia. During her years there, she decided to pursue the profession of speech pathology as it appeared to have the right combination of teaching and clinical practice. In 1979, she graduated with highest honors from Auburn

University with her Bachelor of Arts degree in speech pathology and audiology and her minor in psychology. Her Master of Science degree in speech pathology was obtained in

1980 from Purdue University with her minor in psychology. After two and a half years of practicing clinical


92




Full Text

PAGE 1

TESTING A MODEL OF PROSODY PRODUCTION IN BRAIN-INTACT INDIVIDUALS USING THE DUAL TASK PARADIGM BY NANCY JEANNE HAAK A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 1987

PAGE 2

Copyright 1987 by Nancy Jeanne Haak

PAGE 3

TO MY FATHER FOR HIS BIRTHDAY

PAGE 4

ACKNOWLEDGMENTS The constant love, support, and encouragement given to me by my parents made the completion of this project possible. A special thank you is given to my dear friend and colleague, Janet Harrison, who devoted much of her time and effort to help me through the dark days and to celebrate the victories. Many thanks are also given to my committee members, Dr. Fennell, Dr. Gonzalez-Rothi , Dr. Lombardino, and Dr. Rothman. They have been there from the begining always willing to guide, advise, and encourage. Words of gratitude seem so inadequate when it comes to acknowledging the efforts of my chairman, Michael Crary. He has been my friend and made me feel welcome in his home and with his family. He has been my colleague and allowed me to learn along with him. He has been my professor and shown me how to mature as a clinician and as a researcher. For these and the countless times he has had more than 'a minute', I offer my warmest thanks. iv

PAGE 5

TABLE OF CONTENTS Page ACKNOWLEGMENTS iy ABSTRACT vii CHAPTERS I INTRODUCTION 1 II REVIEW OF THE LITERATURE 5 Prosody 5 Comprehension of Prosody in Brain-Damaged Persons. ... 8 Production of Prosody in Brain-Damaged Persons. ... 9 Comprehension of Prosody in Brain-Intact Persons .... 13 Production of Prosody in Brain-Intact Persons .... 14 Dual Task Paradigm 15 Finger Tapping 20 Speech Shadowing 22 Statement of the Problem .27 Research Hypotheses 28 III METHODS 31 Subjects 31 Apparatus 31 Finger Tapping 31 Speech Stimuli 33 General Procedure 36 Data Analysis 38 Statistical Analysis 41 IV RESULTS Finger Tapping Data 43 Speech Data 46 Summary 65 v

PAGE 6

V DISCUSSION ... 69
    Procedural Explanations ... 69
    Prosody Model Explanations ... 72
    Dual Task Paradigm Explanations ... 77
    Concluding Remarks ... 79
    Implications for Future Research ... 80

APPENDICES
    A QUESTIONNAIRE ... 82
    B INSTRUCTIONS ... 83
    C SAS STATISTICAL PROGRAM ... 84

REFERENCES ... 86

BIOGRAPHICAL SKETCH ... 92

PAGE 7

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy TESTING A MODEL OF PROSODY PRODUCTION IN BRAIN-INTACT INDIVIDUALS USING THE DUAL TASK PARADIGM By Nancy Jeanne Haak August, 1987 Chairman: Michael A. Crary, PhD. Major Department: Speech A current model of prosody production suggests that the right cerebral hemisphere processes emotional prosody and that both hemispheres may be responsible for the production of linguistic prosody. The primary basis for this model has involved brain-damaged patients. The current investigation was conducted to assess this model in brain-intact persons. A dual task paradigm was used to designate hemispheric laterality through interference effects in the performance of competing tasks. The tasks selected were single index finger tapping and speech shadowing. Twenty right-handed males with no history of neurological problems participated in the study. Four prosodic contours representing linguistic and emotional prosody were imitated in the shadowing task. Tapping was performed as rapidly and consistently as possible. vii

PAGE 8

Analyses of the concurrent performances and their comparison to baseline were made for both tasks. Ten dependent variables were analyzed, three describing tapping and seven acoustical characteristics of prosody. No significant interactions between tapping and shadowing were found. However, descriptive analysis revealed a bimanual and symmetrical effect of speech upon finger tapping. Tapping did not have an effect upon the speech (prosody production) characteristics . The model of prosody production under investigation in this study suggested that both hemispheres contributed to the production of linguistic prosody but one hemisphere was responsible for the production of emotional prosody. The results of this study did not support this notion of a hemispheric split in prosodic production. Two possible explanations are discussed. First, the production of prosody may be a bihemispheric event, and second, linguistic and emotional contexts may influence the hemisphericity of the productive control of prosody. viii

PAGE 9

CHAPTER I
INTRODUCTION

If we want to know how the normal brain works, the best subjects may be normal human beings. (Heilman, 1983, p. 2)

The neuropsychological models for the normal production of prosodic features have been based primarily on brain-damaged populations. From such lesion data the traditional model has allocated the control of linguistic prosody to the left hemisphere and the control of emotional prosody to the right hemisphere. Recent models, however, suggest that the right hemisphere may play a more global role in the production of emotional and linguistic prosody (Shapiro and Danly, 1985). Studies examining the neurological basis of prosody in brain-intact individuals have focused upon comprehension. Laterality for the perception of prosody has been tested using dichotic listening paradigms. Recent results (Shipley-Brown & Dingwall, 1986) revealed a left ear superiority effect for both emotional and linguistic prosody. The left ear advantage was less strong for the linguistically based prosodic contours. From these and similar data, Shipley-Brown and Dingwall (1986) have proposed the "Attraction"

PAGE 10

2 model. This model places the right hemisphere in the dominant role for processing both emotional and linguistic prosody. The left hemisphere is placed in a supporting role, as it "attracts" processing of the more segmental features signaling linguistic prosody. This model supports the majority of the literature on comprehension of prosody in brain-intact populations (and in large part the comprehension of prosody in studies of brain-damaged persons as well). The question remains, can the laterality effects of comprehension be equally applied to the laterality of production? Some studies, involving brain— damaged populations, have reported laterality effects in production to be similar to those found in comprehension. Data from brain-intact subjects are needed to clarify such a model for normal functioning. This investigation was designed to meet this need. A dual task paradigm was used as the method by which to collect data from normal subjects. Based on the principle of interference or "something's gotta give" (Bryden, 1982, p. 112), this paradigm required the concurrent performance of two tasks. A decrement in performance on at least one of the tasks was expected if the tasks required the same brain space. Bryden (1982) has described the effects of this paradigm as being analogous to the creation of "a 'temporary lesion' in a perfectly normal subject" (p. 112). Such interference effects therefore, permited testing of a

PAGE 11

3 neuropsychological model of prosodic feature production in brain-intact subjects. A current model for the production of prosody based on brain-damaged subjects, suggests that the right hemisphere dominates the production of emotional prosodic features. The right and the left hemisphere are viewed as being more equally involved in their contribution to the production of linguistic prosodic features. The purpose of this investigation was to test this neuropsychological model in brain-intact subjects using a dual task paradigm. Two tasks, one verbal and one motor, were performed concurrently and individually for comparison of interference effects. Speech shadowing of a short paragraph, was the verbal task. A target sentence with a specific prosodic contour was embedded in an otherwise neutral paragraph. The prosodic features of the target sentence shadowed by the subject were acoustically analyzed. Each target sentence represented one of four main prosodic contours. Two of the contours represented emotional prosodic features (happy and sad), and two of the contours represented linguistic prosodic features (statements and questions). Each hand's performance of single index finger tapping was the motor task. The temporal characteristics of the finger tapping were assessed. These included tapping rate and variability. The tapping characteristics in the dual condition were

PAGE 12

4 measured for the exact same time period as the shadowed target sentence. Interference effects were expected when the "alone" trials were compared with the "dual" trials. The differences expected were decrements in hand tapping performance and/or alteration in prosodic shadowing. For example, according to the suggested model, the greatest difference in performance would have been observed when shadowing of emotional prosodic features (an hypothesized right hemisphere task) was paired with left hand finger tapping (a right hemisphere task). A bidirectional analysis of the interference effects was necessary since prediction of a unidirectional effect would limit observation of the data to either deficits or facilitations in performance. This direction of the effect would have been difficult to predict a priori.

PAGE 13

CHAPTER II REVIEW OF THE LITERATURE Prosody The term "prosody" was first used by Monrad-Krohn in 1947. He defined prosodic qualities as including "correct inflection of words, correct placing of stress upon syllables and words in sentences; natural rhythm, pauses and rate of speaking, natural shifting of pitch from syllable to syllable" (1947, p. 406). This definition can be condensed into three key perceptual features, pitch, rate and syllabic stress. These perceptual features correspond to the acoustic measures of fundamental frequency, duration and intensity. The measurement of such acoustic parameters allows objective and quantifiable assessment of the prosodic contours of speech. This instrumental or experimental approach to the study of prosody has been described by Cutler and Ladd (1983) as the "concrete" approach. They contrast this with the more descriptive and theoretical studies that have taken the "abstract" approach. The literature from the concrete approach is germane to the purpose of this proposal. Prosody may convey linguistic information and/or emotional information. It has traditionally been divided 5

PAGE 14

6 into these two information types. In this respect, linguistic or propositional prosody signals syntactic distinction as in designating questions from statements. Emotional or affective prosody signals mood as in happy or sad. Particular patterns of acoustic features have been identified which serve to differentiate the subtypes of prosodic contours within linguistic and emotional prosody. Linguistic patterns of prosody have been described in terms of fundamental frequency declination (topline slope), sentence-final segmental lengthening and sentence-terminal intonation contour (Danly and Shapiro, 1982; Danly, Cooper and Shapiro, 1983). The terminal intonation contour has been identified as "the most important feature in distinguishing simple declarative statements from yes/no questions in a number of languages" (Vaissiere, 1983). In English, a rising terminal contour signals a question and a falling terminal contour signals a statement. Emotional patterns of prosody have been described in some studies as those acoustical properties of intensity and duration that are typically perceived as "stress" variation (Tucker, Watson and Heilman, 1977; Heilman, Bowers, Speedie and Coslett, 1984). Williams and Stevens (1972), however, have found that the contour of fundamental frequency versus duration most clearly indicates the emotionality of an utterance. For example, happy sentences are characterized as being more highly variable, higher in pitch and of greater intonational range than sad.

PAGE 15

Sad sentences are characterized by more frequent pauses, longer durations of long vowels and consonants and a more restricted intonational range (Williams and Stevens, 1972). For the most part, fundamental frequency and duration measures have been preferred in the acoustical descriptions of both linguistic and emotional prosody (Rice, Abroms and Saxman, 1969; Williams and Stevens, 1972; Olive, 1975; Cooper and Sorensen, 1981; Eady and Cooper, 1986).

When considering those structures within the brain crucial to the processing of prosodic information, Monrad-Krohn (1947) concluded that, "co-operation of the whole brain is probably needed" (p. 415). Prosody has been connected with right-hemisphere processing by its association with the pragmatic aspects of language (Cutler and Ladd, 1983). Ross (1981) has asserted that prosody is an element of language in and of itself and as such is a dominant linguistic feature of the right hemisphere. Not all laterality studies have shown such exclusive support for the lateralization of prosody to the right hemisphere. There are studies, for example, that have identified disruption of prosodic abilities in aphasic patients as a result of left-hemisphere damage (i.e., Danly and Shapiro, 1982; Ryalls, 1982) and in dysarthric patients (i.e., Kent and Rosenbek, 1982). These studies of brain-damaged patients can be separated into assessment of comprehension and assessment of production of prosody.
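The acoustic contrasts described above are concrete enough to be computed directly from a fundamental frequency trace. The short sketch below is offered only as an illustration of that point; it is not part of the original analysis, and the 10-ms frame duration, the ten-frame terminal window, and the function names are assumptions made for the example.

    import numpy as np

    def terminal_contour(f0_hz, frame_dur_s=0.01, tail_frames=10):
        # Slope of the sentence-terminal F0 contour: in English a rising
        # terminal contour signals a question, a falling one a statement.
        voiced = np.array([f for f in f0_hz if f and f > 0], dtype=float)
        tail = voiced[-tail_frames:]
        t = np.arange(len(tail)) * frame_dur_s
        slope_hz_per_s = np.polyfit(t, tail, 1)[0]
        return "question-like (rising)" if slope_hz_per_s > 0 else "statement-like (falling)"

    def intonational_range(f0_hz):
        # Happy productions tend toward a wider, more variable range than sad ones.
        voiced = np.array([f for f in f0_hz if f and f > 0], dtype=float)
        return voiced.max() - voiced.min()

    # Example: a contour that falls over its final frames reads as a statement.
    print(terminal_contour([210, 215, 220, 0, 205, 200, 190, 180, 170, 160, 150, 140]))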

PAGE 16

Comprehension of Prosody in Brain-Damaged Persons

In general the literature on the comprehension of prosody among brain-damaged patients supports the dominance of the right hemisphere (Tucker et al., 1977; Weintraub et al., 1981; Lonie and Lesser, 1983; Denes, Caldonetto, Semenza, Vaggnes and Zettlin, 1984; Tompkins and Mateer, 1985; Tompkins and Flowers, 1985). Most of the studies of prosodic comprehension assessed emotional prosody only or did not specify a division of prosody into its emotional and linguistic components. The Heilman et al. (1984) study did separate comprehension of affective/emotional from linguistic prosody and found the right hemisphere dominant for affective and the left and right hemispheres to have equal difficulty with the linguistic prosody. Hartje, Willmes, and Weniger (1985) also separated emotional and linguistic prosody and tested their brain-damaged subjects using CV syllables in a dichotic task. The results of their first experiment suggested that the left hemisphere could process intonation if required to do so without relying exclusively upon right hemisphere processing. The results of their second experiment revealed that the right hemisphere will consult the left hemisphere whenever it is uncertain about the phonetic information contained in a stimulus; however, "there seems to be no significant tendency of the left hemisphere to delegate the processing of intonation to the right hemisphere when phonetic [information] and

PAGE 17

intonational information are combined..." (p. 98). This would suggest that while the right hemisphere may be implicated in many contexts, the left hemisphere has some capability in processing intonation and may "attract" such processing when phonetic information is involved. The comprehension studies of brain-damaged subjects have generally supported the dominance of the right hemisphere for emotional prosody, and they have acknowledged the participation of the left hemisphere in the processing of more linguistic aspects. Hartje et al. (1985) summarized this viewpoint well as they concluded that while the right hemisphere may lead the left hemisphere in intonation processing, "it cannot be excluded that in normal conditions of auditory language perception the entire processing is done by the left hemisphere" (p. 97).

Production of Prosody in Brain-Damaged Persons

The traditional model of prosody production was proposed by Hughlings Jackson (1932, cited in Heilman, Bowers, Speedie, and Coslett, 1984). Jackson suggested that the left hemisphere was dominant in mediating linguistic prosody and the right hemisphere was dominant in mediating emotional prosody. The following studies have supported the traditional model.

In 1977, Tucker, Watson and Heilman studied the production of affective prosody in eleven patients with right temporoparietal lesions. In the production task an

PAGE 18

10 emotionally indifferent sentence was provided and the subjects were asked to repeat the sentence with one of the selected affective tones. Patients with such righthemisphere lesions were found to be deficient in their ability to produce affective speech. Tucker et al. noted that these patients "would often use propositional speech to express an emotion" (1977, p. 950). Weintraub, Mesulam and Kramer (1981) looked at the effect of right-hemisphere damage on prosody in spontaneous production tasks. No clear differentiation between emotional and linguistic features was made. They found their nine right-hemisphere-damaged patients to have had significantly more difficulty than their control group; however, no lefthemipshere-damaged patients were included for comparison in the study. Ross (1981) observed ten patients with focal righthemisphere lesions and described their disruption in the production of prosody as "aprosodia." He suggested that these deficits mirrored the aphasias of the left hemipshere. However, no objective measures were used in the analyses. Studies of the disruption of prosody in lefthemipshere-damaged patients with aphasia have supported the traditional model. Ryalls (1982) compared the production of prosody in Broca's aphasia patients with normal controls and found a significantly restricted intonational range among the Broca's aphasia patients. Danly and Shapiro (1982) also

PAGE 19

11 assessed the production of prosody in five Broca's aphasia patients and found abnormalities in the acoustical characteristics of linguistic prosody. Findings similar to those for the Broca's patients were revealed by Danly, Cooper and Shapiro (1983) in their study of five patients with Wernicke's aphasia. Cooper, Soares, Nicol, Michelow and Goloski (1984) assessed the acoustical features of fundamental frequency and speech timing in four patients with unilateral lefthemisphere damage, four patients with unilateral righthemisphere damage and four brain-intact controls. The subjects read aloud a series of "non-emotive " sentences. While abnormalities were observed in both leftand rightbrain-damaged subjects, the left-hemisphere patients exhibited a greater impairment in both the speech timing and the fundamental frequency. Crary and Haak (1986) have assessed the acoustical features of linguistic prosody production in eight patients with leftversus right-hemisphere brain damage. Several of their patients were aphasic. Their data suggested that a basic deficit in prosodic production resulted from either rightor left-hemisphere damage. The linguistic aspects of prosodic production however, were impaired only for those subjects with aphasia. Some investigators have offered the argument that lefthemisphere damage results in a distortion of prosodic

PAGE 20

features or a "dysprosodia" while right-hemisphere damage results in a loss of prosodic features or "aprosodia". The results of the Kent and Rosenbek (1982) study support this argument. They examined the prosodic contours of dysarthric patients with lesions of the right hemisphere, left hemisphere, cerebellum and basal ganglia. They described aprosodias in the patients with right-hemisphere and basal ganglia lesions and dysprosodias in the patients with left-hemisphere and cerebellar lesions.

In contrast, the results of Shapiro and Danly (1985) challenge the traditional model of prosody production. They compared the production of emotional (happy and sad) and linguistic prosody (questions and statements) in patients with anterior right-hemisphere damage, posterior right-hemisphere damage and posterior left-hemisphere damage with normal controls. The results of their study suggested that right-hemisphere damage alone "may result in a primary disturbance of speech prosody that may be independent of the disturbances in affect often noted in right-brain-damaged populations" (1985, p. 19). The prosody of the patients with left-posterior brain damage was found to be similar to that of the normal controls. The study did not include left-frontal lesions, and therefore, while the results implicate a more global role for the right hemisphere in both emotional and linguistic prosody, the participation of the left hemisphere in linguistic prosody cannot be dismissed.

PAGE 21

13 In summary, production of emotional prosody in braindamaged patients suggests a neuropsychological model of right-hemisphere dominance. This finding is consistent with the traditional model and is similar to the literature on comprehension. More equivalent participation of the left and the right hemispheres in the production of linguistic prosody has been suggested recently and differs from the traditional viewpoint of left dominance. Com prehension of Prosody in Brain-Intact Persons Studies of comprehension of prosody in brain-intact subjects have most often employed the use of a dichotic listening paradigm. Right -hemisphere superiority for comprehension was found in studies of normal subjects for prosody in general (Blumstein and Cooper, 1974); for emotional prosody (Bryden and Ley, 1983 [adults and children]; Papanicolaou, Levin, Eisenberg and Moore, 1983); and for emotional and linguistic prosody (Shipley-Brown and Dingwall, 1986). In the Shipley-Brown and Dingwall (1986) dichotic study both types of prosody elicited a left ear (right hemisphere) superiority effect. The investigators noted that the advantage was greater for the emotional prosody than for the linguistic prosody. Shipley-Brown and Dingwall advanced a model which seemed to account for these results. The Attraction Model' hypothesizes that the more non-segmental the feature (e.g., intonation) the more it is lateralized to

PAGE 22

the right hemisphere (nondominant). Conversely, the more segmental the prosodic feature (e.g., tone) the more lateralized the processing is to the left hemisphere (dominant). They suggest that pitch, when produced in isolation or used to signal emotion, produces strong right-hemisphere laterality effects. However, pitch used to signal linguistic prosody is drawn to the left hemisphere where most sequential processing takes place. In this respect, the right hemisphere is viewed as processing both emotional and linguistic prosody but the left hemisphere processes linguistic prosody only.

Production of Prosody in Brain-Intact Persons

Shipley-Brown and Dingwall (1986) add that this Attraction Model accounts well for the results of their experiments and for the majority of the experiments that deal with comprehension of prosody. However, they conclude that the production of prosodic features is more problematic. They suggest that the comprehension and production of affective prosody appear to be mediated by the right hemisphere. It is plausible, they note, that the right hemisphere also directs the production of linguistic prosody. Shipley-Brown and Dingwall cite as support that left-hemisphere damage does not eliminate prosody, it only changes its characteristics; whereas, right-hemisphere damage has been noted to produce aprosodia.

PAGE 23

15 Neuropsychologically based models of prosody production from brain-intact populations are virtually nonexistent. In order to achieve a valid model of normal function such data should be collected. The current study has been designed to collect such data using a dual task paradigm, and the remainder of the review of the literature will define and delineate the use of this procedure. Dual Task Paradig m "Dual," "concurrent" and "interference" are terms used to define a task design which was popularized in the "1960's and 1970 's" psychology literature. Its underlying premise is defined as follows: When two or more discriminations are processed together, either they proceed in parallel with the same efficiency as when either is processed alone, or they may interfere with one another, and share the total available processing capability. (Taylor, Lindsay SÂ’ Forbes, 1967, p. 227) Kinsbourne and Cook (1971) are often credited with being the first to use this method to assess behavioral asymmetries in brain function (Green 8 Vaid, 1986). Since its first use, the laterality-based dual task paradigm has incorporated a variety of verbal tasks. These have included spontaneous speech, sentence, phrase and word repetition, nursery rhymes, tongue twisters, verbal routines (i.e., counting), reading and object naming. Nonverbal tasks have included humming, viewing pictures of faces and construction of block designs. Manual tasks have typically included dowel

PAGE 24

16 balancing in the early studies, followed by finger-tapping sequences and single-finger tapping. The data generated have been complex and variable. Analyses of performance deficits have focused on leftversus right-hand performance. In the case of tapping, errors have been evaluated in terms of the errors in sequence or in rate of tapping or number of taps per trial. Analysis of the speech task was completed rarely in the early studies. When the speech was assessed, it was in terms of the number of words produced or the number of omitted or incorrectly produced (i.e., misarticulated) words. Recent studies more frequently address the analysis of speech but with little more precision in analysis; the task of interest has continued to be the effect on tapping. Green S’ Vaid (1986) advance support for the dual paradigm over dichotic and/or tachistoscopic methods for the following reasons: 1) it considers the lateralization of function mainly at the output level (i.e., laterality of speech production may be assessed in addition to, or instead of speech perception); 2) it is not subject to such brief stimuli and thus may allow more naturalistic linguistic tasks to be examined; and 3) the dual task design supports the view of the brain as an "integrated neural network where activation of one region may influence normal functioning of another region” (p.466).

PAGE 25

17 Bryden (1982) suggests that the procedure is complex and requires careful control observations, judicious selection of tasks, and some theoretical notion as to the nature of the interference (i.e., structural or capacitive) . Critical variables must be controlled in a dual task paradigm. A minimum of four dependent measures is required, the speed and/or accuracy of both tasks alone and both tasks together. These measures are crucial for the determination of how adequately the subject is maintaining performance in the dual condition. Otherwise the subject could be dividing attention/processing resources between tasks and this would lead to an underestimate of the amount of interference (Bryden, 1982). In terms of task selection, the notion of "structural interference" implies that two different tasks will interfere with each other only to the degree that they require similar brain structure. Therefore, two different tasks requiring nonoverlapping brain structures should cause no interference. Attempts to explain the effects of the dual task procedure have been described in terms of models. The Kinsbourne Model" also called the "Functional Cerebral Distance Model" (Hellige, 1985) or the " Intrahemispheric Competition Model" (Bowers, Heilman, Satz and Altman, 1978) is a thought to be a neuropsychologically based model. It

PAGE 26

has been contrasted with the cognitive psychology models of Resource Allocation. Hellige (1985) reviewed two of the more current forms of these models, a Multiple Resource Model of Friedman and Polson (1981) and the Functional Cerebral Distance Model of Kinsbourne and colleagues.

In discussing the resource model, Hellige describes the position taken by Navon and Gopher (1979). They were said to have suggested that the originally proposed single resource pool was inadequate in its explanation of recent data. In its stead the multiple resource models have been proposed. Friedman and Polson's model has two separate pools which are hemisphere specific. They propose that two tasks will interfere with each other to the extent that they require resources from the same limited-capacity pool. In addition, each pool is completely independent and within a pool for a single hemisphere the resources are completely undifferentiated. Such resource allocation models would appear to oppose the current concept of many neuropsychologically described systems (including prosody) that propose more interactive systems.

The Functional Cerebral Distance Model states that disruption occurs to the extent that two activities place overlapping demands on spatially adjunct neural systems. The amount of functional overlap is presumably inversely related to the actual distance between areas (either within or between hemispheres) responsible for mediating these activities. (Bowers et al., 1978, p. 540)

PAGE 27

19 According to Hellige (1985), such, an interaction could be facilitative (produce a priming effect) if the two tasks are compatible or to the extent that the two tasks involve cortical regions which are sparsely interconnected. It could, on the other hand, be inhibiting (produce interference effects) if the two tasks are incompatible. In other words interference occurs to the extent that there is "maladaptive cross-talk". Hellige likens this to what Navon and Gopher have termed 'concurrence costs'. These may be described as a summation effect whereby the dual condition requires more resources than the sum of the separate task resources. It is therefore the concurrent costs, rather than multiple resources which are responsible for the lateralized interference effects in this model. Bowers et al. (1978) note three aspects of ambiguity in the Kinsbourne model: 1) the underlying mechanisms which account for disruption are unclear; 2) the concept of neural overflow rather loosely accounts for the facilitation as well as the disruption of performance (they suggest that the experimental procedures of two target tasks versus one target task may account for this), and 3) prediction of onesided interference effects or mutual interference effects is not permitted by the model. They recommend a bidirectional analysis to rectify this final point. Evidence in the literature suggests that interference effects may be most noticeable when a simple repetitive task

PAGE 28

requiring minimal attention is paired with a difficult task. This conclusion is pragmatically based on the reasoning that asymmetry/interference effects may be masked if both tasks are difficult and that a "ceiling effect" may occur if both tasks are too simple (Green and Vaid, 1986).

Finger Tapping

Single finger tapping, an elegantly lateral and simple task, requires minimal attention. General support for this comes from the neuropsychological assessment literature. The Finger Tapping Test portion of the Halstead-Reitan Battery has been widely used and has been reported to demonstrate laterality, as the tapping rate of the hand contralateral to the lesioned hemisphere typically shows a slowed rate (Lezak, 1983). The body's inability to control the fine distal movements of the ipsilateral hand has been recounted by Bryden (1982). This has been documented in both human and animal studies. One such investigation was conducted by Brinkman and Kuypers (1972) with split-brain monkeys. Their findings supported the contention that "distal motor control apparently is not available to the ipsilateral hand and fingers" (p. 538). The implication here is that single finger tapping can be said with confidence to be a task which involves the contralateral hemisphere exclusively.

In comparing finger tapping with a verbal task, some a priori account for anticipated "cerebral distance" is necessary for the proposed model. Mary Wyke (1968) examined

PAGE 29

21 the effects of arm-hand precision in the presence of brain lesions. She observed "impairment in the ability to make movements of the arm and hand requiring accuracy in timing and precision" were more often associated with frontal lobe lesions than with the temporal or parietal lobes (p. 125). A "frontal lobe" manual task would therefore be desirable as it has been well documented that motor-speech programing and control are also a frontal lobe functions. Kimura and Vanderwolf (1970) compared hand performance on a "very simple, but demanding motor task" (p. 769). They selected individual finger movement. Their results suggested a left-hand superiority for most right handers. This was later contradicted by a study conducted by Peters (1981) who demonstrated that preferred hand superiority would persist even after prolonged periods of practice in tapping. Such data would dictate that baseline measurements of finger tapping rate be taken for each hand individually. Finally, dual task paradigms employing finger tapping have shown stronger interference/laterality effects than those using sequential finger tapping. See, Morris, Bathurst & Hellige (1986) studied an alternating two-key tapping sequence and compared the results to previous single finger tapping studies. They found limited verbal laterality effects in the two-key tapping and concluded that the method of choice would be single finger tapping.

PAGE 30

Speech Shadowing

The verbal task to be paired with the simple repetitive motor task should be complex and continuous (Green & Vaid, 1986). Individual variability would be lessened if the task required an exact repetition of a model stimulus rather than spontaneous generation of items. Shadowing of speech would appear to meet these criteria. Shadowing requires the subject to repeat as rapidly as possible everything they hear, as they listen to it (Shattuck and Lackner, 1975). While similar to reading in terms of providing continuity, shadowing appears to be more difficult as it does not permit the scanning preview of information to follow and relies on the subject's constant attention when required to imitate on-line.

Some of the original work using the shadowing technique was conducted by Cherry and Taylor (1954) to demonstrate the limitations of secondary recognition. Their hypothesis was that a performance decrement should be found if secondary recognition processes are required to monitor two channels instead of one. Marslen-Wilson (1975) used shadowing to support the model of sentence processing in which the listener analyzes incoming stimuli at all available levels of analysis (therefore information at each of the levels would have the potential to constrain or to guide simultaneous processing at other levels).

PAGE 31

23 Shattuck and Lackner (1975) used shadowing to delineate the contribution of syntactic structure to speech production. They found evidence to support the notion that speaking a sentence involves planning farther ahead than the next word. Allport, Antonis and Reynolds (1972) used shadowing in a dual task paradigm to disprove the single-channel hypothesis. They found that their third-year music students could sight read music and shadow speech without compromise of either task. However, they did find that proficiency of piano playing influenced the subject's ability to answer questions regarding the content of the speech shadowed. They used this to advocate a multichannel hypothesis. Lackner and Shattuck-Huf nagel (1982) selected the shadowing procedure to assess the long-term subtle linguistic deficits of Korean War veterans who had sustained penetrating head injuries. They chose a task that would be differentially sensitive to a range of syntactic and semantic factors and that would tax to the utmost speech comprehension and production skills" (p.709). They found shadowing to be "extraordinarily sensitive in detecting residual language dysfunction" (p.712). From the above review it can be seen that shadowing has the potential to be an effective task which can be used successfully in the dual paradigm. The typical focus of the dual paradigm literature has been on the interference of the

PAGE 32

24 finger tapping rates and total number of taps. Studies that have contained a bidirectional analysis have selected word distortion and omission as the parameters by which to assess the interference effects on the speech (and have found limited effects). Perhaps assessment of the more subtle aspects of speech (which would be possible using a shadowing task) would reveal interference effects. A need for more precise analysis of the performance patterns of both speech and tapping has been indicated by Green 5? Vaid (1986). In agreement with the proposed study, they suggest that one avenue of assessment would be the comparison of temporal characteristics of the tapping with those of speech. A foundation to support such analyses can be found in a study conducted by Peters (1977). This dual task study was designed to replicate the findings of Lomas and Kimura (1976). They found that speaking during single finger tapping produced equal tapping performance by both hands. Peters used single finger tapping (as fast as possible) as task one and recitation of a nursery rhyme (Humpty Dumpty) as task two. His results showed a small degree of bilateral disruption. Not all subjects showed the disruption effects (2 of the 48 subjects revealed priming effects). However, further investigation (beyond that of tapping rate effects) gave Peters the impression that the finger tapping was integrated with the speaking as the taps and the stressed

PAGE 33

25 words seemed to Peters to coincide. He hypothesized that the concurrent tasks produced relatively diminished interference because of the flexibility of the rhythmic patterns required. Two subsequent experiments were designed to test this hypothesis. In experiment 2 the subjects were asked to tap continuously with one hand and to tap a specific rhythm with the other hand. Only 15 of his 150 subjects could successfully execute this task. Of these fifteen subjects, the right-hand-tap, left-hand-rhythm condition was performed with more ease. All those who passed had musical backgrounds. It was again hypothesized by Peters that the single finger continuous tapping task permitted some flexibility in rhythm. In experiment 3 Peters further reduced the flexibility as he required the subjects to recite Humpty-Dumpty while beating the same specific rhythm with either the right or the left hand. The subjects were instructed to recite the nursery rhyme at normal speed and with the proper rhythmic intonation. They were asked to not use one rhythm to pace the other and none, of the 100 subjects could perform the task flawlessly. Peters went on to posit a working hypothesis that "the CNS [central nervous system] in the voluntary guidance of movement can produce only one basic rhythm at a time" (p. 463). He offered five conditions under which successful concurrent activity of two motor systems

PAGE 34

26 would be possible. In abreviated version these included 1) flexible rhythms as in experiments 1 6? 2, 2) one motor system producing a rhythm and the other being based on preformed species specific patterns such as walking, 3) one rhythm following another, 4) one producing a rhythm and the other coming in on the pauses, and 5) one motor system dominating so that the rhythms produce predictable and interlocking patterns of stresses and pauses. The hypotheses of Peters could be reinterpreted to suggest that the rigidly controlled properties of the nursery rhyme paired with a continuous tapping condition would impose their temporal characteristics upon the more flexible, continuous single finger tapping. If the prosodic production task selected was more complex than a nursery rhyme, temporal differences in finger tapping might result between the hands. This supposed interference in hand performance might then imply lateralizing effects of prosody within the brain. A bidirectional effect might also be achieved if Peters's conditions were altered one step further. Having the person tap as fast and as consistently as possible might add more temporal rigidity to the finger tapping task. Following a thorough review of methodological issues in the use of the concurrent activies paradigm, Green and Vaid (1986) make the following recommendations:

PAGE 35

Finally, further research using the concurrent activities paradigm should be directed at exploring the temporal relationship between the manual task. . . A microanalysis of the temporal relations between speech and tapping during concurrent task performance would undoubtedly provide more precise information about the allocation of attention and possible attentional tradeoffs than do the procedures currently adopted. ... In order for this procedure to be informative, it would be advisable that the linguistic tasks selected involve continuous, rather than discrete, output. (p. 473)

Statement of the Problem

The preceding literature review has demonstrated that neuropsychological models of the production of prosody have emerged from the study of brain-damaged patients. Neuropsychological models of prosody involving brain-intact populations have excluded aspects of production, addressing the hemispheric control of comprehension instead. The purpose of this study is to investigate an existing neuropsychological model of production of prosodic features (both emotional and linguistic) using brain-intact persons. The literature on the dual task paradigm has been reviewed and this paradigm has been found to meet the purpose of this investigation. The basic components will involve the use of a laterality-specific motor task, single finger tapping, to be performed concurrently with a complex speech shadowing task.

The neuropsychological model of prosody production which was tested suggested that the right hemisphere

PAGE 36

dominates the control of emotional prosody, whereas the left hemisphere and the right hemisphere share control of linguistic prosody. Imposed upon this model was the dual paradigm model of Functional Cerebral Distance. The well lateralized characteristics of single finger tapping were paired with prosodically shadowed speech and the expected interference effects formed the research hypotheses.

Research Hypotheses

The following null hypotheses were suggested:

Ho: 1. Right-hand tapping in the dual task conditions involving emotional prosody would not differ, in rate and/or variability, from the right-hand-tapping-alone condition.

Ho: 2. Right-hand tapping in the dual task conditions involving linguistic prosody would not differ, in rate and/or variability, from the right-hand-tapping-alone condition.

Ho: 3. Left-hand tapping in the dual task conditions involving emotional prosody would not differ, in rate and/or variability, from the left-hand-tapping-alone condition.

Ho: 4. Left-hand tapping in the dual task conditions involving linguistic prosody would not differ, in rate and/or variability, from the left-hand-tapping-alone condition.

Ho: 5. Neither the right-hand-dual nor the left-hand-dual performances would differ from their baseline performance; therefore, the relative performances of the right and the left hands would be equivalent.

Ho: 6. The linguistic prosodic contours for the utterance shadowed in the dual condition of right-hand tapping would not differ, in the acoustical parameters measured, from the linguistic prosodic contours shadowed alone.

PAGE 37

Ho: 7. The linguistic prosodic contours for the utterance shadowed in the dual condition of left-hand tapping would not differ, in the acoustical parameters measured, from the linguistic prosodic contours shadowed alone.

Ho: 8. Neither the linguistic prosody with the left-hand tapping nor the linguistic prosody with the right-hand tapping would differ from the linguistic prosody without tapping; therefore, the relative performances would be equivalent.

Ho: 9. The emotional prosodic contours shadowed in the dual condition of right-hand tapping would not differ, in the acoustical parameters measured, from the emotional prosodic contours shadowed alone.

Ho: 10. The emotional prosodic contours shadowed in the dual condition of left-hand tapping would not differ, in the acoustical parameters measured, from the emotional prosodic contours shadowed alone.

Ho: 11. Neither the emotional prosody with the left-hand tapping nor the emotional prosody with the right-hand tapping would differ from the emotional prosody without tapping; therefore, the relative performances would be equivalent.

As implied by the above hypotheses, a bidirectional analysis was conducted. For example, a difference in finger tapping performance from baseline was viewed as an interference effect whether it was detrimental or facilitating. There were, however, some expectations which could be predicted in terms of the model of prosody production under investigation. These were as follows:

1) Interference effects would be found for the right-hand-dual task performance with linguistic prosody that would be equivalent to the interference effects of the left-hand-dual task performance with linguistic prosody.

PAGE 38

30 2) Interference effects would be found for the left-hand-dual task performance with emotional prosody that would be greater than the interference effects shown with right-hand-dual task performance with emotional prosody. 3) Interference effects in linguistic prosody production would be found in the total utterance and would be equivalent for rightand left-handdual task conditions. 4) Interference effects in emotional prosody production would be found to a greater or more frequent degree when the dual task involved lefthand tapping than when it involved right-hand tapping.

PAGE 39

CHAPTER III
METHODOLOGY

Subjects

Twenty right-handed males between 18 and 23 years of age served as subjects. All had English as their native language and did not consider themselves to be bilingual (Green, 1986). They were University of Florida undergraduate or graduate students. Before acceptance as a subject, each person was required to complete a confidential questionnaire (See Appendix A). The questionnaire was designed to disqualify persons who had a history of neurological problems or an identified hearing loss, or who were Morse code users, ham radio operators or musicians. The Edinburgh Handedness Inventory (Oldfield, 1971) was included as part of the questionnaire to verify right handedness. A score of 90 or better was required.

Apparatus

Finger Tapping

A solid-state device was constructed to register the subject's taps as a tone of 1000 Hz. This tone was produced each time and for as long as the subject's finger made contact with the device. The tone was recorded directly on the left channel of a reel-to-reel audio tape recorder. It

PAGE 40

was monitored on the recorder's VU meter prior to each experimental session and adjusted to an acceptable recording level on the recorder.

The tapping apparatus used a three- by four-inch circuit board. A designated tapping zone was etched in the center of this board. This rectangular area measured 1 and 1/16 inches long by 1 and 3/4 inches wide. The subject's thumb and middle finger remained in constant contact with the outer area of the copper circuit board (forming one electrode). The central tapping zone on the plate formed the second electrode. The central zone remained charged up to a 9-volt supply (with a high current limit to prevent any shock) until contact was made by the subject's index finger. The contact produced a voltage drop which was detected by the device and equated to on/off switching. This served to gate the tone production onto the tape recorder as taps. The performance of the device was checked prior to each subject's use.

This device complied with the subtle tapping requirements outlined by Green and Vaid (1986) as they specified that "electronic or solid state counters used to register taps are preferable to mechanical and electromechanical counters since the latter devices have a greater time delay in registering taps" (p. 472). The microswitch has been used in some studies but Green and Vaid suggested that this device may be too sensitive as quivering

PAGE 41

of the finger may register as a tap. A computer program was designed to effectively measure the tapping rate, as well as the temporal characteristics of the tapping (i.e., variability and peak-to-peak distance). This program was analogous to the equipment design suggested by Green & Vaid (1986) of a distance transducer attached to an oscillograph. They noted that such computer usage would "provide additional analytical power" (p. 473).

Subjects were seated at a table in a quiet room. The tapping device was positioned at a comfortable distance for the subject's hands. The subjects were permitted to swivel the copper plate toward each hand as the tapping tasks were introduced so that the counter was at a comfortable angle. Subjects were instructed to use only their index finger to tap and to tap in an up-and-down, not side-to-side manner. (See Appendix B.) They were required to keep their thumb and middle finger in constant contact with the copper sheet and outside of the central rectangle during the tapping trials. The ring and little fingers were positioned on the copper or on the table. The wrist and forearm were in constant contact with the table for the duration of each trial. The same tapping device was used by all subjects.

Speech Stimuli

The same target sentence "He will be here tomorrow" was embedded in a short paragraph of neutral context. Only the target sentence conveyed the differing prosodic

PAGE 42

34 contour. In this respect the only variable was the prosodic contour as the segmental aspects were identical across all paragraphs . The following prosodic conditions were modeled: He will be here tomorrow. (Declarative) He will be here tomorrow? (Interrogative) He will be here tomorrow. (Happy) He will be here tomorrow. (Sad) These were embedded in the following neutral paragraph: Last week Rick had written that he would be back from Denmark on Saturday. Alice wondered what she should do. When the phone rang, Alice recognized the voice of Rick's friend. He will be here tomorrow she said and they planned what they would do. These stimuli were patterned after the Shapiro and Danly (1985) study. They assessed these same target sentences in a reading task. However, their paragraphs permitted the anticipation of the type of target through appropriate content and prosody, whereas the stimuli in this study were all of neutral context. In the making of the stimulus tape, five paragraphs were read by the same adult male. These original recordings were made using a TEAC X-7 RmkII reel-to-reel audio tape recorder. All the audio tapes used were professional quality Maxell UD 50-60, Hi-output, extended range, low noise, polyester base tapes. Each of the five original

PAGE 43

35 paragraphs was coded semantically and was read with the intention of conveying a specific prosodic contour, one declarative, one interrogative, one happy, one sad and one neutral. These paragraphs were modified and shortened versions of the Danly and Shapiro (1985) stimuli. The tape was taken to a professional broadcast studio where the splicing and dubbing were completed to create the master stimulus tape. The neutral paragraph (as cited above) was dubbed four times. The target sentence "He will be here tomorrow" was carefully spliced out of each of the four dubbings of the neutral paragraph so that the same length of tape was removed each time. The target sentence from the declarative paragraph was spliced out of that paragraph and inserted into one of the neutral paragraphs. The same procedure was conducted for interrogative, happy and sad targets. This resulted in the creation of four stimulus paragraphs, each having a different prosodically intoned target sentence embedded in an otherwise identically neutral content. These created paragraphs were then dubbed onto both channels of a new master tape in a designated blockrandomized order to permit five repetitions of each of 12 conditions requiring shadowing. A total of sixty paragraphs was recorded. The tape was played for ten listeners who were asked to match the target sentence with the appropriate prosodic contour type. The average percent of correct contour identification was 94.3. When confusions were made,

PAGE 44

36 the primary error was between sad and statement contours. This was understandable as the major difference between these two is duration of the utterance. Each subject heard the paragraphs to be shadowed through both earphones of a light-weight NOVA 33-976 headset by Realistic. The presentation sound level was adjusted to a comfortable listening level for each subject by adjusting the output dial on the TEAC X-7rmKII reel-to-reel tape recorder playing the master stimulus tape. The shadowed response was recorded on the right channel of the second TEAC X-7rmKII reel-to-reel tape recorder (tapping was recorded on the left channel of this recorder) via a Realistic 33-1052 condensor lapel microphone. A plastic headband with an extension arm held the microphone at a fixed microphone-to-mouth distance of five inches. The recording level was adjusted for each patient prior to the experimental trials. All responses were recorded on one Maxell UD 50-60 professional quality polyester base reel-toreel tape. General Procedure All tasks were performed alone and in the dual conditions. Equal priority was assigned to the performance of both tasks in the dual condition. The subjects were instructed to tap as rapidly and as consistently as possible and to shadow the paragraphs using exactly the same words and in exactly the same manner.

PAGE 45

37 Initial pilot testing was conducted to design the protocol. Further pilot testing was done to test the final protocol, to establish the specific instructions (see Appendix B) and to determine the time commitment for each subject. The entire protocol (including questionnaire) required one session of approximately one hour and fifteen minutes. The total series of tasks included: 1) Six 15-second trials of tapping (alternating three with the left; three with the right hand) were performed as practice. Reinstruction on hand positioning, rate, and consistency of tapping were given as needed. The tapping device was used but the responses were not recorded. 2) At least four practice trials of shadowing were conducted with re-instruction and repetition of paragraphs performed as needed for compliance with the task requirements (same words in same manner). The stimuli shadowed in the practice trials were modifications of Danly and Shapiro's (1985) paragraphs as were the test stimuli. These practice paragraphs however, included the target sentence "You wrote it last night". They were not neutrally worded. Instead, they conveyed the intended prosodic contour (one declarative, one interrogative, one happy and one sad). The four paragraphs were presented via cassette recording and were read by the same male speaker who recorded the stimulus paragraphs. The subject listened to the paragraphs from a TEAC V-350C Stereo Cassette Deck via the same earphones. The microphone was not used, as the practice trials were not recorded.

PAGE 46

38 Observation from the pilot testing and review of the designs of previous tapping studies added the provision that no two series of tasks required the same hand to tap. Therefore the following series of tasks were presented in a Modified Randomized Block Design. The same order of blocks was presented to each subject. A total of seventy trials were performed as each block contained fourteen trials, one for each condition. Repetitions were needed to ensure an adequate sample of performance from each subject and five trials was the largest number of repetitions found in a dual task study involving speech (Dalen and Hugdahl, 1986) 3) Five 15-second trials of right-hand indexfinger tapping alone. 4) Five 15-second trials of left-hand index-finger tapping alone. 5) Twenty 15-second trials of speech shadowing alone. One for each of the prosodic conditions (statement, question, happy, sad) times five repetitions . 6) Twenty 15-second pairings of right-hand tapping with speech shadowing. One for each of the prosodic conditions times five repetitions. 7) Twenty 15-second pairings of left-hand tapping with speech shadowing. One for each of the prosodic conditions times five repetitions. Data Analysis The raw data for each subject was derived from the audio tape recordings of the patient's verbal shadowing of the target sentence on the right channel, with the alternate or simultaneous tapping performance on the left channel.

PAGE 47

39 Data from the finger-tapping-alone conditions were obtained from the first two seconds of tapping in the fifteen-second sample. The data for the shadowing alone conditions were collected from the target sentence only. In the dual conditions, tapping and shadowing were measured in an approximately simultaneous time frame (maximum skew of time frame was estimated to equal 1.5 milliseconds). The analyses of both tapping and prosody were computer generated. The tapping information was sent from the left channel of the tape recorder into an analog-to-digit al converter within the hardware of an IBM XT personal computer; the speech signal was sent through a PM Pitch Analyzer (Program 201) prior to its connection to the analog-to-digital converter. The PM Pitch Analyzer served as a filtering and initial data gathering device for the measurement of intensity and frequency information on the speech targets selected. As the tape was played for each target sentence, the PM Pitch Analyzer triggered the computer for frequency information, and the computer then gathered the data over four channels at 8KHz (or 2KHz per channel). Six hundred points of data were collected for a 2 second time frame by the computer (providing three times the resolution available from the Pitch Analyzer alone). These data were stored in memory as raw data and then converted to a fundamental frequency trace and waveform envelope for speech data, or converted to square waves for tapping data.
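The reduction of the recorded tap tone to square-wave form can be pictured with a short sketch. The Python fragment below is not the original acquisition program described above; the smoothing window, the amplitude threshold, and the function name are assumptions chosen only for illustration. It gates the 1000 Hz tap-tone channel on and off and reports the onset time of each tap.

    import numpy as np

    def tap_onsets(samples, sample_rate_hz, threshold=0.1):
        # Reduce the tap-tone channel to on/off (square-wave) form by
        # thresholding a short-time amplitude envelope, then return the
        # time, in seconds, of each off-to-on transition (one per tap).
        rectified = np.abs(np.asarray(samples, dtype=float))
        win = max(1, int(0.005 * sample_rate_hz))        # ~5 ms smoothing window (assumed)
        envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
        gate = envelope > threshold                      # True while the finger is down
        rising = np.flatnonzero(gate[1:] & ~gate[:-1]) + 1
        return rising / sample_rate_hz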

PAGE 48

40 In the data collection for the tapping alone condition the system was triggered by the input from the left channel of the tape recorder. A two-second time window of square waves was traced across the video screen. The cursors were consistently set at the beginning and end of the two-second time window. At the final setting of the second cursor the predetermined calculations were performed on the data between the cursors and sent to the data file for that condition code. The calculations included rate of tapping, mean peak-to-peak distance between taps, and tapping variance (measured as standard deviation) . In the data collection for the shadowing-alone condition the system was triggered from the frequency data entering the PM Pitch Analyzer and two seconds of speech data were traced across the video screen as fundamental frequency trace and waveform envelope. The first cursor was placed at the initial nonzero portion of the fundamental frequency trace for the target utterance and the second cursor at the last nonzero segment of the fundamental frequency trace for the target utterance. The predetermined measures and calculations were then made by the computer. These measures included mean fundamental frequency, utterance duration, variance of the frequency (standard deviation), minimum frequency, maximum frequency, frequency range and slope of the utterance.
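For a concrete picture of the tapping and speech measures just listed, the following sketch computes them from cursor-bounded data. It is illustrative only: the original computations were performed by the custom program described above, the 10-ms frame duration is an assumption, and the slope is taken here as a simple straight-line fit over the voiced portion of the utterance because the exact slope definition used by the original program is not restated at this point in the text.

    import numpy as np

    def tapping_measures(onset_times_s, window_s=2.0):
        # Rate of tapping, mean peak-to-peak interval, and variability
        # (standard deviation of the intervals) within the measurement window.
        intervals = np.diff(onset_times_s)
        return {"rate_taps_per_s": len(onset_times_s) / window_s,
                "mean_peak_to_peak_s": intervals.mean(),
                "sd_peak_to_peak_s": intervals.std(ddof=1)}

    def shadowing_measures(f0_hz, cursor_start_s, cursor_end_s, frame_dur_s=0.01):
        # The seven speech measures, taken between the two cursor positions.
        first = int(cursor_start_s / frame_dur_s)
        last = int(cursor_end_s / frame_dur_s)
        segment = np.asarray(f0_hz[first:last + 1], dtype=float)
        voiced = segment[segment > 0]
        duration = cursor_end_s - cursor_start_s
        t = np.arange(len(voiced)) * frame_dur_s
        slope = np.polyfit(t, voiced, 1)[0]              # Hz per second (assumed definition)
        return {"mean_f0": voiced.mean(),
                "duration_s": duration,
                "sd_f0": voiced.std(ddof=1),
                "min_f0": voiced.min(),
                "max_f0": voiced.max(),
                "range_f0": voiced.max() - voiced.min(),
                "slope": slope}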


In the dual conditions both the fundamental frequency trace and the square-wave tapping trace were on the video screen. The cursor manipulation was made for the duration of the target utterance, and the corresponding tapping (between the cursors) was also measured. Fifteen of the expected 1400 datalines were missing as a result of unmeasurable traces. Factors which contributed to the unmeasurable traces were inability to place the cursor at a definite beginning and/or ending of the trace, omission of the target by the subject, and interference in the signal from the recorder.

Statistical Analysis

The statistical procedures were calculated using SAS PROC statements (SAS Institute Inc., 1985); see Appendix C. First, these data were sorted by the three independent variables or factors: subject, hand, and condition. Then the raw data file was condensed into mean data, creating a new data set of 280 observations (20 subjects with 14 conditions each) with no missing cells. The three independent variables were arranged in a 20 X 3 X 5 factorial ANOVA to assess the validity of the null hypotheses in this investigation (Huck, Cormier & Bounds, 1974). The first variable had 20 levels, one for each subject. The second variable, hand, had three levels: right-, left-, and no-hand. The third variable, condition, had five levels: tapping only, questions, statements, happy, and sad. There were two groupings of dependent variables.


The dependent variables for tapping included rate of taps, mean peak-to-peak distance between taps, and the variance of tapping. The dependent variables for shadowing included mean fundamental frequency, utterance duration, variance of the frequency, minimum frequency, maximum frequency, range of frequency, and slope of the utterance. Main effects for hand and condition were obtained as well as interactions. The hypotheses generated by this study revolved around the question of whether there would be an interaction between the levels of hand performance and the prosodic condition levels. The ANOVA permitted the assessment of main effects and of the hand-by-condition interaction, which was of primary interest in the design. Hypothesis testing for hand and for condition effects was conducted using hand-by-subject and condition-by-subject as the respective error terms. The mean data were then collapsed across subjects and sorted by hand and condition. A second ANOVA permitted separate analyses of the observations within each hand level (left-, right-, and no-hand). Post-hoc analyses were performed using Duncan's New Multiple Range Test (Kirk, 1968).
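The original computations were run with the SAS program reproduced in Appendix C. As a rough modern equivalent, a two-way within-subjects model with hand-by-subject and condition-by-subject error terms could be expressed as in the sketch below; the file name and column names (subj, hand, cond, rate) are hypothetical, and the sketch assumes complete, balanced cell means rather than the original raw data file.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical cell-mean file: one row per subject x hand x condition,
# e.g. subj 1-20, hand in {'L', 'R'}, cond in {'tap', 'Q', 'P', 'H', 'S'},
# and one dependent measure (here, tapping rate).
means = pd.read_csv("cell_means.csv")

# Repeated-measures ANOVA: main effects of hand and condition plus their
# interaction, each tested against its factor-by-subject error term,
# mirroring the TEST statements in the SAS program.
result = AnovaRM(means, depvar="rate", subject="subj",
                 within=["hand", "cond"]).fit()
print(result.anova_table)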


CHAPTER IV
RESULTS

The statistical results for each of the eleven dependent variables will be presented in the following text. The measures of interest will include means, standard deviations, the ANOVA results for the hand-by-condition interaction term and for main effects, and the results of the post-hoc testing as indicated.

Finger Tapping Data

Means and standard deviations depicting the rate of tapping in each prosodic condition are presented in Table 1. The means are displayed graphically in Figure 1. The right hand tapped faster than the left hand (F = 13.10, df = 1/19, PR>F = .0018). The main effect for condition was significant (F = 71.22, df = 4/76, PR>F = .0001). Post-hoc testing showed the rate of tapping to be significantly slower for all the speech conditions when compared to the baseline tapping-alone condition. The interaction term of hand-by-prosodic-condition was nonsignificant (F = 1.74, df = 4/76, PR>F = .1493). Both hands were slower during speech, with the right hand tapping faster than the left in all conditions.


Table 1. Mean data across subjects for dependent variable: rate as taps per second. Values are mean (S.D.).

Condition        Left Hand     Right Hand
Grammatical:
  Questions      4.17 (.59)    4.46 (.53)
  Statements     4.11 (.57)    4.43 (.57)
Emotional:
  Happy          4.10 (.67)    4.7 (.41)
  Sad            3.99 (.67)    4.35 (.5)
Tap Alone        5.22 (.54)    5.71 (.63)

Figure 1. Plot of mean rate of taps by condition for each hand (TA = tapping alone, Q = questions, P = statements, H = happy, S = sad; separate symbols for the right and left hands).


Means and standard deviations for the peak-to-peak distance between taps in each prosodic condition are presented in Table 2. The means are displayed graphically in Figure 2. The right hand had a significantly shorter peak-to-peak distance than the left (F = 17.0, df = 1/19, PR>F = .0006). The effect for prosodic condition was significant (F = 41.13, df = 4/76, PR>F = .0001). Post-hoc testing revealed the peak-to-peak distance to be significantly longer for all the speech conditions when compared to the baseline tapping-alone condition. No hand-by-prosodic-condition interaction was shown (F = 1.09, df = 4/76, PR>F = .3682). Both hands were slowed in the speaking conditions, and the left hand was slower than the right.

Means and standard deviations representing the tapping variance in each prosody condition are presented in Table 3. The means are displayed graphically in Figure 3. No significant hand effect was shown (F = 2.14, df = 1/19, PR>F = .1602). The main effect for condition was significant (F = 5.46, df = 4/76, PR>F = .0006). Post-hoc testing indicated that the sad condition had greater variability than the rest of the prosodic conditions and the tapping-alone condition. The interaction term, hand-by-prosodic-condition, was nonsignificant (F = .49, df = 4/76, PR>F = .7441).

Speech Data

Means and standard deviations depicting the mean fundamental frequency in each prosody condition are presented in Table 4.


Table 2. Mean data across subjects for dependent variable: peak-to-peak distance between taps. Values are mean (S.D.).

Condition        Left Hand       Right Hand
Grammatical:
  Questions      46.0 (5.54)     42.36 (4.05)
  Statements     47.35 (6.4)     42.71 (4.09)
Emotional:
  Happy          45.87 (7.4)     42.83 (3.98)
  Sad            49.91 (7.62)    44.42 (3.32)
Tap Alone        37.29 (3.83)    34.04 (3.32)

Figure 2. Plot of mean peak-to-peak distance between taps by condition for each hand (TA = tapping alone, Q = questions, P = statements, H = happy, S = sad; separate symbols for the right and left hands).

Table 3. Mean data across subjects for dependent variable: standard deviation of taps. Values are mean (S.D.).

Condition        Left Hand      Right Hand
Grammatical:
  Questions      2.62 (1.58)    2.18 (1.45)
  Statements     2.02 (.87)     2.02 (.73)
Emotional:
  Happy          2.45 (1.4)     2.01 (1.31)
  Sad            3.21 (1.83)    2.84 (1.84)
Tap Alone        1.73 (.67)     1.65 (.93)

Figure 3. Plot of mean variance of taps by condition for each hand (TA = tapping alone, Q = questions, P = statements, H = happy, S = sad; separate symbols for the right and left hands).


The means are displayed graphically in Figure 4. The main effect for hand was not significant (F = .73, df = 2/38, PR>F = .4890). The main effect for prosody condition was significant (F = 43.38, df = 3/57, PR>F = .0001). Post-hoc testing for condition effects showed the following pattern in mean fundamental frequency: happy > question > sad = statement. No hand-by-prosody interaction was shown (F = 1.39, df = 6/114, PR>F = .2252).

Means and standard deviations representing the utterance duration in each prosodic condition are presented in Table 5. The means are displayed graphically in Figure 5. The main effect for hand was nonsignificant (F = .27, df = 2/38, PR>F = .7676). The main effect for prosody condition was significant (F = 178.16, df = 3/57, PR>F = .0001). Post-hoc testing showed a significantly longer duration for the sad condition when compared to the other prosody conditions. The hand-by-prosodic-condition interaction was nonsignificant (F = 1.40, df = 6/114, PR>F = .2211).

Means and standard deviations depicting the variance in frequency for each prosodic condition are presented in Table 6. The means are displayed graphically in Figure 6. The main effect for hand was nonsignificant (F = 1.14, df = 2/38, PR>F = .3290). The main effect for prosodic condition was significant (F = 33.97, df = 3/57, PR>F = .0001). Post-hoc analysis produced the following pattern of variability: happy > question > sad = statement. No hand-by-prosodic-condition interaction was revealed (F = .41, df = 6/114, PR>F = .8717).


Table 4. Mean data across subjects for dependent variable: mean fundamental frequency. Values are mean (S.D.).

Condition        Left Hand        Right Hand       No Hand
Grammatical:
  Questions      110.93 (11.3)    111.73 (10.6)    113.71 (11.35)
  Statements     103.96 (9.08)    103.32 (9.66)    105.48 (11.39)
Emotional:
  Happy          121.75 (16.3)    122.77 (15.4)    122.5 (15.74)
  Sad            104.1 (8.81)     103.68 (8.0)     105.16 (10.21)

Figure 4. Plot of means for the mean fundamental frequency by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).

Table 5. Mean data across subjects for dependent variable: utterance duration (in seconds). Values are mean (S.D.).

Condition        Left Hand      Right Hand     No Hand
Grammatical:
  Questions      1.027 (.09)    1.014 (.08)    1.026 (.08)
  Statements     1.026 (.08)    1.044 (.07)    1.044 (.09)
Emotional:
  Happy          0.991 (.08)    1.003 (.08)    1.007 (.09)
  Sad            1.271 (.13)    1.272 (.12)    1.25 (.13)

Figure 5. Plot of means for the utterance duration by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).

Table 6. Mean data across subjects for dependent variable: variance of frequency (standard deviation). Values are mean (S.D.).

Condition        Left Hand       Right Hand      No Hand
Grammatical:
  Questions      15.71 (5.15)    17.07 (6.03)    17.51 (6.29)
  Statements     9.97 (2.88)     9.87 (2.54)     11.09 (5.45)
Emotional:
  Happy          20.65 (8.46)    21.39 (7.5)     22.04 (7.98)
  Sad            10.6 (3.55)     10.67 (2.96)    11.08 (5.18)

Figure 6. Plot of means for the frequency variance by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).


Means and standard deviations for the minimum fundamental frequency for each prosodic condition are presented in Table 7. The means are displayed graphically in Figure 7. The main effect for hand was not significant (F = .45, df = 2/38, PR>F = .6419). The main effect for prosody condition was significant (F = 6.10, df = 3/57, PR>F = .0011). Post-hoc testing showed higher minimum frequencies for the happy and question conditions than for the sad and statement conditions. The interaction term for hand-by-prosodic-condition was nonsignificant (F = .35, df = 6/114, PR>F = .9057).

Means and standard deviations depicting the maximum frequency for each prosodic condition are presented in Table 8. The means are displayed graphically in Figure 8. The main effect for hand was not significant (F = .96, df = 2/38, PR>F = .3924). The main effect for prosodic condition was significant (F = 35.60, df = 2/38, PR>F = .0001). Post-hoc testing revealed the following pattern of significant differences in prosodic condition for maximum frequency: happy > questions > sad = statement. No hand-by-prosodic-condition interaction was shown (F = .33, df = 6/114, PR>F = .9207).

Means and standard deviations representing the frequency range for each prosody condition are presented in Table 9. The means are displayed graphically in Figure 9. No main effect for hand was shown (F = .78, df = 2/38, PR>F = .4643).


Table 7. Mean data across subjects for dependent variable: minimum frequency. Values are mean (S.D.).

Condition        Left Hand       Right Hand      No Hand
Grammatical:
  Questions      85.81 (5.87)    85.95 (7.76)    86.02 (7.77)
  Statements     84.14 (6.4)     82.99 (6.87)    83.18 (7.91)
Emotional:
  Happy          85.67 (6.24)    85.01 (6.78)    86.36 (7.66)
  Sad            83.73 (6.3)     83.88 (5.6)     84.68 (7.08)

Figure 7. Plot of means for the minimum frequency by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).

Table 8. Mean data across subjects for dependent variable: maximum frequency. Values are mean (S.D.).

Condition        Left Hand         Right Hand        No Hand
Grammatical:
  Questions      158.54 (23.31)    162.63 (23.61)    162.54 (23.26)
  Statements     133.15 (14.27)    131.58 (12.33)    136.31 (23.28)
Emotional:
  Happy          173.57 (35.66)    177.21 (29.45)    184.3 (46.11)
  Sad            136.91 (21.0)     137.43 (13.33)    139.97 (22.92)

Figure 8. Plot of means for the maximum frequency by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).

Table 9. Mean data across subjects for dependent variable: frequency range. Values are mean (S.D.).

Condition        Left Hand        Right Hand       No Hand
Grammatical:
  Questions      72.74 (23.23)    76.68 (22.18)    76.52 (21.55)
  Statements     49.01 (14.74)    48.59 (10.72)    53.14 (24.04)
Emotional:
  Happy          87.9 (33.31)     92.2 (26.17)     97.94 (46.04)
  Sad            53.18 (18.3)     53.55 (13.54)    55.29 (23.69)

Figure 9. Plot of means for the frequency range by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).


The main effect for prosody condition was significant (F = 34.9, df = 3/57, PR>F = .0001). Post-hoc testing revealed the following pattern in frequency range: happy > questions > sad = statements. The interaction term for hand-by-prosodic-condition was not significant (F = .26, df = 6/114, PR>F = .9551).

Means and standard deviations depicting the utterance slope for each prosody condition are presented in Table 10. The means are displayed graphically in Figure 10. The hand effects were nonsignificant (F = .81, df = 2/38, PR>F = .4539). The main effect for prosodic condition was significant (F = 42.61, df = 3/57, PR>F = .0001). Post-hoc testing indicated that the happy and question contours with positive slopes differed significantly from the sad and statement contours with negative slopes. The hand-by-prosodic-condition interaction was not significant (F = 2.02, df = 6/114, PR>F = .0688).

Summary

Analyses of the concurrent performances and their comparison to the baseline performances were made for both the finger tapping and the prosodic conditions. None of the dependent measures demonstrated a hand-by-prosodic-condition interaction. However, descriptive analysis revealed a bimanual and symmetrical effect of speech upon the finger tapping. The tapping was slower in the speaking conditions than in the baseline conditions. Tapping did not have an effect upon the speech prosody production characteristics.


Table 10. Mean data across subjects for dependent variable: slope of utterance. Values are mean (S.D.).

Condition        Left Hand       Right Hand     No Hand
Grammatical:
  Questions      .0875 (.09)     .122 (.1)      .113 (1.0)
  Statements     -.067 (.04)     -.071 (.04)    -.056 (.07)
Emotional:
  Happy          .052 (.05)      .038 (.07)     .058 (.08)
  Sad            -.049 (.05)     -.054 (.03)    -.05 (.05)

Figure 10. Plot of means for the slope of the utterance by condition for each hand (Q = questions, P = statements, H = happy, S = sad; separate symbols for the right-hand, left-hand, and no-hand conditions).


These results require the acceptance of the null hypotheses for the tapping, for the production of emotional prosody, and for the production of linguistic prosody.


CHAPTER V
DISCUSSION

The results of this investigation do not support the notion of a hemispheric split in prosodic production. The lack of an asymmetric interference effect in the dual task paradigm suggests that both hemispheres may be processing aspects of linguistic and emotional prosody. There are at least two sides to this position. The first is that the production of prosodic features is a bihemispheric phenomenon regardless of emotional or linguistic factors. The second possible explanation is that linguistic or emotional contexts influence the hemisphericity of the productive control of prosody. However, the procedures used in this investigation did not allow for the influence of linguistic or emotional factors in the speech produced.

Procedural Explanations

Whenever the expected results are not obtained, the potential influence of the procedures must be addressed. The literature on dual task paradigms describes a number of procedural guidelines to maximize the likelihood of creating interference effects (Bryden, 1982; Green and Vaid, 1986). The careful planning of this investigation took many of these precautions into consideration. In contrast with the majority of studies reviewed in the literature on dual task paradigms, the bimanual results obtained in this investigation were symmetrical.


The subjects of the study did not appear to influence the production of symmetrical results. The number of subjects did not appear to be a factor. The original work of Kinsbourne and Cook (1971) produced significant asymmetrical results with twenty subjects. However, the motor task in their study was dowel balancing rather than finger tapping. Kee, Bathurst and Hellige (1983) did use single finger tapping as the motor task and obtained asymmetrical results with twelve right-handed and twelve left-handed subjects. Green and Vaid (1986) recommended that handedness, sex, and language experience be controlled in subject selection. These criteria were accounted for in the questionnaire for enrollment as a subject in the current investigation.

The use of single finger tapping had been shown in the literature to produce greater verbal laterality effects than two-key tapping (Kee, Morris, Bathurst and Hellige, 1986). Distal finger movement has been well documented as an exclusively contralateral event (e.g., Brinkman and Kuypers, 1972). Therefore, the nature of the motor task itself would not appear to be responsible for the symmetrical results. The sensitivity of the tapping device has come under close scrutiny in the literature (Green and Vaid, 1986).


The device constructed for this study was designed to meet the most current requirements as outlined in the literature (Green and Vaid, 1986).

Analysis of the tapping has been the focus of the majority of verbal-manual dual task studies. Tapping rate has typically been measured for the duration of the trial (usually 15 to 20 seconds). In this study, however, only that portion of the tapping coinciding with the target sentence was analyzed. It is conceivable that since the paragraphs were always the same, the tapping may have been altered just prior to the target. At that point in time the subject may have been adjusting his attentional set and preparing for the analysis of the novel segment of the stimuli.

Speech shadowing was selected to meet the recommendation that the verbal task in the dual task paradigm be complex and continuous (Green and Vaid, 1986). Shadowing permitted some control over individual variability of speech production, as it required the exact repetition of a model. The difficulty level of such online imitation may have forced the subjects to attend to the acoustic/phonetic transcoding of the information rather than to the linguistic and/or emotional information. This argument will be addressed further in the discussion section on prosody models. It is also possible that the use of the same paragraph across all conditions may have served to reduce the attentional requirements of the subjects on all but the target sentence. The neutrality of the stimuli, with the exception of the target sentence, may have been a factor.


Paragraphs carrying a greater degree of prosodic information and a greater variety of contours may not only have produced stronger attentional demands, but may have produced more asymmetrical results in response to such prosodic demands.

Prosody Model Explanations

In understanding the neurologic basis for prosody, there are two interrelated issues that should be addressed. One deals with the question of whether the left, right, or both hemispheres are dominant for speech prosody. A related issue concerns the various functions of prosody as part of the production of speech, as a mechanism for expressing emotions, or as part of the linguistic code. Prosody has been called a third component of language along with grammar and semantics (Weintraub et al., 1981). The left hemisphere's dominant role in language has been clearly established, and linguistic aspects of prosody have been attributed to the left hemisphere. Prosody (especially affective prosody) has been linked with emotional functions of the right hemisphere (Tucker et al., 1977). Another aspect of production of prosody is that prosody is tied to the acoustic-phonetic process of speech and is involved in changes in pitch, intensity, and duration. Studies of patients with right or left hemisphere lesions have documented impairment to the segmental and prosodic aspects of speech following lesions to either hemisphere or, for that matter, brainstem structures (Kent and Rosenbek, 1982).


When discussing the production of prosodic features, the role of motor speech ability and the acoustic-phonetic aspects of speech and prosody should not be ignored. Sidtis (1984) has suggested that deficits in the production of prosodic features may occur independently of emotional or linguistic content. However, he states further that "the existence of an expressive dysprosody for emotional and paralinguistic content with the control of vocal pitch otherwise intact remains to be demonstrated" (1984, p. 110). Obviously, speech production is the mechanism by which humans express spoken language and/or emotional content. In this respect, the prosodic aspects of speech are influenced by the context (linguistic and/or emotional) in which they are evoked. Emotional speech also has linguistic structure. Therefore, it seems that it is the strength of the relative contributions of linguistic versus emotional content that determines the characteristics of speech prosody. The implication that prosody resides in one or the other hemisphere may reflect nothing more than the function served by prosodic features to express specific linguistic or emotional content.

This is a similar argument to that offered by Shipley-Brown and Dingwall (1986) in advancing their Attraction Hypothesis. They suggested that the comprehension of prosody for both emotional and linguistic content was primarily a right-hemisphere phenomenon. In contrast, they observed that the left hemisphere also was active in comprehending linguistic but not emotional prosody.


They hypothesized that the right hemisphere is dominant for prosodic comprehension; however, the influence of linguistic content may attract certain prosodic characteristics to the left hemisphere. One of the few studies to evaluate production aspects of linguistic and emotional prosody as a function of left- and right-hemisphere damage was conducted by Shapiro and Danly (1985). Their findings suggested dominance of the right hemisphere in the production of emotional prosody, but bilateral contribution in the production of linguistic prosody. This is a similar position to that offered by Shipley-Brown and Dingwall in their study of prosodic comprehension.

The view, as stated above, that prosody is tied to speech and that its characteristics may be influenced by linguistic or emotional contexts, would predict different results from those obtained by Shipley-Brown and Dingwall (1986) and Shapiro and Danly (1985). Specifically, this modification of the Attraction Hypothesis, hereafter referred to as the Investment Hypothesis, would predict that the right hemisphere would have a greater investment in emotional speech prosody and that the left hemisphere would have a greater investment in linguistic speech prosody. The investment is determined by the contextual nature of the utterance.


The prosodic feature is the outcome of the selective investment of emotional or linguistic input.

The results of the Shapiro and Danly (1985) study suggested that right hemisphere damage alone "may result in a primary disturbance of speech prosody that may be independent of the disturbances in affect often noted in right-brain damaged populations" (1985, p. 19). Their conclusions may have been premature, as they did not test patients with left-frontal lesions. Crary and Haak (1986) tested patients with right- and left-hemisphere lesions, including left-frontal. Some of their patients were aphasic, while others were not. They observed that a basic deficit to speech prosody resulted from either right- or left-frontal-hemisphere damage. However, linguistic aspects of prosodic production (i.e., terminal contour direction and sentence fundamental frequency declination) were impaired only in those patients demonstrating an aphasic impairment. Based on the results obtained by Crary and Haak (1986), the question is raised as to whether the linguistic prosody deficit in the right-hemisphere-damaged patients of Shapiro and Danly (1985) indicated the presence of linguistic prosody in the right hemisphere or represented a more direct impairment in the production of segmental and prosodic aspects of speech irrespective of language content. Further support for this position was offered by Ryalls (1986), in his reply to Shapiro and Danly. He cited studies describing prosodic deficits with left-hemisphere damage and referred to Shapiro and Danly's (1985) claim about right hemisphere damage producing a primary deficit in prosody as "premature" (1986, p. 187).


In summarizing the first argument, it is proposed that the production of speech prosody is bihemispheric only inasmuch as it may be influenced by the linguistic and emotional content of spoken utterances. This position is consistent with traditional views such as those of Monrad-Krohn, who, when considering the localization of prosody, espoused that "cooperation of the whole brain is probably needed" (p. 415).

The results of the present study did not support this Investment Hypothesis. However, the model may still be useful in explaining the present results. The suspect area of influence is the shadowing technique used in the dual task paradigm. Shadowing was selected because current literature on verbal-manual dual task paradigms suggested that the verbal task should be complex and continuous (Green and Vaid, 1986). Retrospectively, it is possible that the difficulty of the shadowing task caused the subjects to focus on the acoustic-phonetic components of segmental and prosodic speech rather than on the linguistic or emotional context. In fact, the subjects were very accurate at reproducing the respective prosodic contours and showed no confusion among expected acoustic parameters associated with the four types of utterances.


Marslen-Wilson (1975) has suggested that during shadowing there are simultaneous levels of processing. He concluded that it would be possible for information processing at one level to constrain information processing at another level. The procedures used in this investigation did not permit the formation of inferences regarding whether or not the subjects attended to the linguistic or affective contexts of the shadowed stimuli. The Investment Hypothesis would predict an asymmetrical hand effect for emotional versus linguistic context. However, if the contextual investment of emotional or linguistic information was weakened as a result of task demands, no asymmetry attributed to emotion or to language would be expected. The effect of speech on hand performance would be expected, however.

Dual Task Paradigm Explanations

The history of the dual task paradigm is laden with studies showing bimanual effects. These effects, however, are most often asymmetrical. One argument used to explain bilateral and symmetrical interference effects has been directed toward the specific tasks involved. When both tasks are too difficult the interference effects are thought to be masked; when both tasks are too simple, a ceiling effect is thought to occur (Green and Vaid, 1986). The symmetrical effects in this study do not appear to be the result of either masked interference or ceiling effects, as one task was designed to be simple and the other complex.


McFarland and Aston (1978) have offered an explanation which may be better suited to the tasks used here. They studied sequential finger tapping and verbal memory load tasks. Their investigation used two levels of difficulty in the verbal task. The more difficult verbal task did not produce the differential interference effects. McFarland and Aston suggested that increased difficulty in either of the tasks (cognitive/verbal or motor) leads to a loss of asymmetrical effects. They noted that "under the increased memory-span condition, verbal-task performance may involve more diffuse neural activity ..." (1978, p. 739). The issue in relating these findings to the current investigation is one of defining levels of difficulty. No levels of shadowing difficulty were included for comparison. It is conceivable, however, that the demands of the shadowing task were such that the differential interference effects were lost. Summers and Sharp (1979) also found bimanual and symmetrical results in their pairing of sequential finger tapping and verbal rehearsal. However, their results using single finger tapping were asymmetrical. Despite the fact that these studies used sequential finger tapping and neither used shadowing, the argument that increasing the difficulty of either task produces bilateral disruption should be considered.

Previous use of speech shadowing in dual task paradigms has been limited.


Allport, Antonis and Reynolds (1972) used shadowing of speech and reading of music as dual tasks and did not find significant interference. It has been suggested that this was because the participants were musicians and were proficient in their musical ability. This implied that the tasks were too simple for the subjects and therefore produced a ceiling effect. Nonlateralizing results have also been obtained by investigators who were attempting to replicate studies with lateralizing results (e.g., Lomas and Kimura, 1976). The functional cerebral distance model used in this investigation suggested that no interference may be found if both tasks are different and require brain structures that do not overlap. Single finger tapping and speech tasks have been shown to produce interference effects. Few of the studies that have used speech as the interfering task have analyzed the speech. One of those that did was conducted by Bowers et al. (1978). They did not find disruption of speech. They did find disruptions of the tapping. Bowers et al. suggested that this may have been because of a 'one-way street' phenomenon which exists with language having priority over motor performance.

Concluding Remarks

Prosody may be a component of speech production. As such, prosody would appear to be influenced by the context in which it is produced. It was proposed that this influence comes from varying degrees of investment in emotional and linguistic information.


This Investment Hypothesis would predict an asymmetrical hand effect for emotional versus linguistic context. This asymmetrical effect may be weakened as a result of task demands, thereby producing no asymmetry attributable to either emotion or language. In the verbal-manual dual task paradigm, the verbal task appears to dominate the direction of the interference effects.

Implications for Future Research

The results of this study question the existing models of prosody production. The findings by themselves are not sufficient to make an unequivocal statement about the production of prosody in brain-intact males. However, the strength of these results in their nonsignificance warrants further attention to this paradigm. One aspect of future research would involve further analysis of the data already obtained in the current study. As suggested in the discussion of the procedures, disruption in tapping performance may have occurred just prior to the target sentence or just after the target. Comparison of the tapping rate before, during, and after the target sentence would permit verification of attentional shifting on the part of the subject as he responds to the novel segment of the paragraph.

A second area for future research would address the assessment of the subjects' processing of the linguistic and emotional content of their productions.


The overall design would be the same as the current one, with the inclusion of an additional step in the procedure. The subjects would be asked to select (from a list) the type of contour most closely resembling the one they just shadowed. This contour choice would be made after each paragraph and would allow the examiner to compare the acoustical accuracy of production with the perceptual processing of the contour. Another potential study would address the influence of emotional or linguistic investment in the production of prosodic contours. This would require practice sessions during which each subject would be instructed in how to produce target sentences using specific prosodic contours. Then, within the context of tapping, the subject would be shown a card indicating the contour type to be produced. The subject would then be asked to produce the target sentence with as much emotional or linguistic investment as possible.


APPENDIX A
QUESTIONNAIRE

Subject Number:
Date:
Sex:
Age:
Handedness:
Native Language:
Education Level:

Do you have a history of any neurological problems?
Do you have an identified hearing loss?
Do you consider yourself an expert in:
  Typing (faster than 40 wpm)?
  Morse Code?
  Ham Radio Operation?
Are you a musician?


APPENDIX B
INSTRUCTIONS

Tasks
There will be seventy 15-second tasks and four paragraphs to read. I will tell you which tasks to do. Some will be tapping your finger alone, some will be repeating paragraphs alone, and at times you will be tapping and repeating at the same time. When you do the two at the same time, you should try to do your best on both. DO THEM EQUALLY WELL.

Tapping
You will be asked to tap your index finger on this copper plate. Use the pad of your finger, not the very tip. You will be tapping with your right hand and with your left hand. Demonstrate position. (Checklist)
1. Tap within the square.
2. Thumb and middle finger stay in contact with the rest of the copper plate.
3. You may arch your fingers.
4. Arm in contact with the table.
5. Use an up-and-down motion, not a rocking from side-to-side.
Go ahead and try the hand position. Tap as FAST and at as CONSISTENT a rate as you possibly can. PRACTICE TAPPING TRIALS. ANY QUESTIONS?

Shadowing
You will be listening to a tape recording of a man reading paragraphs (one at a time). You are to repeat the paragraph by following along behind him. Try not to get too far behind, and as you become more familiar with the information, don't get ahead of the speaker. It is important to repeat EXACTLY WHAT the man says and in EXACTLY THE SAME WAY the man says it. PRACTICE SHADOWING TRIALS. ANY QUESTIONS?

Review
Remember: Say the same words in the same way as the speaker.
Remember: Tap as FAST and at as CONSISTENT a rate as you possibly can.
ANY QUESTIONS? We will pause to take 2 short breaks. You may request to stop at any time.


APPENDIX C
SAS STATISTICAL PROGRAM

DATA DISS;
  INPUT HAND $ 1 COND $ 3-4 SENT 5-6 SUBJ 8-9 TAPSEC TAPMN TAPMAX TAPSD
        MEANHZ TESTDUR SDHZ MINHZ MAXHZ CHANGEHZ SLOPE;
  IF COND = '  ' THEN COND = 0;
  IF COND = 'GQ' THEN COND = 1;
  IF COND = 'GP' THEN COND = 2;
  IF COND = 'EH' THEN COND = 3;
  IF COND = 'ES' THEN COND = 4;
  IF HAND = 'N' THEN DO; TAPSEC = .; TAPMN = .; TAPSD = .; END;
  IF COND = 0 THEN DO; MEANHZ = .; TESTDUR = .; SDHZ = .; MINHZ = .; MAXHZ = .; SLOPE = .; END;
  IF SENT = -2 THEN DELETE;
CARDS;
LDGP-1 1 3.62 55.0 2.0 105 1.105 7.30 92 124 32 -0.082660
NAES 1 0.00 0.0 0.0 107 0.985 9.92 89 142 53 -0.102510
;
PROC SORT; BY SUBJ HAND COND;
PROC MEANS; BY SUBJ HAND COND;
  VAR TAPSEC TAPMN TAPSD MEANHZ TESTDUR SDHZ MINHZ MAXHZ CHANGEHZ SLOPE;
  OUTPUT OUT = AVES MEAN = TAP1 TAP2 TAP3 SP1 SP2 SP3 SP4 SP5 SP6 SP7;
PROC ANOVA;
  CLASSES SUBJ HAND COND;
  MODEL TAP1-TAP3 SP1-SP7 = SUBJ HAND SUBJ*HAND COND COND*SUBJ HAND*COND;
  TEST H = HAND E = HAND*SUBJ;
  TEST H = COND E = COND*SUBJ;
  MEANS HAND*COND;
PROC SORT; BY HAND COND;
PROC MEANS; BY HAND COND;
  VAR TAP1-TAP3 SP1-SP7;
  OUTPUT OUT = AVES MEAN = MTAP1 MTAP2 MTAP3 MSP1 MSP2 MSP3 MSP4 MSP5 MSP6 MSP7;
PROC PLOT;
  PLOT (MTAP1--MSP7)*COND = HAND;
PROC ANOVA DATA = AVES;
  BY HAND;
  CLASSES SUBJ COND;
  MODEL TAP1--SP7 = SUBJ COND;
  MEANS COND/DUNCAN;
/* EOJ */


REFERENCES

Allport, D.A., Antonis, B., & Reynolds, P. (1972). On the division of attention: A disproof of the single channel hypothesis. Quarterly Journal of Experimental Psychology, 24, 225-235.

Blumstein, S., & Cooper, W.E. (1974). Hemispheric processing of intonation contours. Cortex, 10, 391-404.

Bowers, D., Heilman, K.M., Satz, P., & Altman, A. (1978). Performance on verbal, nonverbal and motor tasks by right-handed adults. Cortex, 14, 540-556.

Brinkman, J., & Kuypers, H.G.J.M. (1972). Split-brain monkeys: Cerebral control of ipsilateral and contralateral arm, hand, and finger movements. Science, 176, 536-539.

Bryden, M.P. (1982). Laterality: Functional asymmetry in the intact brain. New York: Academic Press.

Bryden, M.P., & Ley, R.G. (1983). Right-hemispheric involvement in the perception and expression of emotion in normal humans. In K.M. Heilman & P. Satz (Eds.), Neuropsychology of Human Emotion (pp. 6-44). New York: Guilford Press.

Cherry, E.C., & Taylor, W.K. (1954). Some further experiments on the recognition of speech with one and two ears. Journal of the Acoustical Society of America, 26, 554-559.

Cooper, W.E., Soares, C., Nicol, J., Michelow, D., & Goloski, S. (1984). Clausal intonation after unilateral brain damage. Language and Speech, 27, 17-24.

Crary, M.A., & Haak, N.J. (1986). A neurolinguistic basis for propositional prosody. Poster presented at the annual meeting of the International Neuropsychological Society, Denver, Colorado.

Cutler, A., & Ladd, D.R. (1983). Language and Communication: Vol. 14. Prosody: Models and Measurements. New York: Springer-Verlag.

Dalen, K., & Hugdahl, K. (1986). Inhibitory versus facilitory interference for finger-tapping to verbal and nonverbal, motor, and sensory tasks. Journal of Clinical and Experimental Neuropsychology, 8, 627-636.

Danly, M., Cooper, W.E., & Shapiro, B.E. (1983). Fundamental frequency, language processing and linguistic structure in Wernicke's aphasia. Brain and Language, 19, 1-24.

Danly, M., & Shapiro, B.E. (1982). Speech prosody in Broca's aphasia. Brain and Language, 16, 171-190.

Denes, G., Caldonetto, E.M., Semenza, C., Vaggnes, K., & Zettlin, M. (1984). Discrimination and identification of emotions in human voice by brain-damaged subjects. Acta Neurologica Scandinavica, 62, 154-162.

Eady, S.J., & Cooper, W.E. (1986). Speech intonation and focus location in matched statements and questions. Journal of the Acoustical Society of America, 80, 402-415.

Friedman, A., & Polson, M.C. (1981). Hemispheres as independent resource systems: Limited-capacity processing and cerebral specialization. Journal of Experimental Psychology: Human Perception and Performance, 7, 1031-1058.

Green, A. (1986). A time sharing cross-sectional study of monolinguals and bilinguals at different levels of second language acquisition. Brain and Cognition, 5, 477-497.

Green, A., & Vaid, J. (1986). Methodological issues in the use of the concurrent activities paradigm. Brain and Cognition, 5, 465-476.

Hartje, W., Willmes, K., & Weniger, D. (1985). Is there parallel and independent hemispheric processing of intonational and phonetic components of dichotic speech stimuli? Brain and Language, 24, 83-99.

Heilman, K.M. (1983). Introduction. In K.M. Heilman & P. Satz (Eds.), Neuropsychology of Human Emotion. New York: Guilford Press.

Heilman, K.M., Bowers, D., Speedie, L., & Coslett, H.B. (1984). Comprehension of affective and nonaffective prosody. Neurology, 34, 917-921.

Hellige, J.B. (1985). Hemisphere-specific priming and interference: Issues in conceptualization. Paper presented at the Annual Convention of the International Neuropsychological Society, San Diego, California.

Hellige, J.B., & Longstreth, L.E. (1981). Effects of concurrent hemisphere-specific activity on unimanual tapping rate. Neuropsychologia, 19, 395-405.

Huck, S., Cormier, W.H., & Bounds, W.G. (1974). Reading Statistics and Research. New York: Harper and Row.

Kee, D.W., Morris, K., Bathurst, K., & Hellige, J.B. (1986). Lateralized interference in finger tapping: Comparisons of rate and variability measures under speed and consistency tapping instructions. Brain and Cognition, 5, 268-279.

Kent, R.D., & Rosenbek, J.C. (1982). Prosodic disturbance and neurologic lesion. Brain and Language, 15, 259-291.

Kimura, D., & Vanderwolf, C.H. (1970). The relation between hand preference and the performance of individual finger movements by left and right hands. Brain, 93, 769-774.

Kinsbourne, M., & Cook, J. (1971). Generalized and lateralized effects of concurrent verbalization on a unimanual skill. Quarterly Journal of Experimental Psychology, 23, 341-345.

Kirk, R.E. (1968). Experimental Design: Procedures for the Behavioral Sciences. Belmont, California: Wadsworth Publishing Co., Inc.

Lackner, J.R., & Shattuck-Hufnagel, S.R. (1982). Note: Alterations in speech shadowing ability after cerebral injury in man. Neuropsychologia, 20, 709-714.

Lezak, M.D. (1983). Neuropsychological Assessment. New York: Oxford University Press.

Lomas, J., & Kimura, D. (1976). Intrahemispheric interaction between speaking and sequential manual activity. Neuropsychologia, 14, 23-33.

Lonie, J., & Lesser, R. (1983). Intonation as a cue to speech act identification in aphasic and other brain-damaged patients. Research News: International Journal of Rehabilitation Research, 6, 512-513.

Marslen-Wilson, W.D. (1975). Sentence perception as an interactive parallel process. Science, 189, 226-228.

McFarland, K., & Aston, R. (1978). The influence of concurrent task difficulty on manual performance. Neuropsychologia, 16, 735-741.

Monrad-Krohn, G.H. (1947). Dysprosody or altered "melody of language." Brain, 70, 405-415.

Navon, D., & Gopher, D. (1979). On the economy of the human information processing system. Psychological Review, 86, 214-225.

Oldfield, R.C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97-113.

Olive, J.P. (1975). Fundamental frequency rules for synthesis of simple declarative English sentences. Journal of the Acoustical Society of America, 57, 476-482.

Papanicolaou, A.C., Levin, H.S., Eisenberg, H.M., & Moore, B.D. (1983). Note: Evoked potential indices of selective hemisphere engagement in affective and phonetic tasks. Neuropsychologia, 21, 401-405.

Peters, M. (1977). Note: Simultaneous performance of two motor activities: The factor of timing. Neuropsychologia, 15, 461-465.

Peters, M. (1981). Note: Handedness: Effect of prolonged practice on between hand performance differences. Neuropsychologia, 19, 587-590.

Rice, D.G., Abroms, G.M., & Saxman, J.H. (1969). Speech and physiological correlates of "flat" affect. Archives of General Psychiatry, 22, 566-572.

Ross, E.D. (1981). The aprosodias: Functional-anatomic organization of the affective components of language in the right hemisphere. Archives of Neurology, 38, 561-569.

Ross, E.D., & Mesulam, M. (1979). Dominant language functions of the right hemisphere? Prosody and emotional gesturing. Archives of Neurology, 36, 144-148.

Ryalls, J.H. (1982). Intonation in Broca's aphasia. Neuropsychologia, 20, 355-360.

Ryalls, J.H. (1986). Reply to Shapiro and Danly. Brain and Language, 20, 183-187.

SAS Institute Inc. (1985). SAS User's Guide: Statistics, Version 5 Edition. Cary, NC: SAS Institute Inc.

Shapiro, B.E., & Danly, M. (1985). The role of the right hemisphere in the control of speech prosody in propositional and affective contexts. Brain and Language, 25, 19-36.

Shattuck, S.R., & Lackner, J.R. (1975). Speech production: Contribution of syntactic structure. Perceptual and Motor Skills, 40, 931-936.

Shipley-Brown, F., & Dingwall, W.O. (1986). Affective and linguistic prosody: A dichotic listening test of their processing in normals. Paper presented at the Annual Convention of the International Neuropsychological Society, Denver, Colorado.

Sidtis, J.J. (1984). Music, pitch perception, and the mechanisms of cortical hearing. In M.S. Gazzaniga (Ed.), Handbook of Cognitive Neuroscience. New York: Plenum Press.

Summers, J.J., & Sharp, C.A. (1979). Bilateral effects of concurrent verbal and spatial rehearsal on complex motor sequencing. Neuropsychologia, 17, 331-343.

Taylor, M.M., Lindsay, P.H., & Forbes, S.M. (1967). Quantification of shared capacity processing in auditory and visual discrimination. Acta Psychologica, 27, 223-229.

Tompkins, C.A., & Flowers, C.R. (1985). Perception of emotional intonation by brain-damaged adults: The influence of task processing levels. Journal of Speech and Hearing Research, 28, 526-538.

Tompkins, C.A., & Mateer, C.A. (1985). Right-hemisphere appreciation of prosodic and linguistic indications of implicit attitude. Brain and Language, 24, 185-203.

Tucker, D.M., Watson, R.T., & Heilman, K.M. (1977). Discrimination and evocation of affectively intoned speech in patients with right parietal disease. Neurology, 27, 947-950.

Vaissiere, J. (1983). Language independent prosodic features. In A. Cutler and D.R. Ladd (Eds.), Language and Communication: Vol. 14. Prosody: Models and Measurements. New York: Springer-Verlag.

Weintraub, S., Mesulam, M., & Kramer, L. (1981). Disturbances in prosody: A right hemisphere contribution to language. Archives of Neurology, 38, 742-744.

Williams, C.E., & Stevens, K.N. (1972). Emotions and speech: Some acoustical correlates. Journal of the Acoustical Society of America, 52, 1238-1250.

Wyke, M. (1968). The effect of brain lesions in the performance of an arm-hand precision task. Neuropsychologia, 6, 125-134.


BIOGRAPHICAL SKETCH

Nancy Jeanne Haak, daughter of Dr. Edward D. Haak and Mrs. Jeanne Brainard Haak, was born in LaGrange, Georgia, on September 8, 1957. The fourth of five children, she spent the first seventeen years of her life in Warm Springs, Georgia. Her father practiced his medical specialty there at the Georgia Warm Springs Polio Foundation. From her family and the Foundation, Nancy Jeanne developed an interest in the health related professions and has carried with her a model of how rehabilitation should be done. Her teachers at Warm Springs Elementary School fostered in her a love of teaching.

Nancy Jeanne was an honors graduate of her 1975 high school class in Manchester, Georgia. During her years there, she decided to pursue the profession of speech pathology as it appeared to have the right combination of teaching and clinical practice. In 1979, she graduated with highest honors from Auburn University with her Bachelor of Arts degree in speech pathology and audiology and her minor in psychology. Her Master of Science degree in speech pathology was obtained in 1980 from Purdue University with her minor in psychology.

After two and a half years of practicing clinical speech/language pathology, Nancy Jeanne chose her specialty area in neurogenic communication disorders. In the fall of 1983 she came to the University of Florida to begin her doctoral studies in speech pathology and to minor in neuropsychology. Nancy Jeanne is a member of professional organizations at the state and national level and, through the encouragement of her doctoral committee chairman, has presented papers and posters at state, national, and international meetings. At the present time Nancy Jeanne is Director of the Department of Communicative Disorders at a recently opened rehabilitation hospital in Gainesville, Florida.


I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Michael A. Crary, Ph.D., Chairman
Associate Professor of Speech

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Lombardino, Ph.D.
Associate Professor of Speech

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Howard B. Rothman, Ph.D.
Associate Professor of Speech

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Gonzalez-Rothi, Ph.D.
Assistant Professor of Speech

I certify that I have read this study and that in my opinion it conforms to acceptable standards of scholarly presentation and is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy.

Eileen B. Fennell, Ph.D.
Professor of Clinical and Health Psychology

This dissertation was submitted to the Graduate Faculty of the Department of Speech in the College of Liberal Arts and Sciences and to the Graduate School, and was accepted as partial fulfillment of the requirements for the degree of Doctor of Philosophy.

August, 1987

Dean, Graduate School