The Effects of lexical and semantic processing on visual selective attention

Material Information

Title:
The Effects of lexical and semantic processing on visual selective attention
Physical Description:
xiv, 232 leaves : ill. ; 29 cm.
Language:
English
Creator:
Petry, Margaret Carthas, 1966-
Publication Date:
1995
Subjects

Subjects / Keywords:
Research   ( mesh )
Attention   ( mesh )
Semantic Differential   ( mesh )
Reading   ( mesh )
Neuropsychological Tests   ( mesh )
Department of Clinical and Health Psychology thesis Ph.D   ( mesh )
Dissertations, Academic -- College of Health Related Professions -- Department of Clinical and Health Psychology -- UF   ( mesh )
Genre:
bibliography   ( marcgt )
non-fiction   ( marcgt )

Notes

Thesis:
Thesis (Ph.D.)--University of Florida, 1995.
Bibliography:
Bibliography: leaves 221-231.
Statement of Responsibility:
by Margaret Carthas Petry.
General Note:
Typescript.
General Note:
Vita.

Record Information

Source Institution:
University of Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002331221
oclc - 50414483
notis - ALT4899
System ID:
AA00009037:00001


THE EFFECTS OF LEXICAL AND SEMANTIC PROCESSING
ON VISUAL SELECTIVE ATTENTION

By

MARGARET CARTHAS PETRY


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE
UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


1995

ACKNOWLEDGMENTS

Sincere thanks is extended to my supervisory committee

for their contributions to my dissertation, and in turn, to

my development as a researcher. Dr. Bruce Crosson,

chairperson of the committee, is given special thanks for

his invaluable guidance and support throughout this project.

Thanks is also offered to Dr. Ira Fischler for his

assistance during review of the dual-task literature and in

improving the methodology and design of this study. Dr.

Leslie Gonzalez Rothi is thanked for encouraging me to

broaden my original conceptualization of attention and

language as well as to set manageable objectives for this

project. Dr. Eileen Fennell is extended thanks for her

support and her assistance with methodological and design

issues. Dr. Russell Bauer is thanked for his help during

early conceptualization of this project as well as with

issues regarding its design. Dr. Alan Agresti is offered

thanks for his flexibility and provision of ready

statistical consultations.

Dan Edwards, Cliff LeBlanc, James Burnette, and Paula

Usita merit thanks for their assistance with subject

recruitment at Santa Fe Community College.

Extra special thanks is given to my family and friends

for their continued encouragement. My deepest thanks goes

to my husband, Andy, for his unending support and help.

TABLE OF CONTENTS


ACKNOWLEDGMENTS ........................................... ii

LIST OF TABLES ........................................... vii

LIST OF FIGURES ............................................ x

ABSTRACT ................................................ xiii

CHAPTERS

1 LITERATURE REVIEW................................. 1

Attention........................................... 1
What is It? ....................................... 1
How does Attention Operate?....................... 3
Single Unshared Resource Theory................. 7
Single Shared Resource Theory.................. 12
Multiple Unshared Resource Theory............. 15
Automatic and Controlled Processing of
Information................................. 24
Anatomy ....................................... 25
Language........................................... 31
Anatomy. ....................................... 32
Model of Recognition, Production, and
Comprehension of Written Words............... 40
Selective Attention and Language................... 44
Dual-Task Paradigm ................................ 49
Assumptions. ..................................... 49
Covert Orienting of Visual Attention Task....... 60
Experiment and Hypotheses .......................... 72
Experiment....................................... 72
Event-Related Potentials........................ 74
Anatomy Proposed to be Primarily Involved in
Selective Attention and Subsequent Processing
of Language and Visuospatial Information...... 77
Hypotheses....................................... 81


2 MATERIALS AND METHODS............................. 91

Subjects .......................................... 91
Hand Preference................................... 92
Apparatus.......................................... 93
Covert Orienting of Visual Attention Task....... 93


Language Tasks .................................. 95
Covert Orienting of Visual Attention Task
Paired with Language Tasks..................... 98
Procedure. ......................................... 104


3 RESULTS ........................................... 112

Statistical Analyses ............................ 112
Covert Orienting of Visual Attention Task
Alone and Paired with Language Tasks.......... 113
100 ms Word-COVAT Delay....................... 116
250 ms Word-COVAT Delay....................... 117
Transformed Data: 100 ms Word-COVAT Delay.... 121
Transformed Data: 250 ms Word-COVAT Delay.... 122
Language Tasks .................................. 123
Specific Types of Language Errors............. 125
Overall Language Errors....................... 127
Language Task Order........................... 129
Word Familiarity .............................. 131
Task Difficulty................................. 132
Slow Responders During the Semantic
Association-COVAT Condition.................... 133


4 DISCUSSION AND CONCLUSIONS......................... 178

Covert Orienting of Visual Attention Task....... 178
Covert Orienting of Visual Attention Task
Alone and Paired with Language Tasks.......... 179
100 ms Word-COVAT Delay ....................... 179
250 ms Word-COVAT Delay........................ 186
Language Tasks .................................. 191
Language Task Order........................... 194
Word Familiarity .............................. 194
Task Difficulty................................. 195
Slow Responders During the Semantic
Association-COVAT Condition.................... 196
Future Research................................. 199


APPENDICES

A AAL SCREENING QUESTIONNAIRE........................ 204

B BRIGGS-NEBES MODIFICATION OF THE ANNETT
HANDEDNESS QUESTIONNAIRE......................... 206

C INSTRUCTIONS FOR COVAT............................ 207

D INSTRUCTIONS FOR THE READING TASK.................. 208

E INSTRUCTIONS FOR THE SEMANTIC ASSOCIATION TASK .... 209

F LOW FREQUENCY WORDS WITH LOW-TO-MODERATE RATINGS
IN CONCRETENESS AND IMAGEABILITY USED IN THE
READING AND SEMANTIC ASSOCIATION TASKS........... 211

G HIGH FREQUENCY WORDS WITH LOW-TO-MODERATE RATINGS
IN CONCRETENESS AND IMAGEABILITY USED IN THE
SEMANTIC ASSOCIATION TASKS ...................... 212

H INSTRUCTIONS FOR THE COVAT PAIRED WITH THE
READING TASK...................................... 213

I INSTRUCTIONS FOR THE COVAT PAIRED WITH THE
SEMANTIC ASSOCIATION TASK ....................... 214

J LOW FREQUENCY WORDS WITH LOW-TO-MODERATE RATINGS
IN CONCRETENESS AND IMAGEABILITY USED IN THE
READING AND SEMANTIC ASSOCIATION TASKS WHEN
PAIRED WITH THE COVAT............................ 216

K HIGH FREQUENCY WORDS WITH LOW-TO-MODERATE RATINGS
IN CONCRETENESS AND IMAGEABILITY USED IN THE
READING AND SEMANTIC ASSOCIATION TASKS WHEN
PAIRED WITH THE COVAT .......................... 218

L POST-EXPERIMENTAL QUESTIONNAIRE ................... 220

REFERENCES .............................................. 221


BIOGRAPHICAL SKETCH ...................................... 232

LIST OF TABLES


Table page

2-1 Counterbalanced Order of Single-Task
Presentation .................................. 106

2-2 Counterbalanced Order of Dual-Task
Presentation .................................. 107

3-1 False Positive Errors: Statistics for the
Main Effect of Task........................... 135

3-2 COVAT Alone: Statistics for the Main Effect
of Trial Type................................. 136

3-3 100 ms Word-COVAT Delay: Statistics for the
Main Effect of Task........................... 137

3-4 100 ms Word-COVAT Delay: Statistics for the
Main Effect of Trial Type ..................... 138

3-5 250 ms Word-COVAT Delay: Descriptive
Statistics for the Task by Target Side
Interaction................................... 139

3-6 250 ms Word-COVAT Delay: Statistics for the
Task by Target Side Interaction............... 140

3-7 Transformed Data at the 100 ms Word-COVAT
Delay: Descriptive Statistics for the Task
by Trial Type Interaction...................... 142

3-8 Transformed Data at the 100 ms Word-COVAT
Delay: Statistics for the Task by Trial
Type Interaction .............................. 143

3-9 Transformed Data at the 250 ms Word-COVAT
Delay: Descriptive Statistics for the Task
by Trial Type Interaction...................... 145

3-10 Transformed Data at the 250 ms Word-COVAT
Delay: Statistics for the Task by Trial
Type Interaction .............................. 146

3-11 Overall Language Errors at the 100 ms Word-COVAT
Delay: Statistics for the Task Main Effect... 148

3-12 Overall Language Errors at the 250 ms Word-
COVAT Delay: Descriptive Statistics for the
Task by Frequency Interaction.................. 149

3-13 Overall Language Errors at the 250 ms Word-
COVAT Delay: Statistics for the Task by
Frequency Interaction......................... 150

3-14 Language Task Order: Statistics for the Task
Order by Word Frequency Interaction for
the Reading Only Condition..................... 152

3-15 Language Task Order: Descriptive Statistics
for the Semantic Association Condition........ 153

3-16 Language Task Order: Descriptive Statistics
for the Dual-Task Conditions .................. 154

3-17 Task Difficulty: Statistics for the Single-
and Dual-Task Conditions ...................... 155

3-18 Slow and Faster Responders During the
Semantic Association-COVAT Condition:
Descriptive Statistics for Response
Strategy ...................................... 156

3-19 Slow and Faster Responders During the
Semantic Association-COVAT Condition:
Descriptive Statistics for False Positive
Errors ........................................ 157

3-20 Slow and Faster Responders During the
Semantic Association-COVAT Condition:
Descriptive Statistics for Language Errors.... 158

3-21 Slow and Faster Responders During the
Semantic Association-COVAT Condition:
Descriptive Statistics for Word Familiarity... 160



3-22 Slow and Faster Responders During the
Semantic Association-COVAT Condition:
Descriptive Statistics for Task Difficulty..... 161

LIST OF FIGURES


Figure page

1-1 Model for the recognition, production, and
comprehension of written words, based on
Ellis and Young (1988) ........................ 84

1-2 Schematic of the covert orienting of visual
attention task................................ 86

1-3 Schematic of dual-task performance with the
covert orienting of visual attention task
and a language task........................... 88

1-4 Stimulus onset asynchronies for dual-task
performance of the covert orienting of
visual attention task and a language task..... 90

2-1 Stimulus onset asynchronies for the covert
orienting of visual attention task............ 109

2-2 Schematic of the language tasks: Reading
and generation of semantic associations....... 111

3-1 Covert orienting of visual attention task
(COVAT) alone and paired with the language
tasks at the 100 ms delay between
language task and onset of the COVAT
(C=COVAT; RC-Lo=Reading-COVAT condition with
low frequency words; RC-Hi=Reading-COVAT
condition with high frequency words;
SC-Lo=Semantic association-COVAT condition
with low frequency words; SC-Hi=Semantic
association-COVAT condition with high
frequency words; Rt=Targets presented in
the right visual field; Lt=Targets presented
in the left visual field; 100=100 ms delay
between language task and onset of the
COVAT) ........................................ 163

3-2 Covert orienting of visual attention task
(COVAT) alone and paired with the language
tasks at the 250 ms delay between
language task and onset of the COVAT
(C=COVAT; RC-Lo=Reading-COVAT condition with
low frequency words; RC-Hi=Reading-COVAT
condition with high frequency words;
SC-Lo=Semantic association-COVAT condition
with low frequency words; SC-Hi=Semantic
association-COVAT condition with high
frequency words; Rt=Targets presented in
the right visual field; Lt=Targets presented
in the left visual field; 250=250 ms delay
between language task and onset of the
COVAT) ........................................ 165

3-3 Relative frequency histograms for the mean
reaction time of the covert orienting of
visual attention task (COVAT) and
Reading-COVAT condition ....................... 167

3-4 Relative frequency histogram for the mean
reaction time of the Semantic Association-
COVAT condition............................... 169

3-5 Covert orienting of visual attention task
(COVAT) alone and paired with the language
tasks at the 100 ms delay between
language task and onset of the COVAT,
after equating for general response time
in each condition with no cue trials
(C=COVAT; RC-Lo=Reading-COVAT condition with
low frequency words; RC-Hi=Reading-COVAT
condition with high frequency words;
SC-Lo=Semantic association-COVAT condition
with low frequency words; SC-Hi=Semantic
association-COVAT condition with high
frequency words; Rt=Targets presented in
the right visual field; Lt=Targets presented
in the left visual field; 100=100 ms delay
between language task and onset of the
COVAT) ........................................ 171


3-6 Covert orienting of visual attention task
(COVAT) alone and paired with the language
tasks at the 250 ms delay between
language task and onset of the COVAT,
after equating for general response time
in each condition with no cue trials
(C=COVAT; RC-Lo=Reading-COVAT condition with
low frequency words; RC-Hi=Reading-COVAT
condition with high frequency words;
SC-Lo=Semantic association-COVAT condition
with low frequency words; SC-Hi=Semantic
association-COVAT condition with high
frequency words; Rt=Targets presented in
the right visual field; Lt=Targets presented
in the left visual field; 250=250 ms delay
between language task and onset of the
COVAT) ........................................ 173

3-7 Language errors for single- and dual-task
conditions (Semantic Only=Semantic
association task; COV&Sem=Semantic
association-COVAT condition; Reading only=
Reading task; COV&Read=Reading-COVAT
condition; 100=100ms delay between language
task and onset of the COVAT; 250=250 ms
delay between language task and onset of
the COVAT) .................................... 175

3-8 Task difficulty ratings (COVAT only=Covert
orienting of visual attention task; Reading
Only=Reading task; Semantic Only=Semantic
association task; COV&Reading=Reading-COVAT
condition; COV&Semantic=Semantic association-
COVAT condition) .............................. 177



Abstract of Dissertation Presented to the Graduate School of
the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

THE EFFECTS OF LEXICAL AND SEMANTIC PROCESSING
ON VISUAL SELECTIVE ATTENTION
By

MARGARET CARTHAS PETRY

August 1995

Chairperson: Dr. Bruce Crosson
Major Department: Clinical and Health Psychology

Twenty-nine right-handed undergraduates completed the

covert orienting of visual attention task (COVAT) alone and

paired with two lexical tasks (reading aloud single words

and generating semantic associations to written words).

Subjects responded fastest to the COVAT alone, intermediate

to the COVAT-Reading condition, and considerably slower to

the COVAT-Semantic Association task. Because of left-

hemisphere dominance for language, differences between left

and right visual field responses were expected on the COVAT,

but none were found. With the exception of generating a

semantic associate for low-frequency words at a longer delay

between word and COVAT onset, interference from the language

task caused the COVAT validity effect for valid versus

invalid trials to disappear. Interference effects exhibited

during dual-task performance suggest that lexical processing










of familiar words, especially with an emphasis on semantics,

shares a common selective attentional resource with covert

orientation to visuospatial information.

CHAPTER 1
LITERATURE REVIEW


Attention

What is It?

According to William James (1890), "everyone knows what

attention is" (p. 403) yet psychology still lacks an

accepted definition of attention. James claimed that

attention "is the taking possession by the mind, in clear

and vivid form, of one out of what seem several simultaneously

possible objects or trains of thought" (p. 403). Based on

James' description, attention has been commonly

characterized as the capacity for selective processing of

information (Kinchla, 1980; Navon, 1985).

In 1971, Posner and Boies refined the definition of

attention. They proposed that attention consists of three

components: arousal, selective attention, and vigilance.

Arousal is the general facilitation of cognitive processing

to any and all information, while selective attention is the

facilitation of cognitive processing to just a specific

source of information. Posner and Boies' definition of

selective attention corresponds to James' previously

mentioned popular description of attention. Vigilance is

the ability to sustain arousal and selective attention over









time. Posner and Boies conceptualized a hierarchical

arrangement for these three components of attention, such

that arousal is necessary in order to selectively attend

while arousal and selective attention are necessary for

sustained concentration. Although evidence exists that

arousal, selective attention, and vigilance depend upon

different neurophysiological systems (Moruzzi & Magoun,

1949; Pardo, Fox, & Raichle, 1991; Posner, Walker,

Friedrich, & Rafal, 1987; Watson, Valenstein, & Heilman,

1981), the three components of attention are often difficult

to distinguish in behavior, particularly with neurologically

intact individuals. Thus, it is not surprising that the

three components of attention are frequently referred to as

'attention' or the capacity for selective processing of

information (i.e., selective attention) disregarding arousal

and vigilance. Unfortunately, this lack of specification

often causes confusion especially when attempting to compare

findings across studies of attention.

Another important distinction that is frequently

ignored is between attention and intention. While attention

(i.e., selective attention) is the facilitation of cognitive

processing for a specific source of sensory information,

intention is the facilitation of cognitive processing for a

specific type of motor activation. Thus, cognitive

processing can be enhanced for subsequent sensation, such as

detection of visuospatial information at a particular

location, or for subsequent movement, such as pressing a key









with a particular hand. Attentional facilitation and

intentional facilitation result in faster responding

(Heilman, Bowers, Valenstein, & Watson, 1987; Posner, 1980).

Based on studies with brain injured patients, failure to

respond can result from damage to one of four anatomical

systems: neurons responsible for the processes between

cognition and sensation (i.e., for selective attention),

neurons responsible for the processes between cognition and

movement (i.e., for intention), neurons responsible for

actual sensation, or neurons responsible for actual movement

(Heilman et al., 1987). When evaluating an individual's

performance, the effects of attention and intention, as well

as sensory and motor functioning, need to be considered.

How does Attention Operate?

Theories regarding how attention operates vary along

two main dimensions: (a) whether attention is a single

resource or multiple resources and (b) whether or not

attention is shared among cognitive tasks within a single

resource or between multiple resources (Green & Vaid, 1986;

Hiscock, 1986). From the various combinations (i.e., single

unshared resource, single shared resource, multiple unshared

resources, and multiple shared resources), four types of

information-processing theories emerge.

Welford's (1952) single channel theory and Broadbent's

filter model (1958) are examples of the single unshared

resource theory of attention. Humans are hypothesized to

have a single resource for selective processing of










information. This resource cannot be shared among

concurrent tasks. Attention can switch rapidly between

tasks but at a cost. Based on this theory, dual task

performance will always be worse (e.g., slower, more errors)

than performance on a single task.

Many researchers have proposed a single shared resource

theory of attention (Kahneman, 1973; Moray, 1967; Norman &

Bobrow, 1975). Although humans are hypothesized to have a

single resource of attention, this resource can be shared or

divided among concurrent tasks. Performance on simultaneous

tasks is hypothesized to require more attention than

performance on each individual task. However, impairments

in selective processing of information will only be observed

when the total demand for this resource exceeds the

available supply.

Other experimenters (Allport, Antonis, & Reynolds,

1972; Friedman & Polson, 1981; Wickens, 1984) have

postulated a multiple unshared resources theory of

attention. According to this type of theory, humans are

hypothesized to have multiple resources for selective

processing of particular information. For example, Wickens

proposes separate resources for visual and auditory

information as well as for verbal and spatial information.

Friedman and Polson claim a separate resource for each

cerebral hemisphere. Contrary to the single unshared

resource theory where attention could not be shared among

tasks, unshared now refers to the capacity for selective





processing of information within a particular resource of

attention. Attention can be shared between tasks that

utilize a particular resource (e.g., left cerebral

hemisphere); however, attention cannot be shared between

resources (e.g., left and right cerebral hemispheres).

While performing concurrent tasks, impairments will only

occur if the supply of one or more resources is exhausted.

If there is little overlap in task resource demand, then

dual-task performance will be essentially equivalent to

performance on each individual task.

Navon and Gopher (1979) proposed a multiple shared

resources theory of attention. Humans are hypothesized to

have multiple resources for selective processing of specific

types of information. In addition, these separate supplies

of attention are shared, such that part of one resource's

supply can be re-allocated temporarily in order to assist in

the processing of a type of information that is exhausting

another resource's supply. Based on this theory, dual-task

performance will tend to be unimpaired and similar to

performance on each of the individual tasks. Although

impairments may occur during performance of concurrent tasks

when compared to performance on each individual task, Navon

and Gopher advocate pairing these tasks with additional

tasks in order to discover whether the impairments are

spurious or actually the result of a depleted common

resource. For instance, in order to discover whether task A

and task B utilize a common processing resource, Navon and









Gopher recommend obtaining single-task and dual-task

performance with tasks A and B as well as with task A and a

task similar in difficulty to task B (i.e., task C). If

impairments result during concurrent performance of tasks A

and B as well as during tasks A and C when compared to

performance on each task individually, Navon and Gopher

suggest having subjects systematically vary the amount of

attention allocated to task A in a subsequent series of

dual-task experiments with B and with C. If the impairment

(e.g., slowed reaction time) varies systematically with the

amount of attention allocated to task A, then the two tasks

(e.g., tasks A and B) are presumed to depend on a common

resource. If the impairment does not vary systematically

with the amount of attention allocated to the primary task,

then the two tasks (e.g., tasks A and C) are presumed to use

independent resources.

Across these four types of theories, attention is

assumed to be essentially limited in overall amount of

supply. Although it has been shown that performance on

certain combinations of tasks is better than performance on

each of the individual tasks (Kinsbourne, 1970), the supply

of attention is presumed to fluctuate but not to increase

indefinitely. In an experimental situation, fluctuation of

attention can be minimized by asking human subjects to give

their best effort while performing the task(s) (Friedman &

Polson, 1981). To avoid fatiguing subjects, adequate rest

breaks should be provided.










Review of the recent dual-task literature revealed at

least six studies pertaining to the shared/unshared

resource/resources theories of attention. Two articles

(Gladstones, Regan, & Lee, 1989; Pashler, 1992) with

differing interpretations and results offered support for

the single unshared resource theory of attention. One

article (Ballesteros, Manga, & Coello, 1989) upheld the

single shared resource theory of attention. The multiple

unshared resources theory of attention was strengthened by

two articles (Friedman, Polson, & Dafoe, 1988; Herdman &

Friedman, 1985) but weakened by another article (Pashler &

O'Brien, 1993). Finally, no studies directly supported or

refuted the multiple shared resources theory of attention.

A lack of evidence for this latter theory may reflect

difficulties developing and implementing a reasonable

methodology for adequate testing. Navon and Gopher (1979)

advocate numerous dual-task experiments with subtle changes

of isolated variables. All articles but Herdman and

Friedman (1985) will be reviewed in detail. This latter

article will be omitted; instead, more recent articles,

where each author is the first author, will be described

(Friedman, Polson, & Dafoe, 1988; Herdman, 1992).

Single Unshared Resource Theory

According to the single unshared resource theory of

attention (Welford, 1952; Broadbent, 1958), humans are

hypothesized to have a single resource for selective

processing of information. This resource cannot be divided









or shared across concurrent tasks; thus, dual-task

performance will always be worse than performance on a

single task.

Support for Welford's (1952) single channel theory has

been offered by Pashler (1992). Although Welford's theory

can be considered an example of a single unshared resource

theory of attention, Pashler rejects this classification.

Instead of characterizing attention as a single mental

resource, Pashler suggested that it is composed of separate

mechanisms: one for facilitation of cognitive processing

for a specific source of sensory information (i.e.,

selective attention) and another for response selection

(i.e., intention).

In accordance with Welford (1952), Pashler (1992)

stated that a single mental mechanism exists for selecting

responses. This proposed mechanism can handle response

selection for only one task at a time. When presented with

concurrent tasks, response selection occurs for the primary

task. Upon completion of this process and initiation of

response production for the primary task, response selection

will commence for the secondary task. Impaired performance

in dual-task conditions was hypothesized to result from a

bottleneck in the response selection mechanism.

In Pashler's (1992) model, response selection occurs

after perception and prior to response production. Neither

perception nor response production is limited in










information capacity; therefore, more than one item can be

perceived and more than one response can be produced.

Pashler (1991) presented subjects with a concurrent

paradigm, where they were required to press a button to a

tone as well as report a briefly displayed letter. Reaction

time was measured in the tone task, while accuracy of oral

response was measured in the letter task. Because subjects

were able to quickly shift their attention in order to

accurately detect a letter while pressing a button to a

tone, Pashler maintained the distinction between selective

attention (i.e., facilitation of cognitive processing for a

specific source of sensory information) and response

selection (i.e., intention). Aside from concluding that

response selection (pressing a button) does not interfere

with selective attention (detecting a letter), Pashler did

not further address the relationship between selective

attention and response selection (i.e., intention).

Gladstones et al. (1989) claim support for a single

unshared resource theory of attention, despite obtaining

comparable performance during single-task and dual-task

conditions. Gladstones and colleagues studied the effects

of stimulus and response modality on the single- and dual-

task performance of "two forced-paced serial reaction time

tasks" (p. 1) where subjects were required to respond as

rapidly as possible to concurrent presentation of two

stimuli.










The stimuli were presented either visually or

auditorily. The visual stimulus was presentation of one of

three lights varying in color to either the left or right

side of space. The auditory stimulus was binaural

presentation of one of three tones varying in pitch. In

response to the visual stimuli, subjects responded either

manually or vocally. The manual response required subjects

to press one of three specified keys with either the left or

right hand. The vocal response required subjects to say
"a", "b", or "c". Subjects were administered all

combinations of these stimuli and responses in a single-task

and dual-task format.

Prior to actual performance on the experimental tasks,

ten subjects received extensive practice with each single-

and dual-task condition. Practice continued until the

shortest interstimulus interval (ISI) was determined,

allowing subjects to respond at a 93.94% accuracy level on

three or more successive trials. For single-task

conditions, the ISI was approximately 700 ms. For dual-task

conditions, the ISI was approximately 1400 ms. Practice and

test sessions were distributed over several weeks. Each

session lasted between 60 and 90 minutes. Total

participation required approximately eight hours. Nine

subjects were right-hand dominant, and one subject was left-

hand dominant.

Gladstones et al. (1989) measured performance accuracy.

They reported that single-task performance was not









significantly different from dual-task performance. They

interpreted this result based on an alternative

conceptualization of Welford's single channel theory (1952)

but they did not provide a rationale. According to

Gladstones et al., comparable single-task and dual-task

performance supports the single channel theory of

information processing (i.e., a single unshared resource

theory of attention). Many researchers (Allport et al.,

1972; Wickens, 1984) interpret such a finding as evidence

against the single channel theory and as support for a

multiple channel theory of information processing (e.g.,

multiple unshared or multiple shared attentional resources).

In addition, Gladstones et al. claimed that if dual-task

performance had exceeded single-task performance, then a

multiple channel theory would have been supported.

Gladstones et al. did not account for the numerous examples

in the literature of dual-task decrement, in comparison to

single-task performance.

Aspects of the methodology utilized by Gladstones et

al. (1989) may account for their finding. Subjects in this

study received extensive practice, yielding performance at

an optimal level and eliminating possible interference

effects that reflect the operation of underlying processes.

In addition, subjects were given twice as much time to

respond to stimuli in the dual-task condition as they were

allowed in the single-task condition. Finally, simultaneous

presentation of concurrent stimuli tends not to produce









differences between dual-task conditions; however, when

certain larger stimulus onset asynchronies are utilized,

differences are observed (Herdman, 1992).

Overall, the evidence provided by the recent dual-task

literature for the single unshared resource theory of

attention is not very strong.

Single Shared Resource Theory

The single shared resource theory of attention

(Kahneman, 1973; Moray, 1967; Norman & Bobrow, 1975)

proclaims that a single resource of attention can be shared

or divided among concurrent tasks. Impairments in selective

processing of information will only be observed when the

total demand for this resource exceeds the available supply.

Ballesteros et al. (1989) investigated the effects of

detecting nonmatching nonwords and nonmatching lines on

unilateral hand tapping. Sixteen subjects between the ages

of 20 and 23 years were selected for participation in this

study, based on their detection rate of nonwords and lines

differing from reference stimuli. Of the sixteen subjects,

eight were quick to identify the nonmatching stimuli. The

remaining eight subjects performed this task slowly. All

subjects were right-hand dominant.

In the single-task condition, subjects tapped as

quickly as possible with their right hand and then with

their left hand. In one of the dual-task conditions,

subjects tapped with one hand while they used the other hand

to point to the nonwords that differed from the reference









nonword. In a related dual-task condition, subjects tapped

one hand but pointed to the letter of the nonword that made

it differ from the reference nonword. In another dual-task

condition, subjects tapped one hand while they used the

other hand to point to lines that differed in orientation

from the reference line. In the final dual-task condition

which relates to the previous condition, subjects tapped one

hand while they pointed to the part of the line causing it

to differ from the reference line. The number of taps as

well as the accuracy of detecting nonmatching nonwords and

nonmatching lines were collected over a 20 second period for

each dual-task condition. In addition, the number of taps

over a 20 second period was obtained for the single-task

condition with each hand.

Pertinent results from Ballesteros et al. (1989)

indicated a significant decline in hand-tapping when it was

paired with detection of the different nonwords; this effect

occurred only for the group of subjects who were fast to

detect nonmatching stimuli. Subjects who were slow to

detect different stimuli exhibited a significant decline in

hand-tapping when it was paired with detection of the

different lines (subject group by nonmatching task

interaction). In comparison to the single-task condition

(only hand-tapping), a significant reduction was observed

for right hand-tapping when it was paired with another task

(i.e., nonword and line detection tasks). Left hand-tapping

was comparable for the single- and dual-task conditions.









Significant differences between tapping hand and dual-task

performance with the nonword and line detection tasks were

not discovered.

Because performance decrement was identified in some

dual-task conditions in comparison to the single task

condition, Ballesteros et al. (1989) interpreted these

findings as support for Kahneman's (1973) limited

attentional resource theory. In accordance with a single

shared resource theory of attention, Ballesteros et al.

reported that impairments resulted after the demand for

attentional resources exceeded the available supply. They

explained the reduction of right hand-tapping performance

across the dual-task conditions as an indication of left-

hemispheric dominance for concurrent performance of a right-

handed motor task and these cognitive tasks (detection of

nonmatching nonwords and lines).

Methodological concerns may account for Ballesteros et

al.'s (1989) findings, which are difficult to explain. They

obtained a limited sampling of each single- and dual-task

condition; that is, only 20 seconds of behavior (one trial).

Future studies would benefit from the administration of more

than one trial to ensure a more representative behavior

sample. Additionally, this experiment utilized the

detection of nonmatching "language" and "spatial" stimuli

for its primary task in the dual-task condition. These

elementary tasks may not have engaged the corresponding

hemispheres sufficiently to cause expected dual-task









decrements (i.e., reduction in left and right hand-tapping

performances when paired with detection of nonmatching

nonwords or lines for all subjects).

Multiple Unshared Resources Theory

According to the multiple unshared resources theory of

attention (Allport et al., 1972; Friedman & Polson, 1981;

Wickens, 1984), humans have multiple resources for selective

processing of particular information. Attention can be

shared between tasks that utilize a particular resource, but

not between resources.

Support for a multiple unshared resource theory of

attention has been provided by Friedman and her colleagues

(Friedman & Polson, 1981; Friedman, Polson, Dafoe, &

Gaskill, 1982; Herdman & Friedman, 1985) as well as by other

researchers (Hardyck, Chiarello, Dronkers, & Simpson, 1984;

Hellige & Wong, 1983). More specifically, these researchers

claim that independent processing resources exist for each

cerebral hemisphere. Processing resources can be shared

among tasks utilizing one hemisphere, but not between

hemispheres. When performing concurrent tasks, impairments

tend to result from tasks primarily using resources from the

same hemisphere in comparison to tasks using resources from

different hemispheres. Same hemisphere tasks reportedly

deplete available resources, causing impaired dual-task

performance in comparison to single-task performance.

Different hemisphere tasks reportedly have sufficient

resources for single-task and dual-task performances,









yielding comparable results. According to Friedman and

Polson (1981), two tasks share a common resource when

performance increases in the emphasized task and performance

decreases in the other task. If this performance trade-off

does not occur, then the two tasks presumably do not share a

common processing resource. This theory has been primarily

supported by evidence from pairs of perceptual or cognitive

tasks (e.g., reading aloud and remembering nonwords paired

with pressing a key indicating whether laterally presented

nonwords were the same or different; reading aloud and

remembering nonwords paired with remembering lateralized

tones).

A recent study conducted by Friedman et al. (1988)

caused them to entertain the possibility that independent

processing resources may exist for cognitive and motor tasks

in addition to the independent resources existing for left

and right hemisphere tasks. Friedman et al. investigated

the effects of reading and remembering nonwords on

unilateral finger tapping. Eight right-handed male subjects

were selected for participation in this study, based on a

right-hand advantage for tapping and on a reading advantage

for nonwords presented to the right rather than the left

side of space.

In the single-task condition for nonwords, subjects

initially received a warning tone which was followed by a

fixation point at the center of a screen. The fixation

point remained present for 7 s, then it was replaced by









three five-letter nonwords (consonant-vowel-consonant-vowel-

consonant). The nonwords were displayed for 5 s. During

this interval, subjects were instructed to read the nonwords

aloud. Following this period, the nonwords were replaced by

the fixation point that remained present for 5 s.

Subsequently, a second tone sounded, signaling subjects to

recall the nonwords presented in this trial.

In the single-task condition for finger tapping,

subjects tapped as quickly as possible with the index finger

of one hand for 17 s. A tone signaled the beginning and

ending of this period. Tapping was assessed for the left

and right index finger.

In the dual-task condition, the nonword and finger-

tapping tasks were combined. Upon hearing the tone,

subjects began tapping as rapidly as possible with one index

finger. Subjects kept their eyes focused on the central

fixation point, awaiting presentation of the three nonwords.

When the nonwords appeared on the screen, subjects read them

aloud. During subsequent presentation of the fixation

point, subjects presumably rehearsed the to-be-recalled

nonwords while focusing their eyes at the center of the

screen. Following another tone, subjects ceased finger-

tapping and recalled the nonwords. Task emphasis was

manipulated in the dual-task condition. In one condition,

subjects were paid more money for their memory performance

(i.e., number of nonwords correctly recalled). In the other

dual-task condition, subjects were paid more money for their









tapping performance. Friedman et al. (1988) assumed that

subjects would devote more attentional resources to

performance of the task that would pay them substantially

more money.

Results revealed that subjects remembered more nonwords

when concurrently tapping with their left finger than with

their right finger. Regardless of finger used, subjects

remembered more nonwords when the nonword task was

emphasized than when the tapping task was emphasized.

Following correction for the significant difference in

finger tapping (i.e., more taps for the right than left

finger), results indicated that subjects significantly

reduced their rate of tapping for both fingers when they

concurrently read nonwords aloud in comparison to when they

fixated on a centrally located point. While reading

nonwords aloud and tapping, subjects' rate of tapping for

both fingers increased when the tapping task was emphasized

and then decreased when the nonword task was emphasized.

When concurrent rehearsal of nonwords and finger tapping was

compared to the other two dual-task conditions (i.e.,

fixation and tapping; reading and tapping), subjects

significantly reduced their rate of tapping with larger

decrements observed for the right finger. When task

emphasis was manipulated, a performance trade-off occurred

for the reading and tapping condition. Performance trade-

offs did not occur for the fixation and tapping condition or

for the rehearsal and tapping condition.










Friedman et al. (1988) interpreted their findings from

the finger tapping and nonword reading condition as evidence

of left-hemisphere involvement in the coordination of

speaking and finger movements, accounting for the comparable

decrement in left and right finger tapping during concurrent

reading. These researchers also concluded that separate

resources are required for finger tapping and for rehearsal

of nonwords. If these behaviors shared a common resource,

then performance trade-offs would have resulted when one

task was emphasized over the other.

From the results of five dual-task experiments, Pashler

and O'Brien (1993) offered evidence against a multiple

unshared resource theory of attention and support for a

single unshared resource theory of response selection. One

pertinent experiment from this article (experiment 3) will

be described in detail.

In their third experiment, Pashler and O'Brien (1993)

studied the effects of reading nonwords on reaction time to

lateralized visual targets. This experiment lacked single-

task conditions, utilizing only dual-task conditions. The

dual-task condition required subjects to keep their eyes

focused upon crossed lines at the center of a computer

monitor screen. For each trial, a six letter nonword

(consonant-vowel-consonant-consonant-vowel-consonant)

replaced the crossed lines. The nonword remained present

for 200 ms. After a stimulus onset asynchrony (SOA) of 50,

150, 500, or 1000 ms, a white disk was presented at a fixed










position in either the upper left, lower left, upper right,

or lower right quadrant of the computer screen. The disk

remained present for 100 ms. Twenty-four right-handed

subjects were instructed to read aloud each nonword and to

press one of four designated keys (one assigned to each

quadrant) when they detected the disk. Subjects responded

with a specified finger of the left hand to upper and lower

left-sided disks and with a specified finger of the right

hand to upper and lower right-sided disks. Reaction times

were recorded for both tasks. Subjects were instructed to

respond rapidly and accurately to the stimuli of each task.

Of the 24 subjects, 12 received a mixed presentation of

disks to the left and right sides of the computer screen

during dual-task performance. For the other 12 subjects,

disks were presented only to the left or only to the right

side of the screen. In this latter condition, side of disk

presentation alternated between dual-task blocks.

The primary finding from this experiment was a

significant increase in reaction times (i.e., slower

responses) for reading nonwords and for responding to disks

at the shorter SOA intervals. No significant differences in

reaction time were obtained between left- and right-sided

disks or between mixed or blocked presentation of the disks.

Thus, at longer SOA intervals, subjects read nonwords faster

and reacted faster to disks. Subjects responded comparably

to left- and right-sided targets while concurrently reading

nonwords.










According to Pashler and O'Brien (1993), results from

their third experiment reflect response selection

interference. Interference reportedly occurs because a

response to a second task cannot happen until a response is

selected for the first task. Interference typically delays

responding to the second task. It may or may not affect

responding to the first task (Pashler & Johnston, 1989).

Response selection interference is most likely observed at

shorter SOA intervals. This interference has been termed

the psychological refractory period (PRP) effect, after

Welford (1952).

Because no significant differences were detected

between left- and right-handed reaction times to respective

left- and right-sided disks during concurrent nonword

reading, Pashler and O'Brien postulated that response

selection is a single mechanism operating within the brain.

If separate mechanisms existed for each hemisphere as

postulated by Friedman and Polson (1981), then impaired

performance should have resulted for right-handed reaction

times to right-sided disks while reading nonwords (same

hemisphere tasks) in comparison to left-handed reaction

times to left-sided disks while reading nonwords (different

hemisphere tasks).

Results from the other four dual-task experiments in

Pashler and O'Brien (1993) corroborated their finding of

response selection interference. Briefly, the first

experiment measured the effects of orally responding "high"










or "low" to respective binaural tones on reaction time to

lateralized visual disks (this latter task was used in their

third experiment). The second experiment utilized the tasks

from the first experiment but the order of presentation was

reversed. The fourth experiment measured the effects of

reaction time to left-sided visual disks on reaction time to

right-sided visual disks. The fifth experiment measured the

effects of reaction time to left-sided visual disks on

reaction time to say whether two centrally presented words

rhymed or not (i.e., "rhyme" or "no rhyme"). The primary

findings from these four experiments were decreased reaction

times (i.e., faster responding) at longer SOA intervals for

the first task (experiments 2 and 5) and for the second task

(experiments 1, 2, 4, and 5). Unexpectedly, reaction times

were faster for the first task (experiments 1 and 2) and for

the second task (experiment 1), when disks were presented to

the right quadrants.

To account for findings from other studies that have

demonstrated exacerbation of dual-task interference when the

tasks were performed primarily by the same cerebral

hemisphere rather than different hemispheres, Pashler and

O'Brien (1993) began by noting that these studies tended not

to require selection of separate responses to dual-task

stimuli. According to Pashler and O'Brien, response

selection interference is most pronounced when subjects have

to select between at least two possible responses for each

stimulus. These investigators questioned whether response









selection caused the interference observed with same

hemisphere tasks. After expressing doubt, they concluded

that the source of the interference for same hemisphere

tasks was still unknown. The study would have benefited

from single-task conditions, in order to determine just how

much interference was occurring in the various dual-task

conditions.

Unfortunately, none of these information-processing

theories of attention (i.e., single unshared resource,

single shared resource, multiple unshared resources, or

multiple shared resources) accounts for all experimental

findings. There are supporting as well as conflicting data

for each theory (Kinsbourne, 1981). For example, impaired

performance on concurrent tasks relative to performance on

each individual task provides support for the single

unshared resource theory of attention (Kerr, 1973; Klapp,

1979; Noble, Trumbo, & Fowler, 1967; Peters, 1977). For

example, in the study conducted by Noble et al. (1967),

undergraduate college students exhibited significant

interference during concurrent performance of a visual

pursuit tracking task and an auditory number monitoring

task, in comparison to performance only on the pursuit

tracking task. However, the single unshared resource theory

of attention does not account for findings of comparable

performance on concurrent tasks as well as on each

individual task (Allport et al., 1972; Spelke, Hirst, &

Neisser, 1976). For example, in the study conducted by









Allport et al. (1972), five third-year undergraduate Music

students were able to shadow a taped verbal passage while

sight-reading and playing an unfamiliar piece of music. In

this study (Allport et al., 1972), dual-task performance was

comparable to their single-task performances. Nonetheless,

these theories have served as heuristics for further

exploration of how attention operates.

Automatic and Controlled Processing of Information

An alternative heuristic for how attention operates has

also been used (Cowan, 1988; Neisser, 1967; Posner & Snyder,

1975; Schneider & Shiffrin, 1977; Shiffrin & Schneider,

1977). The operation of attention varies according to the

type of information to-be-processed and whether or not this

information can be processed automatically. Humans are

hypothesized to have a central (i.e., single sharable)

attentional resource for controlled processing of selected

information. This central resource is voluntarily directed

by the individual for processing a limited amount of

information. According to Cowan (1988), the focus of

attention is limited to approximately two or three items.

However, humans are also hypothesized to process some

information automatically. This information tends to be

well learned, and according to Cowan (1988), the information

is processed automatically under the control of long-term

memory. Although automatic processing reportedly does not

require attentional resources from the central supply, it

seems likely that at least a very small amount of attention









would be required for adequate routine processing. In

addition, automatic processing is less limited by capacity

restrictions; therefore, often more than three items can be

processed automatically. According to Allport (1990), "the

attempt to apply the same dichotomy (automatic/controlled)

to the whole range of dual-task concurrency costs appears to

have been, for most practical purposes, abandoned" (p. 640).

Anatomy

Up to this point in time, the information-processing

heuristics have essentially ignored the necessary anatomy

for the operation of attention. Since Friedman and Polson

(1981) hypothesized separate attentional resources for each

cerebral hemisphere, investigation of the anatomy

responsible for dual-task performance has remained dormant.

A heuristic which accounts for the anatomy needed for

attention within a concurrent task paradigm may better

account for the findings from this particular literature.

While such heuristics are lacking in the dual-task

literature, they exist in the hemispatial neglect and the

neuropsychological literatures.

Based on studies with brain-impaired humans and animals

demonstrating neglect of the contralesional side of space

despite intact sensory or motoric abilities, Heilman,

Watson, and Valenstein (1985) and Mesulam (1990) proposed

neuroanatomical models for arousal, selective sensory

attention, and selective motor intention. According to

Heilman et al., arousal (i.e., general readiness for sensory









and motor processing) is produced by activity of the

mesencephalic reticular formation (MRF). The MRF affects

cortical processing of sensory information directly by

diffuse polysynaptic projections to areas of the cortex

(i.e., primary sensory cortex, unimodal sensory association

cortex, prefrontal cortex, superior temporal sulcus,

inferior parietal lobule, and posterior cingulate gyrus)

possibly through cholinergic pathways or indirectly by

projections to the nucleus reticularis of the thalamus. The

MRF indirectly influences the cortex by inhibiting the

nucleus reticularis, resulting in the facilitation of

sensory information flow from the thalamus to the primary

sensory cortex. Sensory information proceeds from the

primary sensory cortex to unimodal sensory association

cortices for further processing. From the unimodal sensory

association cortices, information may advance to the

multimodal sensory association cortices directly or

indirectly after processing by the prefrontal cortex and the

superior temporal sulcus. Arousal is maintained by

projections from the prefrontal cortex and superior temporal

sulcus to the MRF and/or nucleus reticularis. The inferior

parietal lobule, prefrontal cortex, superior temporal

sulcus, and posterior cingulate gyrus have reciprocal

connections. The inferior parietal lobule receives

information about the individual's goals from the prefrontal

cortex, about the significance of the sensory information

and the individual's biological needs from the posterior









cingulate gyrus, and about the sensory stimulus from

specific neurons within the inferior parietal lobule (e.g.,

enhancement neurons), resulting in selective sensory

attention.

According to Heilman et al. (1985), selective motor

intention requires arousal, resulting from inhibition of the

nucleus reticularis by the MRF. Selective motor intention

(i.e., the facilitation of cognitive processing for

production of a motor response to a meaningful stimulus)

depends on projections from the centromedian

parafascicularis nuclear complex of the thalamus (CMPF) to

the dorsolateral prefrontal cortex to the nucleus

reticularis. This CMPF-prefrontal cortex-nucleus

reticularis system produces inhibition of the nucleus

reticularis, leading to a reduction of its inhibitory output

to the ventrolateral nucleus of the thalamus. Subsequently,

facilitation of motor preparedness occurs between the

ventrolateral thalamic nucleus and the motor and premotor

cortices. The dorsolateral prefrontal cortex receives

information about the individual's biological needs from the

anterior cingulate gyrus, and in turn, influences motor and

premotor cortices, resulting in selective motor intention.

The anterior cingulate gyrus has reciprocal connections with

the prefrontal cortex. Adjacent to the anterior cingulate,

the supplementary motor area projects to the basal ganglia

(Heilman, Watson, & Valenstein, 1993) which in turn project









to the ventrolateral and ventroanterior nuclei of the

thalamus.

Mesulam (1990) proposed a single network model for

spatially directed attention, incorporating neuroanatomy

necessary for arousal, selective sensory attention, and

selective motor intention. According to Mesulam, spatially

directed attention results from three main, reciprocally

connected cortical areas: dorsolateral posterior parietal

cortex, dorsolateral premotor-prefrontal cortex, and

cingulate gyrus. The dorsolateral posterior parietal cortex

along with associated areas (e.g., sensory association

cortex, superior temporal sulcus) are primary for selective

sensory attention, while the dorsolateral premotor-

prefrontal cortex and associated areas (e.g., superior

colliculus) are primary for selective motor intention. The

cingulate gyrus determines the significance of sensory or

motoric information. These three main neuroanatomic regions

are connected with the striatum and the pulvinar nucleus of

the thalamus. In addition, these main regions receive

projections from the brainstem and thalamic components of

the reticular activating system, providing arousal.

Subsequently, Posner and Petersen (1990) and Mirsky,

Anthony, Duncan, Ahearn, and Kellam (1991) proposed general

theories of attention along with corresponding anatomy.

Based on visual detection studies with brain-impaired and

neurologically normal humans and monkeys, Posner and

Petersen suggested the existence of two attention systems.









The posterior attention system controls covert orienting of

visual attention to select spatial locations (i.e.,

selective sensory attention), while the anterior system is

involved in attention to language and regulation of the

posterior attention system. The posterior attention system

is composed of the following anatomical areas: primary

visual cortex, posterior parietal lobe, superior colliculus

and/or surrounding midbrain areas, and pulvinar of the

thalamus. According to Posner and Petersen, the posterior

parietal lobe disengages selective attention from its

current focus after receiving information from the primary

visual cortex, then the superior colliculus and/or

surrounding midbrain areas move attention to the location of

interest, and the pulvinar engages the point of interest.

The primary component of the anterior attention system is

the anterior cingulate gyrus. Posner and Petersen purported

that the right cerebral hemisphere contains the necessary

anatomy for alertness (i.e., arousal), influencing the

posterior and anterior attention systems.

Mirsky et al. (1991) proposed four components of

attention (i.e., focus-execute, encode, shift, sustain),

based on principal components analyses of neuropsychological

data from neurological, psychiatric, and normal adults and

from normal children. Following these analyses, Mirsky and

his colleagues referred to the human and animal neurological

literature to identify likely anatomical structures that

could account for the proposed attention components.









According to Mirsky et al., focusing on the environment

(i.e., selective sensory attention) and executing responses

is controlled by the inferior parietal cortex and its

connections to the corpus striatum. The superior temporal

sulcus is also involved in the focus component of attention.

Encoding of information is managed by the hippocampus and

amygdala, while shifting selective attention is controlled

by the prefrontal cortex, medial frontal cortex, and

anterior cingulate gyrus. In addition to arousal,

sustaining selective attention is dependent upon the tectum,

mesopontine regions of the reticular formation, and midline

region and reticular nucleus of the thalamus.

Amongst these neuroanatomical models of attention

(Heilman et al., 1985; Mesulam, 1990; Mirsky et al., 1991;

Posner & Petersen, 1990), a number of commonalties exist.

These models tend to propose similar structures for arousal

(i.e., mesencephalic reticular formation and nucleus

reticularis of the thalamus), selective sensory attention

(i.e., inferior parietal lobule) and selective motor

intention (i.e., dorsolateral prefrontal cortex). In

addition, the cingulate gyrus is identified as an important

structure, influencing selective sensory attention and

selective motor intention. Most important, attention is not

purported to exist within only one brain structure. Rather,

attention is the result of a network of specific anatomical

areas and pathways activated by corresponding sensory

stimuli, motoric responses, and motivational factors.









It is proposed that the results of dual-task studies,

investigating how attention operates, would be explained

best by considering the interaction of activated

neuroanatomical structures and pathways. Such a proposal is

outlined in the Experiment and Hypotheses section of this

chapter. However, the viability of this or any other theory

concerning the operation of attention would be best tested

with studies that monitor brain activity in addition to

cognitive performance for subsequent correlation of results.

In addition to experiments recording event related

potentials during performance of dual-task paradigms,

functional neuroimaging studies with magnetic resonance

imaging (MRI), positron emission tomography (PET), and

single photon emission computed tomography (SPECT) offer

means for elucidating the relationship among attention,

cognitive functioning, and involved anatomy.



Language

Language has been defined as the "code whereby ideas

about the world are represented through a conventional

system of arbitrary signals for communication" (Bloom &

Lahey, 1978, p. 4). This code has been conceptualized as

consisting of three main components: phonology, semantics,

and syntax (Mesulam, 1990). Phonology is the process that

sequences individual phonemes (i.e., sounds) to form words.

Semantics is the process that associates meaningful concepts

with their corresponding symbolic and lexical









representations (i.e., words). Syntax is the process that

designates relationships between words (Nadeau, 1988).

Anatomy

In the 1800s, investigators began studying brain-

injured patients in an attempt to discover areas of the

brain necessary for language. Broca (1964a/1861;

1964b/1861) provided some important evidence by describing

patients with impaired language production following a

common lesion in the third frontal convolution of the left

hemisphere. In 1874, Wernicke presented data associating a

lesion in the temporal-parietal cortex of the left

hemisphere with impaired language comprehension (Benson,

1985). Subsequently, these two regions of the left

hemisphere became known respectively as Broca's and

Wernicke's areas. These areas along with the

neuroanatomical connections between them were incorporated

into a model of how language functions within the left

hemisphere by Wernicke. In 1885, Lichtheim postulated a

model with two separate processing routes for speech

production: a phonological route and a semantic route.

In 1965, Geschwind integrated past and then-current

theories and studies of brain-injured humans demonstrating

impairments of language processing or speech production as

well as studies of brain-impaired animals, yielding a

neuroanatomical model of language. In order to read aloud a

written word, Geschwind proposed that written language is

initially processed by the primary visual cortex









(Brodmann's area 17) and then by the visual association

cortices (Brodmann's areas 18 and 19) in each hemisphere.

Subsequently, visual information is transferred to the left

angular gyrus directly in the left hemisphere and indirectly

from the right hemisphere by way of the following two

pathways: (1) right visual association cortices to right

angular gyrus to left angular gyrus via the corpus callosum

and (2) right visual association cortices to left visual

association cortices via the corpus callosum to left angular

gyrus. The left angular gyrus transforms the visual word

form (i.e., written language) into the auditory word form

(i.e., spoken language) and vice versa. After visual-to-

auditory transformation, this information proceeds to

Broca's area via the arcuate fasciculus for motor

programming of subsequent speech. From Broca's area,

processing occurs in the primary motor cortices of the

inferior frontal lobes (i.e., Brodmann's area 4), resulting

in formation of actual movements of the face and throat and

subsequently in spoken language. According to Geschwind,

Wernicke's area is the association cortex for auditory

information in the left hemisphere. Damage to these areas

or their connections will tend to yield predictable patterns

of impairment, otherwise known as aphasia syndromes.
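
For readers who prefer a compact restatement, the stages of Geschwind's proposed route for reading a written word aloud, as just described, are listed below in a minimal Python sketch. The stage labels are paraphrases of the preceding paragraph rather than Geschwind's own terminology, and the list is purely descriptive, not a computational model.

# Compact restatement of Geschwind's proposed route for reading a written
# word aloud, as summarized above. Stage labels paraphrase the text; this is
# an ordered list for reference, not a computational model.

GESCHWIND_READING_ALOUD_ROUTE = [
    "primary visual cortex (Brodmann's area 17)",
    "visual association cortices (Brodmann's areas 18 and 19)",
    "left angular gyrus: visual word form to auditory word form",
    "Broca's area via the arcuate fasciculus: motor programming",
    "primary motor cortex (Brodmann's area 4): spoken output",
]

if __name__ == "__main__":
    for step, stage in enumerate(GESCHWIND_READING_ALOUD_ROUTE, start=1):
        print(f"{step}. {stage}")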

Subsequently, support has amassed from clinical-

pathological (Taylor, 1969) and carotid sodium amytal

studies (Wada & Rasmussen, 1960) as well as from cerebral

stimulation (Rasmussen & Milner, 1977) and blood flow









(Lassen, Ingvar, & Skinhoj, 1978) studies demonstrating the

dominance of the left hemisphere for primary language

functions in approximately 96 % of right-handed individuals

and 70 % of left-handed individuals (Rasmussen & Milner,

1977).

Currently, the search continues to identify important

structures and pathways within the brain for phonological,

semantic, and syntactic aspects of language. In 1990,

Mesulam postulated a neuroanatomical model, involving two

main areas (i.e., Wernicke's and Broca's areas), to account

for these components of language. According to Mesulam,

Wernicke's area (i.e., posterior one-third of the left

superior temporal gyrus in Brodmann's area 22) along with

associated areas in the left temporal and parietal cortices

(i.e., Brodmann's areas 37, 39, and 40) are primarily

involved in processing of phonological, lexical (i.e.,

word), and semantic information. Within these anatomical

areas, information pertaining to sound, word, and meaning

relationships is stored, allowing transformation of auditory

information into visual word forms and vice versa as well as

access to semantics. Broca's area (i.e., peri-sylvian area

of the inferior frontal gyrus, otherwise known

as Brodmann's area 44) and associated areas in the left

frontal cortex (i.e., Brodmann's area 6 in the pre-motor

cortex and Brodmann's areas 45, 47, and 12 in the prefrontal

cortex) are involved mainly in the processing of

articulatory and syntactic information. Within this region









of the left hemisphere, transformation of lexical

information into articulatory sequences occurs as well as

arrangement of words into sentences. Within Brodmann's area

6, the supplementary motor cortex is involved in planning

and initiation of speech. Brodmann's areas 12, 45, and 47

in the prefrontal cortex are involved in the retrieval of

words from superordinate categories. Mesulam's model

includes interconnections between the Wernicke's, Broca's,

and their associated areas, allowing for normal language

functioning.

Rapcsak, Rothi, and Heilman (1987) provided evidence

that separate routes exist for phonological and lexical

reading. They studied one patient who experienced a

cerebral hemorrhage, damaging the posterior portion of the

middle and inferior gyri of the left temporal cortex (i.e.,

Brodmann's areas 21 and 37) and underlying white matter.

This patient exhibited selective impairment of phonological

reading; thus, he was unable to read most nonwords.

However, he was able to read lexically the majority of words

with regular and irregular print-to-sound correspondence.

According to Rapcsak et al., the phonological reading route

involves the transfer of visual information from the ventral

visual association cortex to Wernicke's area via the middle

and inferior gyri and/or underlying white matter of the left

temporal cortex. The patient's lesion in these connection

areas and pathways disrupted the phonological system. The

lexical reading route involves the transfer of visual









information from the ventral visual association cortex to

the angular gyrus and then to Wernicke's area, enabling the

patient to read irregular words.

Further support for the existence of separate

processing routes for language is offered by Coslett,

Roeltgen, Rothi, and Heilman (1987). These researchers

studied four patients with preserved repetition, impaired

comprehension, and fluent but semantically empty speech

(i.e., transcortical sensory aphasia), following different

areas of neuroanatomical damage (i.e., cortical atrophy and

symmetrically enlarged ventricles from a primary

degenerative dementia; left posterior parietal infarction;

bitemporal involvement from herpes simplex encephalitis; and

left basal ganglia and thalamic hemorrhage). Based on

results from assessment of language functioning (i.e.,

spontaneous speech, auditory comprehension, repetition,

naming, reading aloud, reading comprehension, lexical

decision making, and syntax), Coslett et al. concluded that

the following three processing components support repetition

and reading aloud: phonology without lexical (whole word)

access, semantics with lexical and syntactic access, and

lexical and syntactic access without semantics. Consistent

with an impairment in the mechanism for semantics despite

lexical and syntactic access, two of the patients exhibited

limited comprehension but they were able to read nonwords

because of an intact mechanism for phonology without lexical

access as well as read irregular words because of an intact









mechanism for lexical and syntactic access. Based on data

from two other patients, impairments of lexical, syntactic,

and semantic access were noted. One of the two patients

with complete assessment data exhibited difficulty reading

irregular words along with semantic and syntactic errors,

suggesting impairment of the mechanism for semantics with

lexical and syntactic access. However, this patient

demonstrated relative preservation in the ability to read

nonwords, suggesting an intact mechanism for phonology

without lexical access.

More recently, a case study has been reported that

identified the importance of Brodmann's area 37 (i.e., that

part of area 37 consisting of the posterior portion of the

left middle temporal gyrus) in the lexical-semantic system

(Raymer, Foundas, Maher, Greenwald, Morris, Rothi, &

Heilman, 1995). Following primary lesion of Brodmann's area

37, a patient developed significant word-finding problems

(anomia), despite relatively intact comprehension. More

specifically, the patient experienced impairments in oral

and written naming of pictures, oral naming to auditory

definition, writing to dictation, as well as oral reading of

real words and nonwords. Raymer et al. concluded that

Brodmann's area 37 appears fundamental in the provision of

access to lexical information within the lexical-semantic

route for language processing.

In support of Lichtheim's model (1885), McCarthy and

Warrington (1984) provided evidence that separate routes









exist for phonological (i.e., lexical in this case) and

semantic processing of information. They studied two

patients with impaired repetition and relatively preserved

spontaneous speech (i.e., conduction aphasia) and one

patient with the opposite pattern, that is, relatively

preserved repetition and impaired spontaneous speech

(transcortical motor aphasia). The repetition and speech

production of the two patients with conduction aphasia

improved only when semantic processing was emphasized, while the

repetition and speech production of the patient with

transcortical motor aphasia worsened under the same

conditions.

Based on findings from cerebral blood flow studies with

normal humans as well as cognitive studies with normal and

brain-impaired humans, Petersen, Fox, Posner, Mintun, and

Raichle (1989) proposed a neuroanatomical model for lexical

processing of written and spoken language. According to

Petersen et al., a written word is initially processed in

the left and right primary visual cortices. Subsequently,

information is transferred to the visual association areas

(extrastriate cortex) for lexical access. To read aloud a

written word, the information is transferred from the visual

association areas to the supplementary motor area, sylvian

cortex, and left premotor area in the frontal lobe for motor

programming and articulatory coding. Following motor

programming and articulatory coding, processing continues in

the primary motor cortex for actual motoric output (i.e.,









speech) of the written word. To produce a semantic

associate to a written word, information is transferred from

the visual association cortices to the left anterior

inferior frontal cortex for generation of semantic

associations to the written word. Subsequently, this

information is transferred to the supplementary motor area,

sylvian cortex, and left premotor area in the frontal cortex

for motor programming and articulatory coding. Further

processing occurs in the primary motor cortex, yielding

motoric output of the semantic associate. Petersen et al.

did not find activation in either Wernicke's area or the

angular gyrus while normal subjects viewed, read, or

generated a semantic associate to written words, contrary to

Geschwind's model. Steinmetz and Seitz (1991) suggested

that methodological procedures, employed by Petersen et al.

(i.e., intersubject averaging of positron emission

tomography scans), increased data variability, obscuring

likely activation of these areas.

To summarize the findings from the literature

pertaining to language functioning and corresponding

neuroanatomy within the left hemisphere, the posterior

cortices have been primarily associated with phonological,

lexical, and semantic processing. More specifically,

Wernicke's area appears important for phonological

processing. Brodmann's area 37 appears important for access

to lexical information within the lexical-semantic route for

language processing. The angular gyrus (Brodmann's area 39)










has been identified as a likely structure for the

transformation of visual word forms into auditory word forms

and vice versa (i.e., lexical processing) and for access to

meaning (i.e., semantic processing). Reading can occur by

way of any of the three following processing routes:

phonological, lexical without access to semantics, and lexical

with access to semantics. The frontal cortex in the left

hemisphere has been mainly associated with processes related

to output and syntax. More specifically, Broca's area has

been identified as a structure important in motor

programming for subsequent speech. The supplementary motor

area is hypothesized to be involved in the initiation and

planning of speech. The arcuate fasciculus connects

Wernicke's with Broca's area. Similar to models of

attention, language is proposed to be the result of a

network of specific anatomical areas and pathways.

Model of Recognition, Production, and Comprehension of
Written Words
Ellis and Young (1988) proposed a fairly comprehensive

model for the recognition and production of spoken and

written words that accounts for the findings from Rapcsak et

al. (1987), Coslett et al. (1987), Raymer et al. (1995),

McCarthy and Warrington (1984) as well as the results from

many other studies documenting similar dissociations within

the field of reading (e.g., Schwartz, Saffran, & Marin,

1980; Warrington & Shallice, 1979). Because the present

investigation will examine selective attention as it relates









to language at the lexical and semantic level with single

written words, only a subset of Ellis and Young's composite

model is presented in Figure 1-1. From this model, only the

components necessary for reading a familiar word aloud and

for producing a similar meaning or a highly associated word

(i.e., a semantic association) will be defined. In

addition, neuroanatomical structures and pathways proposed

to be important for these components, based on the above

literature review, will be included.

According to Ellis and Young (1988), the visual

analysis system identifies letters as letters and notes

their position within a written word (visual association

cortices; Brodmann's areas 18 and 19). This system also

begins to identify whether the letters form a familiar or an

unfamiliar word. The visual input lexicon contains the

representations of familiar words in their written form

(Brodmann's area 37; angular gyrus, Brodmann's area 39),

while the auditory input lexicon contains the

representations of familiar words in their heard form. The

meanings of words are represented in a distributed semantic

system (McCarthy & Warrington, 1990) with access by way of

the angular gyrus (Brodmann's area 39). The speech output

lexicon contains the pronunciations of familiar words

(angular gyrus, Brodmann's area 39). The phoneme level

activates the motor programs for the phonemes in words

(Broca's area and surrounding frontal cortex; Brodmann's

areas 44, 45, 47, 12, and 6).









In order to read a familiar word aloud, the written

word enters the cognitive processing system from visual

sensation. Many of the word's physical properties are

preserved in a brief sensory store for up to several hundred

milliseconds (Cowan, 1988). Perceptual processing of the

word occurs next, within the visual analysis store. Because

the word is familiar, it is further processed by the visual

input lexicon. When the word is recognized as familiar, the

spoken form of the word can be retrieved from the speech

output lexicon. Once the correct pronunciation of the word

is obtained, then the phoneme store becomes activated for

subsequent motor programming of the word.

To produce a semantically associated word for a read

word, the written word receives sensory processing, then

perceptual processing within the visual analysis system.

Because the word is familiar, it activates the corresponding

entry in the visual input lexicon. The meaning of the

familiar written word is obtained within the semantic

system. In addition, synonyms or highly associated words

are coactivated by input from the semantic system to the

output lexicon and the most strongly activated associate can

be chosen for production. Correct pronunciation of the

semantic associate is triggered from the speech output

lexicon. Subsequently, the proper phoneme sequences are

activated for motor programming.

Based on Ellis and Young's (1988) model for the

recognition, comprehension, and naming of written words,










these two language tasks (reading a word aloud; producing a

semantic associate for a word that is read) presumably

require differing amounts of cognitive processing. After

sensation, reading a familiar word aloud reportedly requires

processing by four language components (visual analysis

store, visual input lexicon, speech output lexicon, phoneme

store). Producing a semantic association requires

processing by five language components (visual analysis

store, visual input lexicon, semantic system, speech output

lexicon, phoneme store), following sensory processing.

Therefore, it is proposed that producing a semantic

association requires more cognitive processing than reading

a word aloud.
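
To make the contrast between the two tasks explicit, the minimal Python sketch below lists the processing components assumed above for each task and counts them. The component names follow the terminology used in the preceding paragraphs; the counts simply restate the argument that producing a semantic associate involves one additional component (the semantic system).

# Minimal sketch: processing components assumed above for each language task
# (terminology from Ellis and Young, 1988). The counts restate the argument
# that producing a semantic associate involves one more component (the
# semantic system) than reading a familiar word aloud.

READ_ALOUD = [
    "visual analysis system",
    "visual input lexicon",
    "speech output lexicon",
    "phoneme level",
]

SEMANTIC_ASSOCIATE = [
    "visual analysis system",
    "visual input lexicon",
    "semantic system",
    "speech output lexicon",
    "phoneme level",
]

if __name__ == "__main__":
    print(f"Reading aloud: {len(READ_ALOUD)} components after sensation")
    print(f"Semantic association: {len(SEMANTIC_ASSOCIATE)} components after sensation")
    print("Additional component:", set(SEMANTIC_ASSOCIATE) - set(READ_ALOUD))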

The attentional demands of these two tasks (i.e.,

lexical processing with and without an emphasis on

semantics) are the subject of the current study. Because

task difficulty may influence performance on these tasks in

addition to, or rather than, the number of processing steps,

high and low frequency words will be used to decipher the

possible effects of these factors. Investigators have

demonstrated differences between high and low frequency

words when utilized in single-task and dual-task paradigms.

While reading a passage in a single-task paradigm, subjects

fixated longer on low frequency than high frequency words

(Just & Carpenter, 1980). In another single-task study

(Polich & Donchin, 1988), subjects responded slower to low

frequency than to high frequency words when required to make









a lexical decision (i.e., deciding whether or not a word is

a real word). Additionally, these subjects exhibited a

longer P300 (event-related potential) latency and a reduced

P300 amplitude when making a lexical decision about low

frequency words, in comparison to high frequency words.

Polich and Donchin concluded that word frequency affects the

length of time spent searching the visual input lexicon for

the presented word. In dual-task paradigms, subjects

responded slower when low frequency words were paired with a

secondary task than when high frequency words were paired

with a secondary task (Becker, 1976; Herdman, 1992; Herdman

& Dobbs, 1989).



Selective Attention and Language

While theorists have postulated varied relationships

between vigilance and language (Glosser & Goodglass, 1990;

Luria, 1977), two consistent theories have been proposed

regarding selective attention and language (Nadeau &

Crosson, in press; Ojemann, 1983) in addition to Posner and

Petersen's theory (1990; see Anatomy in the Attention

section). Based on research from electrical stimulation

mapping studies (e.g., Ojemann, 1975; Ojemann, 1977),

Ojemann (1983) postulated two unique roles for the thalamus

within the hemisphere dominant for language (usually the

left). According to Ojemann, the left thalamus selectively

directs attention to language (a) by activating the relevant









areas of the brain for subsequent language processing and (b)

by doing so in a simultaneous manner.

Nadeau and Crosson (in press) offered further support

for Ojemann's theory, while also expanding it. Following

examination of research on thalamic infarction, Nadeau and

Crosson proposed that the frontal lobe in the language

dominant hemisphere of the brain influences the nucleus

reticularis of thalamus via the inferior thalamic peduncle.

In turn, the nucleus reticularis influences the centromedian

nucleus of the thalamus, resulting in selective engagement

of cortical areas necessary for language processing (e.g.,

semantics) via diffuse connections of the centromedian

nucleus with various cortical regions. Meanwhile, the

frontal cortical-nucleus reticularis-centromedian system

reportedly holds other regions (i.e., those not necessary

for language) of the cortex in a state of relative

disengagement.

Nadeau and Crosson's (in press) theory suggests an

intrahemispheric mechanism for selective processing of

language, resulting from differential activation of the left

centromedian nucleus by the frontal lobe via the nucleus

reticularis. Posner and Petersen (1990) also proposed that

a frontal cortical structure (i.e., anterior cingulate

gyrus) is involved in attention to language as well as

regulation of the posterior attention system. While

Friedman and Polson (1981) concur that the left hemisphere

contains a mechanism and resources for selectively










processing language, they propose that the left hemisphere

hosts additional resources for selective processing of

information relevant to other left hemisphere functions

(e.g., motor control of the right hand). A similar left

intrahemisphere mechanism for directing selective attention

to information preferentially processed by the left

hemisphere has been considered by Petry, Crosson, Rothi,

Bauer, and Schauer (1994). Friedman and Polson differ from

this group of researchers with their view that the

attentional resources of the hemispheres cannot be

differentially activated, despite inconsistent data from

functional neuroimaging studies (Binder, Rao, Hammeke,

Frost, Bandettini, Jesmanowicz, & Hyde, 1995; Demonet,

Chollet, Ramsay, Cardebat, Nespoulous, Wise, Rascol, &

Frackowiak, 1992; Petersen et al., 1989). Friedman and

Polson (1981) hypothesized that the left and right

hemispheres become equally activated for all information

processing even when one of the hemispheres participates at

a minimum level in the cognitive activity.

Based on results from a correlational study

investigating selective visual attention and language

functioning with neurologically normal and left-hemisphere

brain-damaged subjects, Petry et al. (1994) raised the

hypothesis that a left intrahemispheric mechanism with

associated resources exists for selective processing of left

hemisphere functions, such as language and detection of

right-sided visuospatial information. Relative to the










neurologically normal group, their left-hemisphere brain-

damaged subjects exhibited a deficit in the ability to

direct selective attention to the right side of space in

order to detect visuospatial information.

According to Petry et al. (1994), the left-hemisphere

brain-damaged subjects did not exhibit impairments on a test

of visual neglect or on a test of visual vigilance, when

compared to the neurologically normal control group.

Additionally, for the left-hemisphere brain-damaged group,

only the deficit of re-directing selective attention to the

right side of space significantly correlated with more

severe impairments on measures of verbal fluency, auditory

comprehension, naming, and repetition.

Petry et al. (1994) essentially hypothesized a multiple

unshared resources theory of attention, where a mechanism

within each hemisphere selectively engages various resources

for a particular type of information processing. Resource

limitations exist within each processor (hemisphere). These

researchers considered possible effects of an injury to the

left intrahemisphere mechanism. They postulated that damage

to this mechanism may result in inefficient engagement of

resources for processing of information in the left

hemisphere. Inefficient processing may then exacerbate

impairments from damage already present in structures and

pathways necessary for the function. For example, disorders

of attention and language are commonly reported after injury

to left hemisphere. Petry et al. hypothesized that the









level of difficulty experienced by language disordered

patients may be in part determined by an inefficiency in the

selective engagement of needed cortical processes. While

impaired selective engagement may influence language

dysfunction, it was not purported to be the primary cause of

language difficulties. Petry et al. maintained that damage

to important areas and pathways involved in language

processing (e.g., Broca's area, Wernicke's area, and arcuate

fasciculus) was the primary cause of language dysfunction.

As an alternative to a general left intrahemispheric

mechanism for selective engagement of information

processors, Petry et al. (1994) also considered the possible

existence of a more refined attention mechanism. Instead of

a mechanism where attention resources are shared across all

left hemisphere processing tasks, additional unsharable

mechanisms may coexist for selective processing of

particular types of information within the left hemisphere.

For example, there may be separate attention mechanisms for

language processing and for processing right-sided

visuospatial information. The resources from these separate

mechanisms cannot be shared. Thus, the mechanisms for

processing language and right-sided visuospatial information

may coexist rather than relate within the left hemisphere.

In an attempt to clarify whether language is subserved

by a general selective attention mechanism that also engages

other cognitive systems (i.e., right-sided visuospatial

information) or by a specific selective attention mechanism









that engages only language within the left hemisphere, the

current project will employ the dual-task technique.



Dual-Task Paradigm

The dual-task method consists of comparing concurrent

performance on two tasks to performance on each individual

task. Traditionally, this paradigm has been used to study

the operation of selective attention, leading to the

development of the previously reviewed theories of attention

(i.e., single unshared resource, single shared resource,

multiple unshared resources, multiple shared resources, and

automatic/controlled processing). More recently, this

methodology has been used to investigate the lateralization

of functions within the brain. However, Ojemann (1983)

suggests that the dual-task procedure more applicably

measures the lateralization and operation of attentional

mechanisms rather than the lateralization of cognitive

functions.
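
The comparison at the core of the method can be illustrated with a minimal Python sketch that computes a dual-task cost as the difference between mean reaction time in a concurrent condition and mean reaction time in the corresponding single-task condition. The reaction-time values are hypothetical, and the sign convention (positive indicating interference, negative indicating enhancement) is an assumption adopted here for illustration.

# Minimal sketch with hypothetical data: the core dual-task comparison is
# concurrent performance versus single-task performance on the same task.
# Positive cost is read here as interference, negative as enhancement; the
# sign convention is an illustrative assumption.

from statistics import mean

single_task_rt_ms = [412, 398, 430, 405, 421]   # hypothetical reaction times
dual_task_rt_ms = [455, 470, 448, 462, 451]     # hypothetical reaction times

def dual_task_cost(single_rts, dual_rts):
    """Mean reaction time under dual-task conditions minus mean RT alone."""
    return mean(dual_rts) - mean(single_rts)

if __name__ == "__main__":
    cost = dual_task_cost(single_task_rt_ms, dual_task_rt_ms)
    print(f"Dual-task cost: {cost:.1f} ms (positive = interference)")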

Assumptions

Regarding the operation of selective attention, a

variety of dual-task methodologies, such as dichotic

listening, visual half-field presentation, and pairing of a

manual task with a cognitive task, have been employed.

While performing two concurrent tasks, the allocation of

attention is manipulated by varying task priority or task

difficulty (Green & Vaid, 1986). Subjects may be instructed

to focus their attention primarily on task A or B, or









equally on both tasks. When attention is primarily directed

to task A, results may then reflect the influence of task A

on task B. Typically, performance on task B will worsen

reflecting less attention to this task. When attention is

directed equally to both tasks, results then reflect the

bidirectional influence of each task. Additionally,

different levels of difficulty may be introduced for either

task A or B. It is generally presumed that more processing

resources are expended on a task with more focused

attention. For example, when task A receives more attention

than task B, typically performance on task B will decline.

It is also assumed that more attentional resources are

activated with increasingly difficult tasks. However, a

point is eventually reached when additional resources no

longer benefit an individual's performance, that is, when

the costs outweigh the benefits.

Because the current project will investigate whether

language is subserved by a general or a specific selective

attention mechanism within the left hemisphere, five

relevant studies (Herdman, 1992; Hiscock, Cheesman, Inch,

Chipuer, & Graff, 1989; Milner, Jeeves, Ratcliff, &

Cunnison, 1982; Posner, Early, Reiman, Pardo, & Dhawan,

1988; Posner, Inhoff, Friedrich, & Cohen, 1987) will be

reviewed. These studies were selected because they utilized

a language task in their methodology and because they were

representative of typical findings in the dual-task

literature. In comparison to single-task performance, dual-









task performance typically produces interference (Friedman

et al., 1988; Herdman, 1992; Hiscock et al., 1989; Milner et

al., 1982; Posner et al., 1988; Posner et al., 1987). The

interference tends to be greater for same hemisphere tasks

(Milner et al., 1982; Posner et al., 1988) than for

different hemisphere tasks. The next most frequent finding

in this literature is no change in performance from the

single-task to the dual-task conditions (Gladstones et al.,

1989; Pashler, 1992). Occasionally, concurrent performance

of same hemisphere tasks will yield enhancement in

comparison to the single-task performance (Hiscock et al.,

1989; Pasher & O'Brien, 1993). Overall, most studies report

a mixture of these types of findings; for example,

interference in one condition and no change in another

(Ballesteros et al., 1989; LaBarba, Bowers, Kingsberg, &

Freeman, 1987; Urbanczyk, Angel, & Kennelly, 1988). In

comparison to single-task performance, the following

variables tend to enable concurrent performance:

integration of the task into long-term memory, use of simple

stimuli and responses, dissimilar stimuli and responses for

tasks, and predictable presentation of task stimuli (Cohen,

1993; Hiscock, 1986).

Hiscock et al. (1989) investigated the effects of

reading aloud on a variety of unilateral finger tapping

tasks. Forty-eight right-handed subjects participated in

this experiment. In the single-task condition for reading,

subjects were instructed to read aloud, for 15 s, a summary









paragraph selected from an introductory psychology textbook.

Additionally, subjects were told to remember as much as

possible from the paragraph. Following the 15 s reading

period, subjects were asked to say three to five key words

reflecting the content of the paragraph.

In the single-task conditions for finger tapping,

subjects performed assorted tasks based on combinations of

the following variables: rate (speeded tapping vs.

consistent tapping), movement (repetitive tapping of one key

vs. alternating tapping between two keys), and hand used

(left vs. right). Thus, subjects received one trial (15 s)

of the following tasks: speeded-repetitive tapping,

speeded-alternating tapping, consistent-repetitive tapping,

and consistent-alternating tapping. This combination of

tasks was performed separately with the left hand and with

the right hand.

In the dual-task conditions for reading aloud and

finger tapping, the single-task condition for reading aloud

was paired with each combination of the single-task

conditions for finger tapping. Additionally, task emphasis

(reading vs. finger tapping) was manipulated, such that,

subjects received one trial (15 s) of each dual-task

condition where reading was the most important task as well

as one trial of each dual-task condition where finger

tapping was the most important task. Therefore, each dual-

task condition varied across four variables: rate,

movement, hand used, and task emphasis. An example of one









dual-task trial was speeded, repetitive, left-handed finger

tapping paired with reading aloud and remembering aspects of

the paragraph, where performance on the reading task was

emphasized.
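
The factorial structure of these dual-task conditions can be enumerated with the short Python sketch below. The sketch only lists the sixteen combinations of rate, movement, hand, and task emphasis implied by the description above; it does not reproduce Hiscock et al.'s (1989) actual trial ordering or procedure.

# Enumeration of the factorial structure of the dual-task conditions
# described above (rate x movement x hand x task emphasis). This lists the
# 16 combinations only; it is not Hiscock et al.'s (1989) procedure.

from itertools import product

rates = ["speeded", "consistent"]
movements = ["repetitive", "alternating"]
hands = ["left", "right"]
emphases = ["reading emphasized", "tapping emphasized"]

conditions = list(product(rates, movements, hands, emphases))

if __name__ == "__main__":
    for rate, movement, hand, emphasis in conditions:
        print(f"{rate}-{movement} tapping, {hand} hand, {emphasis}")
    print(f"Total dual-task conditions: {len(conditions)}")   # 16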

Results from Hiscock et al. (1989) revealed that

reading aloud decreased the rate of speeded finger tapping

(7 %) but increased the rate of consistent finger tapping (3

%), regardless of which task was emphasized. In both conditions, the

right hand was significantly more affected than the left

hand; thus, when tapping rapidly, right-handed tapping was

significantly reduced as compared to left-handed tapping,

and when tapping consistently, right-handed tapping

significantly improved as compared to left-handed tapping.

Hiscock et al. interpreted this mixed finding of right-hand

interference (when reading and rapidly tapping) and of

right-hand facilitation (when reading and consistently

tapping) as evidence for the role of the left-hemisphere in

coordinating speech with right-hand movements in right-

handed individuals. Based on results from this study,

Hiscock et al. proposed that the left-hemisphere adjusts

right-hand movements in order to coordinate this activity

with speech.

Analysis of reading performance indicated a trend

toward faster reading and fewer errors in the single-task

conditions in comparison to the dual-task conditions when

reading was or was not emphasized. Hiscock et al. (1989)









concluded that the dual-task paradigm was a valid method for

the investigation of lateralized verbal processing.

Because Hiscock et al. (1989) compared such a large

number of dual-task conditions within subjects, the number

of trials per condition was constrained. The conclusions

from this study would be bolstered with replication as well

as with utilization of a design allowing for more than one

trial (15 s) in each dual-task condition.

Herdman (1992) investigated the effects of reading

words aloud as well as determining whether words were real

words or nonwords on reaction time to a tone discrimination

task. The methodology and results from Herdman's second

experiment will be described in detail. This second

experiment is a replication as well as an extension of the

first experiment listed in this article.

In the single-task condition for reading words aloud,

subjects were required to keep their eyes focused on a dot

that was located at the center of a computer monitor screen.

Subjects initiated a trial by pushing and holding a key with

the index finger of their preferred hand. After 500 ms, the

fixation dot was replaced by a high, medium, or low

frequency word, ranging from 4 to 6 letters. The words were

selected from Kucera and Francis (1967). Subjects were

instructed to read each word aloud as quickly as possible,

initiating removal of the word from the screen. Subjects

were given a maximum of 2 s to respond. Response times were

recorded.









In the single-task condition for determining whether

words were real words or nonwords (i.e., lexical decision

making), the procedure was the same as the one used in the

single-task condition for reading except for the following

differences: on half of the trials, the fixation dot was

replaced by a 4 to 6 letter nonword whereas on the other

half of the trials, the dot was replaced by a high, medium,

or low frequency 4 to 6 letter word. When subjects were

presented with a nonword, they responded aloud with

"nonword". When they received a real word, subjects said

"word". Because response times were recorded, subjects were

instructed to respond as quickly as possible.

In the single-task condition for the tone

discrimination task, subjects were instructed to keep their

eyes fixated on the central dot. Subjects started a trial

by pushing and holding a key with the index finger of their

preferred hand. After 500 ms, the fixation dot was replaced

by five asterisks, simulating the visual complexity of word

presentation in this position during the dual-task

conditions. Additionally, either a low or high pitch tone

was presented. When subjects received a low pitch tone

(distractor), they were instructed to continue holding the

key which they pushed to start the trial. When they

received a high pitch tone (probe), subjects were told to

remove their finger from the key as quickly as possible.

Reaction times were recorded. For distractor tones,

reaction times would be approximately 2 s (i.e., the maximum









amount of time allowed for a trial). Reaction times for

probe trials would be less than 2 s.

In one dual-task condition, reading words aloud was

paired with the tone discrimination task. In the other

dual-task condition, the lexical decision task was paired

with the tone discrimination task. For these dual-task

conditions, subjects were told to keep their eyes focused on

the central dot. Subjects initiated a trial by pushing and

holding a key with the index finger of their preferred hand.

After 500 ms, the fixation dot was replaced by a word

(reading task) or by either a word or a nonword (lexical

decision task). In addition, one of two tones was

presented. When subjects received a distractor tone, they

were instructed to respond to the letter string (i.e., read

it aloud or say either "word" or "nonword") while they

continued holding the key which started the trial. When

subjects received a probe tone, they were instructed to stop

performing the letter string task and to remove their finger

from the start key as rapidly as possible. On probe trials,

subjects did not provide a response for the reading task or

for the lexical decision task. This paradigm has been

referred to as the dual-task change paradigm because

subjects forfeit a response to the primary task in order to

respond only to the secondary task. The tone discrimination

task began 0, 84, or 167 ms after the presentation of a

letter string (i.e., the primary task).
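
The response logic of a single trial in this dual-task change paradigm is sketched below in Python. The timing values mentioned in the comments come from the description above; the data structure and function are illustrative assumptions and do not represent Herdman's (1992) actual software.

# Illustrative sketch of the response logic for one trial in the dual-task
# change paradigm described above. The 500 ms fixation period, SOAs of 0, 84,
# or 167 ms, and 2 s response window come from the text; the data structure
# and function are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Trial:
    letter_string: str   # word (reading task) or word/nonword (lexical decision)
    tone: str            # "probe" (high pitch) or "distractor" (low pitch)
    soa_ms: int          # tone onset 0, 84, or 167 ms after the letter string

def required_response(trial: Trial, primary_task: str) -> str:
    """Return what the subject should do on this trial."""
    if trial.tone == "probe":
        # Probe tone: release the start key; no response to the letter string.
        return "release the start key as quickly as possible"
    if primary_task == "reading":
        return f"read '{trial.letter_string}' aloud and keep holding the key"
    return "say 'word' or 'nonword' and keep holding the key"

if __name__ == "__main__":
    example = Trial(letter_string="house", tone="probe", soa_ms=84)
    print(required_response(example, primary_task="reading"))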









Primary task (reading and lexical decision) and

stimulus onset asynchrony (0, 84, and 167 ms) were between

subjects factors. Therefore, each subject was randomly

assigned to one of six dual-task conditions. Each of the

six dual-task conditions was performed by 16 subjects; thus

a total of 96 subjects participated in this study. In

addition, these subjects performed the corresponding single-

task conditions. For example, subjects assigned to reading

and 0 ms condition performed the single-task condition for

reading words aloud, the single-task condition for tone

discrimination and the dual-task condition for reading and

tone discrimination with a stimulus onset asynchrony (SOA)

of 0 ms between the tasks. Handedness of subjects was not

assessed.

Results from Herdman (1992) revealed significantly

slower responding to the probe tones during the tone

discrimination task when it was paired with the reading or

lexical decision tasks than when it was performed alone. In

addition, subjects took longer to respond to probe tones

paired with the lexical decision task in comparison to probe

tones paired with the reading task. Subjects also responded

slower to probe tones at later SOAs (84 and 167 ms) than to

probe tones presented simultaneously with a letter string (0

ms). Based on prior research demonstrating slower reaction

times to an auditory probe when it was paired with a low

frequency word in comparison to a high frequency word

(Becker, 1976; Herdman & Dobbs, 1989), Herdman (1992)









concentrated his analyses on the reaction times to probe

tones occurring with or after the presentation of high and

low frequency words. Results indicated that subjects took

longer to respond to probe tones paired with low frequency

words in comparison to probe tones paired with high

frequency words. Herdman concluded that "(a) lexical access

requires attentional resources and (b) more resources are

required to recognize low- as compared with high-frequency

words" (p. 466).

Milner et al. (1982) measured the effects of counting

backward as well as manually adjusting the orientation of

screws on reaction times to lateralized light flashes. In

the single-task condition, right-handed subjects were

required to keep their eyes continually focused on a

fixation spot appearing at the center of a computer monitor

screen. Following a brief warning tone, a small light flash

was randomly presented to a fixed location at the left or

right of the central fixation spot. Each light flash lasted

for 2 ms. Subjects were instructed to press a key with

either their left or right thumb whenever they detected a

light flash. On half of the trials, subjects responded with

their left thumb. On the other half of the trials, they

responded with their right thumb.

In one of the dual-task conditions utilized by Milner

et al. (1982), twelve right-handed subjects counted

backwards by 3, 4, or 6 while detecting light flashes.

Light flashes were presented only when subjects were in the









process of saying a number. In the other dual-task

condition, sixteen right-handed subjects adjusted the

orientation of screws to match the orientation of other

screws while detecting light flashes. Light flashes were

presented only when subjects were in the process of turning

a screw. In both dual-task conditions, an instruction to

focus on the central fixation spot replaced the warning

tone, prior to each trial.

Results from Milner et al. (1982) revealed a

significant increase in reaction times for each dual-task

condition in comparison to the single-task condition. When

the backwards counting task was paired with the light flash

detection task, longer reaction times were obtained for light

flashes appearing in the right visual field than in the left

visual field. When the manual orientation task was paired

with the light flash detection task, no differences existed

in the reaction times for the right and left visual fields.

This study replicated results obtained by Rizzolatti,

Bertoloni, and Buchtel (1979). According to Milner et al.,

the effect of the verbal task (i.e., slower responding to

right-sided light flashes) does not seem to be the result of

a "nonspecific cognitive overload" (p. 594) because the

manual orientation task did not also cause selectively

slower responses to right-sided light flashes.

Posner and his colleagues conducted two dual-task

experiments that utilized a task involving language and a

covert orienting of visual attention task (COVAT; Posner et









al., 1988; Posner et al., 1987). Before explaining the

dual-task experiments, the COVAT must be explained.

Covert Orienting of Visual Attention Task

The covert orienting of visual attention task (COVAT)

was developed by Posner in 1980. Since this time, it has

been used in an extensive number of studies with

neurologically normal subjects as well as brain-injured

patients. The task, which employs nonverbal symbols and

requires only a key-press response, yields a consistent

pattern of results with normal subjects (Posner, 1980;

Posner et al., 1988). For brain damaged patients, specific

patterns of impairment result following lesions of the left

or right hemisphere (Posner et al., 1984; Rafal & Posner,

1987).

The COVAT requires subjects to keep their eyes focused

on a fixation point at the center of a computer monitor

screen ("+") and to press a key as soon as they see the

target ("*"). The target appears in a box either to the

left or to the right of the central fixation point. On a

majority of trials, subjects receive a cue (i.e., the

brightening of one of the two boxes) indicating where the

target is likely to appear. For most cued trials (i.e., 80

% of cued trials), the target appears in the box which has

brightened (valid trials), but for some cued trials (i.e.,

20 % of cued trials), the target appears in the box which

has not brightened (invalid trials). On other trials,









neither box is brightened before the target appears (no cue

trials; see Figure 1-2).
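
To make the trial structure concrete, the Python sketch below generates one randomized block of COVAT-style trials in which 80 % of cued trials are valid and 20 % are invalid, as described above. The proportion of no-cue trials and the block size are illustrative assumptions, not the parameters of any particular study.

# Illustrative sketch of one randomized block of COVAT-style trials. The
# 80/20 split of valid and invalid trials among cued trials follows the text;
# the number of cued and no-cue trials per block is assumed for illustration.

import random

def generate_block(n_cued=40, n_no_cue=10, seed=0):
    rng = random.Random(seed)
    n_valid = int(round(0.80 * n_cued))        # 80 % of cued trials are valid
    trials = []
    for i in range(n_cued):
        cue_side = rng.choice(["left", "right"])
        valid = i < n_valid
        target_side = cue_side if valid else ("left" if cue_side == "right" else "right")
        trials.append({"cue": cue_side, "target": target_side,
                       "type": "valid" if valid else "invalid"})
    for _ in range(n_no_cue):
        trials.append({"cue": None, "target": rng.choice(["left", "right"]),
                       "type": "no cue"})
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    counts = {}
    for trial in generate_block():
        counts[trial["type"]] = counts.get(trial["type"], 0) + 1
    print(counts)   # e.g., {'valid': 32, 'invalid': 8, 'no cue': 10}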

When all three trial types are randomly intermixed within

blocks, neurologically normal subjects respond fastest to

valid trials, slowest to invalid trials, and intermediately

to no cue trials (Posner, 1980; Posner et al., 1988). The

advantage in reaction time for valid over invalid trials has

been termed the validity effect (Posner, Sandson, Dhawan, &

Shulman, 1989), and it has been attributed to the effects of

the cue. In addition to alerting subjects to an upcoming

target, the cue is a sensory stimulus that covertly directs

attention to a particular location. The cue affects

processing within 50 ms of onset (Posner & Cohen, 1984). If

the target subsequently appears at the cued location, then

processing is facilitated. If the target appears elsewhere,

then processing is inhibited. The effects of facilitation

and inhibition of processing occur 100 ms after the onset of

the cue even when the cue's accuracy is reduced to chance,

that is, when the target has a 50 % chance of appearing in

the box that brightened (Friedrich & Rader, 1990).
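
A minimal Python sketch of how the validity effect might be computed from COVAT reaction times follows. Only the definition of the effect (mean reaction time on invalid trials minus mean reaction time on valid trials) comes from the description above; the reaction-time values are hypothetical and merely reproduce the typical ordering of valid, no-cue, and invalid trials.

# Minimal sketch: computing the validity effect from COVAT reaction times.
# The definition (mean invalid RT minus mean valid RT) follows the text; the
# reaction-time values are hypothetical and only mimic the typical ordering
# (valid fastest, invalid slowest, no cue intermediate).

from statistics import mean

rts_ms = {
    "valid": [350, 342, 361, 348],     # target at the cued (brightened) box
    "invalid": [415, 402, 421, 409],   # target at the uncued box
    "no cue": [380, 374, 391, 377],    # neither box brightened
}

validity_effect = mean(rts_ms["invalid"]) - mean(rts_ms["valid"])

if __name__ == "__main__":
    for trial_type, rts in rts_ms.items():
        print(f"{trial_type:>7}: mean RT = {mean(rts):.1f} ms")
    print(f"Validity effect (invalid - valid): {validity_effect:.1f} ms")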

In accordance with instructions, subjects do not move

their eyes during this task because the most efficient

strategy is to keep the eyes centered (Posner, 1980; Posner

& Cohen, 1984). With their eyes centrally fixated, subjects

attend, covertly and selectively, to cued locations for

rapid target detection. Posner, Snyder, and Davidson (1980)

have characterized selective visual attention as a spotlight









varying in size with task demands. The spotlight enhances

processing of the selected information.

Posner and his colleagues described three mental

operations involved in covert orienting of visual selective

attention (Posner & Rafal, 1987; Posner, Walker, Friedrich,

& Rafal, 1984; Rafal & Posner, 1987; Rafal, Posner,

Friedman, Inhoff, & Bernstein, 1988). Covert orienting

involves the mental shifting of the spotlight (i.e.,

selective attention) to the source of interest. First,

selective attention 'disengages' from its current focus;

next attention 'moves' to the location of interest; and

finally attention 'engages' the point of interest.

Regarding performance on the COVAT, when the cue is

presented, subjects move selective attention covertly to the

cued location and engage attention in anticipation of the

upcoming target. If the target appears at this engaged

location (valid trials), then subjects tend to respond

quickly. However, if the target does not appear at this

engaged location (invalid trials), then subjects must

disengage attention, move it to the target location, and

engage the target, resulting in a slower response time. If

a cue is not presented (no cue trials), then subjects move

attention to the target and engage it at this location.

According to Posner and his collaborators, subjects do

not engage attention at the central fixation point; thus,

when they are presented with a cue or target, covert

orientation begins with movement of selective attention









rather than with disengagement. However, it appears that

some amount of engagement is present at the central fixation

point as well as at each possible target location. At the

fixation point, it is hypothesized that a small amount of

attention is engaged since subjects are instructed to keep

their eyes focused there. In addition, the most efficient

strategy for target detection entails keeping one's

attention centered at the beginning of each trial. At the

possible target locations, a small amount of attention is

engaged prior to the presentation of a cue or target

enabling detection of these stimuli.

Posner et al. (1987) investigated whether selective

attention to visuospatial information was managed by an

isolated resource or by a general resource that also

selectively attends to language. In addition to the COVAT,

Posner et al. used one of two tasks involving language for

the dual-task condition. One task required subjects to

count the number of auditorily presented words beginning

with a particular phoneme. The other task required subjects

to count backwards by one from an auditorily presented

three-digit number. The phoneme counting task was performed

silently, while the counting backward task was performed

aloud.

Eight neurologically normal subjects and nine patients

with parietal lesions performed the COVAT paired with the

phoneme counting task. Eight different neurologically

normal subjects and five patients from the parietal lesion









group performed the COVAT paired with the counting backwards

task. Before and after the dual-task condition, all

subjects completed the COVAT alone.

Posner et al. (1987) hypothesized that if visuospatial

information is processed by an isolated attentional

resource, then subjects should still benefit from the cue on

valid trials during the dual-task condition. However, a

general increase in reaction times would be anticipated for

the dual-task condition "due to interference with output or

reliance on some very general common resource" (p. 108). If

visuospatial information is processed primarily by a general

attentional resource, then subjects should no longer benefit

from the cue on valid trials during the dual-task condition.

A general increase in overall reaction times would also be

expected.

Because patients with lesions of the parietal lobe have

been shown to have difficulty re-directing attention to the

side of space contralateral to their lesion (Posner et al.,

1984; Posner et al., 1987), these patients were expected to

respond even slower to invalid trials with contralateral

targets during the dual-task condition. This deficit was

expected, if visuospatial information and language are

processed by a general attentional resource. If slower

response times do not result for these trials during the

dual-task condition, then "the attention system common to

language and spatial orienting is quite different from that

used by spatial orienting alone" (p. 109).









Regarding results for the neurologically normal

subjects, responses were slower during the dual-task

conditions than during the COVAT condition. When the

phoneme counting task was paired with the COVAT, subjects

continued to benefit from the cue on valid trials. When the

counting backwards task was paired with the COVAT, normal

subjects benefited less from the valid cues. Posner et al.

(1987) suggested that the counting backward task was more

difficult than the phoneme counting task, thus interfering

with orienting toward valid COVAT cues.

Performance of the parietal damaged patients varied

with the target-onset interval. When the target appeared

100 ms after the cue, patients did not benefit from the cue

on valid trials during the dual-task conditions. However,

at 500 or 1000 ms, the patients benefited from the valid

cues. Response times tended to be longer during the dual-task conditions than during the COVAT condition. The parietal

damaged patients did not exhibit extraordinarily slow

responding to the invalid trials with contralateral targets.

Posner et al. (1987) suggested that orienting toward valid

cues at short intervals (100 ms) was more difficult than

orienting toward valid cues at longer intervals (500 ms).

Posner et al. (1987) concluded that the "spatial

orienting system must share some operations with the two

secondary tasks, causing a delay in orienting when they are

sufficiently difficult" (p. 112). According to Posner et

al., the results suggest an interaction of two attention









systems: (a) an isolated system and set of resources for

selective processing of visuospatial information and (b) a

general system and set of resources for selective processing

of visuospatial information and language. The general set

of resources reportedly directs orienting to all types of

information, while the resources specific to visuospatial

information permit subjective report of this information.

Posner et al. hypothesized that the frontal lobes were

important in the general system for selective processing of

visuospatial information and language.

During the dual-task conditions in Posner et al.

(1987), it is difficult to know what the response times to

the COVAT reflect because presentation of the language tasks

was not controlled. While performing the COVAT, subjects

responded to an audiotape presenting stimuli for the

language tasks. The language stimuli were not coordinated

with the COVAT; therefore, the COVAT data in the dual-task

condition probably reflect multiple influences. If a COVAT

trial occurred prior to processing language stimuli, then

some of the results may reflect the interference of the

COVAT on language. If a language stimulus occurred prior to

processing the COVAT, then some of the results may reflect

the interference of language on the COVAT. Additionally,

some of the results probably reflect the lack of interference between the COVAT and language.

Although Posner et al. (1987) reportedly wanted to

study selective attention, the vigilance component of









attention was likely influenced by including a large number of trials (e.g., 100, 300) in each block of the various conditions, possibly inducing fatigue and reducing vigilance in the later trials within each block. Surprisingly, Posner

et al. (1987) neglected to report whether the neurologically

normal subjects responded differently to the COVAT targets

appearing on the left and right side of space during the

dual-task conditions. When the COVAT and a language task are performed concurrently, differential processing demands are placed on the left and right cerebral hemispheres. The

left hemisphere primarily processes right-sided COVAT

targets as well as language, whereas the right hemisphere

primarily processes only left-sided COVAT targets. Because

of this difference in the amount of cerebral hemispheric

processing, slower reaction times would be expected for

right-sided COVAT targets, when compared to left-sided

targets, during a concurrent language task.

In a subsequent study, Posner et al. (1988) did examine

differential responding to left- and right-sided COVAT

targets during concurrent performance with a language task.

The language task required subjects to repeat aloud, with

minimal lagtime, material from an audiotaped book (i.e.,

"Lincoln"). Before and after completion of the COVAT alone,

20 normal subjects performed the COVAT paired with this

repetition task.

Results from Posner et al. (1988) revealed that

subjects responded slower to the 100 ms trials of the COVAT









during dual-task performance than during single-task

performance. In the single-task condition, subjects

responded comparably to left- and right-sided COVAT targets.

In addition, they responded faster to valid than to invalid

trials when the COVAT was performed alone, demonstrating the

validity effect. In the dual-task condition, Posner et al.

reported that subjects responded slower to invalid trials

with right-sided targets, after the no cue trials were

excluded from the dataset. During these particular trials,

subjects received a left-sided cue followed by a right-sided

target. Posner et al. interpreted this finding as evidence

of the effect of attention to language on visuospatial

orienting. Additionally, this latter dual-task finding was similar to, but less severe than, the pattern obtained by schizophrenics performing the COVAT alone.

An alternative interpretation (Posner & Early, 1990) has been offered for the slower responding to invalid trials with right-sided targets during the dual-task condition, instead of the one provided by Posner et al. (1988). Rather than subjects responding more slowly to invalid trials with right-sided targets, subjects may instead have responded faster to invalid trials with left-sided targets. The basic

finding was a significant difference between left- and

right-sided targets on invalid trials with faster reaction

times to left-sided targets and slower reaction times to

right-sided targets. Therefore, this finding of slower

responses to invalid trials with right-sided targets can be









interpreted as difficulty shifting attention from the left-

sided cue to the right-sided target (Posner et al., 1988).

Alternatively, this same finding as well as the finding from

Petry et al. (1994) can be interpreted as limited response

(i.e., attraction of covert attention) to right-sided cues,

resulting in faster responses to left-sided targets on

invalid trials (Posner & Early, 1990). When the difference

between valid and invalid trials is compared, comparable

differences are obtained for single-task performance with

left-sided targets and for single-task and dual-task

performance with right-sided targets. The difference in

reaction times for valid and invalid trials is reduced only

for dual-task performance with left-sided targets (i.e.,

absence of a validity effect). As a result, the interpretation that a limited response to (i.e., limited attraction of covert attention by) right-sided cues results in faster responses to left-sided targets on invalid trials may be more viable. However, subjects do not appear to display

slower reaction times to valid trials with right-sided cues

and targets. These subjects appear to benefit from right-

sided cues, facilitating fast reaction times to subsequent

right-sided targets (valid trials with right-sided cues and

targets). Additionally, subjects respond comparably to

valid trials with right- and left-sided cues and targets.
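To make the preceding comparison concrete, the following sketch illustrates how the validity effect (the difference between mean reaction times on invalid and valid trials) could be computed separately for each task condition and target side. The numbers and variable names are hypothetical placeholders chosen only for illustration; they are not data reported by Posner et al. (1988).

    # Hypothetical illustration of the validity-effect comparison discussed above.
    # Mean reaction times (ms) are invented placeholders, not data from Posner
    # et al. (1988).

    mean_rt = {
        # (condition, target_side, trial_type): mean reaction time in ms
        ("single", "left",  "valid"):   310, ("single", "left",  "invalid"): 355,
        ("single", "right", "valid"):   315, ("single", "right", "invalid"): 360,
        ("dual",   "left",  "valid"):   340, ("dual",   "left",  "invalid"): 345,
        ("dual",   "right", "valid"):   345, ("dual",   "right", "invalid"): 395,
    }

    def validity_effect(condition: str, side: str) -> int:
        """Invalid minus valid mean reaction time for one condition and target side."""
        return (mean_rt[(condition, side, "invalid")]
                - mean_rt[(condition, side, "valid")])

    for condition in ("single", "dual"):
        for side in ("left", "right"):
            print(condition, side, validity_effect(condition, side), "ms")

    # With placeholder values like these, the validity effect is comparable for
    # single-task left-sided targets and for single- and dual-task right-sided
    # targets, but is nearly absent for dual-task left-sided targets -- the
    # pattern described in the text.
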

In comparison to Posner et al. (1987), the study

conducted by Posner et al. (1988) is an improvement because

differential responding to left- and right-sided targets was









explored. However, Posner et al. (1988) still retained the

following two methodological flaws: (a) uncontrolled

presentation of the language task in relation to the COVAT

during the dual-task condition and (b) a large number of trials (i.e., 240) in each block of the various conditions.

When presentation of the language stimuli is not controlled

with regard to presentation of stimuli in the COVAT,

the data from the COVAT in the dual-task condition likely

reflect multiple influences (i.e., interference of the COVAT

on language, interference of language on the COVAT, and lack

of interference between the COVAT and language). When a

large number of trials are administered, vigilance may be

reduced in the selective attention task.

In addition to Posner and his colleagues (Posner et al., 1988; Posner et al., 1987), Pashler and O'Brien (1993) as

well as Milner et al. (1982) measured reaction times to

lateralized targets during concurrent performance of a

language task. These latter two experiments differed from

the Posner et al. studies in that sensory cues were not

administered prior to the presentation of left- and right-

sided targets. During dual-task performance, Pashler and

O'Brien found no significant differences in reaction times

to left- and right-sided targets. In contrast, Milner et

al. found significantly slower responses to right-sided

targets as compared to left-sided targets, during dual-task

performance. Pashler and O'Brien required subjects to read

nonwords while they also responded to one of four possible









lateralized targets. Milner et al. required subjects to

count backwards while they responded to one of two possible

lateralized targets. Posner et al. (1988) did not report

significant differences between left- and right-sided

targets on the uncued trials or on the valid trials, during

dual-task performance. However, a significant difference

between left- and right-sided targets was reported for

invalid trials during dual-task performance. Because the

cues utilized in the Posner et al. studies are sensory

(i.e., the temporary brightening of a likely location for

subsequent target presentation), the COVAT is a useful

paradigm for studying the effects of language processing on

visual selective attention. The methodology proposed for

the present study is most similar to the methodology

utilized by Posner et al. (1988), as the proposed study

paired a reading task and a semantic association task with

the COVAT while Posner et al. paired a text repetition

(shadowing) task with the COVAT.

The current project utilized a methodology similar to, but improved upon, the one used by Posner and his colleagues (Posner

et al., 1987; Posner et al., 1988). While investigating

whether a specific or a general selective attention

mechanism exists within the left hemisphere for processing

language, the present study controlled the presentation of

both tasks during the dual-task conditions. The language

tasks were presented prior to the COVAT on each trial, so

that the effects of language on the COVAT would be









reflected in the dual-task COVAT data. In addition, the

effect of vigilance was reduced by decreasing the number of

trials in each block. The present study also examined

whether subjects responded differently to left- and right-

sided COVAT targets, while they concurrently performed a

language task.



Experiment and Hypotheses

A dual-task paradigm was employed in order to

investigate whether a specific or a general selective

attention mechanism existed within the left hemisphere for

processing language. With regard to theories about the

operation of attention, the proposed study examined whether a single resource or multiple resources of attention existed and whether or not the resource(s) were shared among multiple attentional resources and/or cognitive functions. This main

question was assessed within the context of a proposed model

for selective processing of language and visuospatial

information based on assumptions about the anatomical system

needed to engage these cortical regions of the brain.



Experiment

In the current study, language was examined at the

lexical level using single written words, with minimal or maximal emphasis on semantic processing. Although all the

language tasks required lexical access (i.e., access to

words), one task emphasized semantics (i.e., producing









semantic associates) while the other task did not (i.e.,

reading familiar words). The following two language tasks

were administered: reading a familiar word, and producing a word of similar meaning or a highly associated word. Selective

attention was examined at two levels within the present

study. At one level, the COVAT assessed selective attention

to visuospatial information, appearing to the left and right

of a central fixation point. At another level, selective

attention was examined during the dual-task condition; that

is, the effect of semantic and non-semantic lexical

processing on visual selective attention. Since the purpose

was to examine the effects of lexical or lexical-semantic

processing on visual selective attention, the language tasks

were emphasized as primary and were presented before the

visuospatial attention task (COVAT).

The study utilized neurologically normal subjects.

Prior to the dual-task condition, all subjects performed the

two language tasks and the COVAT alone. All subjects then

performed each of the two language tasks paired with the

COVAT. During the dual-task condition, single words

replaced the central fixation point, and subjects then performed the COVAT (see Figure 1-3). Accuracy of response

was recorded for the language tasks, while reaction time was

recorded for the COVAT. Subjects responded with their non-

dominant left hand while performing the COVAT alone and

paired with a language task, in order to facilitate

comparison to other COVAT studies utilizing neurologically









normal adults and left-hemisphere brain damaged patients

(Petry et al., 1994).
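As a rough outline of how one dual-task trial might be represented, the sketch below lists the trial events and the two measures recorded (language-task accuracy and COVAT reaction time). The structure, names, and example values are assumptions introduced only for illustration; they do not reproduce the experiment's actual presentation program.

    # Hypothetical outline of one dual-task trial and the measures recorded;
    # names, structure, and values are assumptions, not the study's software.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DualTaskTrial:
        word: str                  # single word shown at the fixation location
        language_task: str         # "reading" or "semantic_association"
        cue_side: Optional[str]    # "left", "right", or None on no cue trials
        target_side: str           # "left" or "right"
        word_to_cue_ms: int        # 100 or 250
        cue_to_target_ms: int      # 100 or 800
        # recorded measures
        language_response_correct: Optional[bool] = None  # accuracy, language task
        covat_rt_ms: Optional[float] = None               # left-hand key-press RT

    # Example: a word for the semantic association task, followed by a
    # right-sided cue and a right-sided target (a valid trial).
    trial = DualTaskTrial(word="doctor", language_task="semantic_association",
                          cue_side="right", target_side="right",
                          word_to_cue_ms=100, cue_to_target_ms=800)
    trial.language_response_correct = True   # e.g., the subject said "nurse"
    trial.covat_rt_ms = 365.0
    print(trial)
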

To ensure concurrent performance during the dual-task

conditions, the stimulus onset asynchronies (SOAs) between

presentation of the language task (i.e., single words to be

read aloud or single words for subsequent generation of a

semantic associate) and the COVAT (i.e., brightened box

attracting covert selective attention to the cued area and a

star signaling a key-press response) were determined based

on a review of the event-related potential literature.

Event-Related Potentials

Event-related potentials (ERPs) are changes in membrane

potential for a group of cells occurring before, during, or

after a physical or psychological event (Picton, 1988). To

summarize the results from a survey of relevant event-

related potential studies, exposure to a high contrast

pattern visual stimulus results in processing from the

peristriate cortex at 100 ms post-stimulus onset (Galaburda

& Livingstone, 1993; Vaughan & Gross, 1969). At this point,

it appears that the first output from the stimulus leaves

the occipital cortex for further processing in other

cortical areas. If the pattern stimulus is psychologically

significant, then the potentials appear to reflect such

influences. In the context of a visual selective attention

task, Rugg, Milner, Lines, & Phalp (1987) reported larger

occipital potentials during an attend than during an

unattend condition. These potentials occurred 20 ms later









than the ones recorded to pattern stimulation lacking

psychological significance (Galaburda & Livingstone, 1993).

Potentials recorded from the frontal cortex during a

visual selective attention task (Rugg et al., 1987) and from

the frontal and parietal cortices during a category priming

task (Boddy, 1981) yielded a similar significant finding.

During both tasks, an enlarged negative wave peaked around

130 and 145 ms. Rugg et al. interpreted this potential as

reflecting further cortical processing of the visual

stimuli. Boddy interpreted this similar potential as

reflecting selective attention to words associated with a

particular category prime. Thus, Boddy proposed an effect

of selective attention as well as early access to semantics

(i.e., further cortical processing).

From additional recordings of frontal and parietal

cortices, Rugg et al. (1987) reported enhanced negativity of

a negative wave peaking at 250 ms and of a positive wave at

350 ms during an attend rather than during an unattend

condition. These potentials were also interpreted as

reflecting further information processing (Hillyard & Munte,

1984; Picton, 1988). From additional recordings of the

frontal cortex in Boddy (1981), an enlarged positive wave

peaking at 216 ms was registered. This wave was interpreted

as reflecting effects of selective attention and early

access to the meanings of words.

Neville, Kutas, & Schmidt (1982) reported an enhanced

negative wave peaking at 409 ms from left frontal and









temporal cortices during unilateral and bilateral

presentation of vertically-oriented words. This potential

was proposed to reflect processes related to reading.

During the reading condition in Stuss, Sarazin, Leech, &

Picton (1983), two negative waves were recorded: one

peaking at 262 ms and the other at 421 ms.

Regarding presentation of visual stimuli in the present

experiment in order to ensure concurrent task performance,

appropriate intervals between the language task and the

COVAT would be 100 ms and 250 ms; thus, following

computerized presentation of a word, the COVAT cue would

randomly begin after a 100 or 250 ms delay, followed by the COVAT target after the typically used 100 or 800 ms delay (cued trials). Thus, on cued trials, the COVAT target

randomly appeared after a 200 ms, 350 ms, 900 ms, or 1050 ms

delay from the "start" of the language task in the dual-task

conditions (see Figure 1-4). On no cue trials, the COVAT

target randomly appeared after a 200 ms, 350 ms, 900 ms, or

a 1050 ms delay after word onset.
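As a check on the arithmetic underlying these intervals, the brief sketch below enumerates the word-to-target delays produced by crossing the two word-to-cue delays with the two cue-to-target delays. It is an illustrative reconstruction of the timing just described, not the stimulus-presentation software used in the study.

    # Illustrative reconstruction of the dual-task trial timing described above;
    # not the experiment's actual presentation software.

    WORD_TO_CUE_DELAYS_MS = (100, 250)    # delay from word onset to COVAT cue
    CUE_TO_TARGET_DELAYS_MS = (100, 800)  # delay from COVAT cue to COVAT target

    word_to_target_delays = sorted(
        cue_delay + target_delay
        for cue_delay in WORD_TO_CUE_DELAYS_MS
        for target_delay in CUE_TO_TARGET_DELAYS_MS
    )

    print(word_to_target_delays)  # [200, 350, 900, 1050] ms, as stated in the text
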

With a 100 ms delay between the presentation of a word

from the language tasks and the COVAT, the word would nearly

have completed processing within the occipital cortex when

the COVAT cue began. Thus, this 100 ms interval would

assess the effects of early single word reading or semantic

association on visual selective attention. With the 250 ms

delay, words would be in the midst of processing by cortical

areas other than the occipital cortex, when the COVAT cue









began. This COVAT cue would complete processing within the

occipital cortex at 370 ms. With the 900 ms or 1050 ms

intervals, primary cortical processing of the word would

have ceased, when the COVAT began.

Anatomy Proposed to be Primarily Involved in Selective
Attention and Subsequent Processing of Language and
Visuospatial Information

Based on a review of studies involving lesions of the

thalamus, Nadeau and Crosson (in press) described a system

within each hemisphere involving the frontal cortex, nucleus

reticularis, inferior thalamic peduncle (which connects the

frontal cortex with the nucleus reticularis), and

centromedian that selectively engages the specific cortical

areas necessary for information processing while maintaining

unneeded cortical areas in a state of relative

disengagement. With regard to language, the fronto-nucleus

reticularis-centromedian system in the language dominant

hemisphere (i.e., the left hemisphere for the majority of

right-handed individuals) differentially activates needed

areas for language processing while keeping other cortical

regions (i.e., those not necessary for language processing)

relatively disengaged. This left intrahemisphere system is

also proposed to activate other structures and pathways of

the brain that are needed for attention to other cognitive

activities, for example, detection of right-sided

visuospatial information.









The intrahemisphere system, proposed by Nadeau and

Crosson (in press), for selective engagement of attention

and corresponding anatomical structures and pathways for

cognitive processing can be considered an example of a

multiple resources theory of attentional operation, even

though it was not originally conceptualized in this manner.

As implied by their theory, separate systems of attentional

engagement are proposed for each hemisphere. Although

resources may be shared within a hemisphere, it will be

assumed that they cannot be shared between hemispheres;

thus, their proposal satisfies the conditions for a multiple

unshared (between hemispheres, not between tasks within a

single hemisphere) resources model of attention.

Alternatively, if these anatomical systems are not separate

but shared between tasks within a hemisphere, then dual-task

performance should be impaired relative to single-task

performance because single or multiple anatomical regions

would be overloaded within the left hemisphere.

With regard to the proposed experiment, the anatomical

systems for selective processing of language and right-sided

visuospatial information are purported to share common

anatomical structures and pathways, resulting in

demonstrable impairments during concurrent performance of

these tasks relative to single-task performance. In

addition to the shared use of the selective engagement

mechanism in the left hemisphere, the language tasks and the

right-sided cues and targets of the COVAT will share use of









the primary sensory and motor areas as well as unimodal and

multimodal cortical areas (i.e., occipital cortex, angular

gyrus, premotor cortex, and motor cortex). During dual-task

performance, it was hypothesized that complex processing and

responding required at the level of the left frontal cortex

overloads the selective engagement attentional system,

leading to impairment of language and right-sided COVAT

performance. In addition to a decrement in language

performance during dual-task conditions, impairment of responses to only right-sided COVAT cues and targets would support a general

system for attention within the left hemisphere where

attentional resources are shared among cognitive tasks

within this hemisphere but not between hemispheres. If the

anatomical systems for selective processing of written

language and visuospatial information are separate and not

shared within the left hemisphere, then dual-task

performance should be comparable to single-task performance

in the present study. This latter finding would also be

predicted by a multiple resources theory of attention where

all resources share attention among cognitive tasks.

If performance for left- and right-sided COVAT cues and targets is equally impaired during dual-task performance in

addition to a decrement in language performance, then a

single resource rather than a multiple resources model of

attention would be bolstered. According to the single

resource theory of attention, dual-task performance is

always worse than single-task performance. It is









hypothesized that the performance deficits would interfere with processing in both hemispheres, regardless of whether structures and pathways were engaged primarily in one hemisphere or the other. If the attentional

resource was not shared between cognitive tasks, then marked

impairments would occur for all dual-task combinations

regardless of the attentional demands inherent in each task

because of interference from rapid switching of attention

between tasks. If the resource was shared between cognitive

tasks, then dual-task impairments would be graded based on each task's demands for attention. As an example of

possible performance on the COVAT paired with the proposed

language tasks, more marked deficits would be expected in

response to left- and right-sided COVAT stimuli as well as

to language stimuli when the COVAT was paired with the

semantic association task in comparison to the reading task

if the single resource was shared. With a more attention-demanding task, such as semantic association, paired with the COVAT, greater deficits would be expected because more interference is hypothesized to arise in achieving successful performance of both tasks. Less attentional interference is hypothesized for the reading task when it is performed concurrently with the COVAT; therefore, less impairment of

responses to left- and right-sided COVAT stimuli and to

language stimuli would be expected, in comparison to dual-

task performance of the COVAT with the semantic association

task.









Hypotheses

Based on the proposed relationship between attention

and language as well as findings from the dual-task

literature (particularly Posner et al., 1988), a general

left intrahemispheric mechanism for engagement of attention

and subsequent cognitive processing is proposed. This

mechanism can be conceptualized as a multiple unshared

resources theory of attention where resources are shared

among cognitive tasks executed within a particular

hemisphere but not between hemispheres. Given the above

conceptual framework for selective attention, the following

hypotheses were proposed:

1. Dual-task performance would show effects of

interference; that is, slower reaction times to COVAT

targets and increased language errors (mispronunciation on

reading task, incorrect answer or no response on semantic

association task).

2. More specifically, slower reaction times would be

anticipated for invalid COVAT trials with right-sided

targets during dual-task performance. In addition, subjects

would respond significantly slower to invalid trials as

compared to valid trials with right-sided targets,

demonstrating the validity effect.

3. Faster reaction times would be expected for invalid

COVAT trials with left-sided targets during dual-task

performance. Additionally, a validity effect would not be

anticipated for left-sided targets during dual-task









performance; thus, no significant difference in reaction times was expected between valid and invalid trials with left-sided targets.

4. Because the semantic association task was

hypothesized to require more processing than the reading

task, more interference would be expected when the COVAT was

paired with this task than with the reading task; thus, the

pattern of responding identified in (2) and (3) would be

more pronounced. Within each dual-task condition, subjects

would respond slower to low frequency words than to high

frequency words. A significant difference in reaction time

would be anticipated between low frequency words of the

semantic association-COVAT condition and high frequency

words of the reading-COVAT condition.

5. Although more language errors were expected during

the dual-task conditions in comparison to the single-task

conditions, this result was expected to be only a trend and not statistically significant (Herdman, 1992; Hiscock et al., 1989). More errors

would be anticipated for the semantic association task than

for the reading task. In addition, subjects would be

expected to make more errors with low frequency rather than

high frequency words.
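Purely as an illustration of how the error predictions in hypotheses 4 and 5 map onto the accuracy data, the sketch below tabulates hypothetical language-task responses by task and word frequency. The field names and records are invented examples, not study data or the dissertation's analysis code.

    # Hypothetical tabulation of language-task errors by task and word frequency;
    # the records are invented examples, not study data.

    from collections import defaultdict

    responses = [
        # task: "reading" or "semantic"; freq: "high" or "low"; correct: bool
        {"task": "reading",  "freq": "high", "correct": True},
        {"task": "reading",  "freq": "low",  "correct": True},
        {"task": "semantic", "freq": "high", "correct": True},
        {"task": "semantic", "freq": "low",  "correct": False},
    ]

    totals = defaultdict(lambda: {"errors": 0, "n": 0})
    for r in responses:
        cell = totals[(r["task"], r["freq"])]
        cell["n"] += 1
        cell["errors"] += 0 if r["correct"] else 1

    for (task, freq), cell in sorted(totals.items()):
        rate = cell["errors"] / cell["n"]
        print(f"{task:8s} {freq:4s} error rate = {rate:.2f}")

    # Hypotheses 4 and 5 predict higher error rates for the semantic association
    # task than for reading, and for low- than for high-frequency words.
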









Figure 1-1. Model for the recognition, production, and
comprehension of written words, based on
Ellis and Young (1988).







[Diagram: boxes and pathways linking Heard Word and Written Word through the Auditory and Visual Analysis Systems, the Auditory and Visual Input Lexicons, the Semantic System, Grapheme-Phoneme Conversion, the Speech Output Lexicon, and the Phoneme Level to Speech.]








Figure 1-2. Schematic of the covert orienting of visual
attention task.







[Diagram: example displays showing the fixation, cue, and target events for valid, invalid, and no cue trials.]



