CREATING A NOVEL MEASURE OF FUNCTIONAL PROBLEM SOLVING IN INDIVIDUALS WITH TRAUMATIC BRAIN INJURY (TBI) USING A RASCH ANALYSIS PROCEDURE

By

JULIA KAY WAID-EBBS

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2008
2008 Julia Kay Waid-Ebbs
To John, Ceara, and Savanna, for all your patience and support, and especially to my mother, who showed me the importance of continuing to learn. Remember: "Wisdom is supreme; therefore get wisdom. Though it cost all you have, get understanding" (Proverbs 4:7).
ACKNOWLEDGMENTS

I would like to thank my advisors and committee members. First I would like to thank Dr. Shaw for believing in me and encouraging me to just get started. I thank Dr. Velozo for tearing apart draft after draft, until I got it right. I thank Dr. Kendall for making me feel good about getting torn apart. Finally, I thank Dr. Rosenbek for cutting to the heart of every issue and asking the one-hundred-thousand-dollar question. I am honored to have had the opportunity to work with such high-caliber people.
TABLE OF CONTENTS

page

ACKNOWLEDGMENTS .......... 4
LIST OF TABLES .......... 7
LIST OF FIGURES .......... 8
ABSTRACT .......... 9

CHAPTER

1 CURRENT ASSESSMENTS OF PROBLEM SOLVING IN INDIVIDUALS DIAGNOSED WITH TBI .......... 11
    Introduction .......... 11
        The Devastation of TBI .......... 11
        The Concept of Problem Solving .......... 15
        Problem Solving Model .......... 16
        Measurement of Problem Solving .......... 18
        Review of Current Problem Solving Measures .......... 22
    Conclusion .......... 23

2 PSYCHOMETRICS OF THE DEVELOPED PROBLEM SOLVING ITEMS .......... 32
    The Devastation of Traumatic Brain Injury (TBI) .......... 32
    Assessment of Functional Problem Solving .......... 33
    Methods .......... 35
        Research Participants .......... 35
        Item Development .......... 36
        Data Analysis .......... 36
    Results .......... 39
        Model Fit .......... 40
            Confirmatory factor analysis .......... 40
            Principal component analysis .......... 40
        Item Level Psychometrics .......... 41
            Rating scale analysis .......... 41
            Fit statistics .......... 41
            Item difficulty hierarchy .......... 42
            Person separation .......... 42
            Match of item difficulty to person ability .......... 43
    Discussion .......... 43
        Aim and Summary of Results .......... 43
        Model Fit .......... 43
        Item Analysis .......... 45
        Fit Statistics .......... 46
        Overall Measurement Qualities of Instrument .......... 47
        Limitations .......... 47
    Conclusions .......... 48
    Future Directions .......... 48

3 CONCURRENT VALIDITY OF THE 19 DEVELOPED ITEMS .......... 59
    Introduction .......... 59
    Methods .......... 61
        Research Participants .......... 61
        Procedures .......... 61
        Measures .......... 62
        Data Analysis .......... 66
    Results .......... 66
    Discussion .......... 68
        Limitations .......... 71
    Future Directions .......... 72

4 CHALLENGES OF CREATING A FUNCTIONAL PROBLEM-SOLVING MEASURE AND FUTURE PLANS .......... 76
    Current Problem Solving Measures .......... 76
    Overview of Study .......... 77
    Answers to Research Questions .......... 79
    Future Research Needs .......... 80

APPENDIX

A COMPLETE MEASUREMENT INSTRUMENT IN WHICH 19 PROBLEM SOLVING ITEMS WERE INCLUDED .......... 82

B CONCURRENT VALIDITY EXTERNAL MEASURES .......... 98

REFERENCES .......... 104

BIOGRAPHICAL SKETCH .......... 112
LIST OF TABLES

Table                                                                    page

1-1 Measurement categories .......... 26
1-2 Review of current problem solving measures .......... 27
1-3 Categories of current problem solving measures .......... 28
2-1 Instrument .......... 50
2-2 Subject demographics .......... 52
2-3 Confirmatory factor analysis results .......... 53
2-4 Patient/caregiver variances .......... 53
2-5 Patient-rated items: rotated factor pattern .......... 53
2-6 Caregiver-rated items: rotated factor pattern .......... 54
2-7 Summary of category structure .......... 54
2-8 Patient-rated item infit statistics .......... 55
2-9 Caregiver-rated item infit statistics .......... 55
3-1 Descriptive statistics of measure .......... 73
3-2 Correlations between the 19 developed items and traditional measures of problem solving .......... 74
3-3 Correlations between traditional measures of problem solving .......... 75
LIST OF FIGURES

Figure                                                                   page

1-1 Problem solving and the cognitive domains commonly affected by TBI .......... 25
1-2 D'Zurilla and Nezu's (1982, 1990, 1999) problem-solving model .......... 25
3-1 Hypothesized hierarchy .......... 56
3-2 Patient item map .......... 57
3-3 Caregiver item map .......... 58
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

CREATING A NOVEL MEASURE OF FUNCTIONAL PROBLEM SOLVING IN INDIVIDUALS WITH TRAUMATIC BRAIN INJURY (TBI) USING A RASCH ANALYSIS PROCEDURE

By

Julia Kay Waid-Ebbs

December 2008

Chair: Linda Shaw
Cochair: Craig Velozo
Major: Rehabilitation Science

Problem-solving abilities allow individuals with TBI to lead productive and independent lives. However, the measurement of everyday problem solving in individuals diagnosed with severe TBI has significant limitations: existing instruments fail to capture everyday problem solving, measure attitudes toward problem solving instead of problem-solving abilities, or rely solely on the individual diagnosed with TBI as the rater. Therefore, the aim of this study is to create a functional measure of problem solving for individuals diagnosed with severe TBI that captures the everyday problem-solving difficulties they face. The study will include the following steps to accomplish this goal: 1) items will be developed that are objective, easy to interpret, and reflective of the everyday problem-solving challenges that individuals diagnosed with severe TBI face; 2) the items will be reviewed by individuals diagnosed with TBI, their caregivers, and professionals who work with patients diagnosed with TBI; 3) the items will be administered to individuals diagnosed with severe TBI and to their caregivers to rate the individual's behavior; 4) the unidimensionality of the measure will be determined, followed by a Rasch analysis to determine the psychometrics of the instrument; and finally 5) the instrument will be compared to external problem-solving measures to determine whether the developed measure is valid.

This research will provide both researchers and clinicians with a valuable measurement tool to assess the functional problem-solving skills of individuals with TBI. Current measurement tools are often difficult to administer and lack ecological validity or sensitivity. The development and use of this instrument may provide researchers with a more sensitive measurement tool and enable clinicians to better identify the challenges that individuals with TBI face every day.
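The abstract does not specify which Rasch formulation the analysis will use. Since the instrument asks raters to score behavior on a rating scale, a common choice for such data is Andrich's rating-scale model; it is sketched here for orientation only, with standard notation that does not come from this dissertation:

```latex
% Andrich rating-scale model (a common Rasch formulation for polytomous
% rating-scale items; shown as background, not as this study's stated model).
% For person n responding to item i, the log-odds of choosing rating
% category k over category k-1 are modeled as:
%   \theta_n : person ability (here, problem-solving ability)
%   \delta_i : item difficulty
%   \tau_k   : threshold for category k, shared across all items
\ln\!\left( \frac{P_{nik}}{P_{ni(k-1)}} \right) = \theta_n - \delta_i - \tau_k
```

Under this model, item difficulties and person abilities are placed on a single logit scale, which is what permits the item-difficulty hierarchies, person separation statistics, and item-person maps listed in the table of contents.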
CHAPTER 1
CURRENT ASSESSMENTS OF PROBLEM SOLVING IN INDIVIDUALS DIAGNOSED WITH TBI

Introduction

The Devastation of TBI

A severe traumatic brain injury (TBI) has devastating effects on both the individual and his/her family. In the United States alone, more individuals are diagnosed with TBI than with spinal cord injuries, HIV, multiple sclerosis, and breast cancer combined. TBI is a rising social concern, with direct and indirect costs estimated at 60 billion dollars per year in 2000 (Finkelstein, 2006).

TBI is defined as a blow, jolt, or penetration to the head that disrupts brain function. A commonly accepted method of classifying the severity of TBI is the Glasgow Coma Scale (GCS), with severity levels of mild, moderate, and severe (Hannay, 2004). A severe TBI is classified as a Glasgow Coma Scale score of 8 or less upon admission to the hospital, with a period of unconsciousness or even coma (Rimel, Giordani, Barth, & Jane, 1982). An estimated 10% of TBIs are severe, and these result in the most persistent deficits. Outcomes of individuals with TBI are affected by a myriad of variables, including secondary insults that cause further brain tissue damage, such as fevers, anoxia, seizures, and a neurochemical cascade that can continue for several months. Other variables that affect outcome include severity of injury, mechanics of injury, age, repeat injury, polytrauma (injuries involving multiple systems), socioeconomic status, premorbid psychiatric diagnosis, and premorbid alcoholism (Lezak, 2004).

Characteristics of the individual may influence the effect of a TBI on that individual. For example, individuals who are 65 years of age and older have poorer outcomes, individuals 15-21 years of age have more behavioral and emotional problems, and individuals 22-44 years old have slower rates of functional change (Thomsen, 1990; Cifu et al., 1996). Additionally, individuals who sustain multiple TBIs (a common occurrence) exhibit cumulative effects, and individuals who sustain a polytrauma injury tend to have more deficits. A history of alcoholism generally results in worse outcomes than seen in individuals without such a history (S. Dikmen, Machamer, & Temkin, 1993). Research has revealed that many variables affect the outcome of TBI and has highlighted the extremely heterogeneous nature of this population (CDC consensus report).

The typical individual with a TBI is a male between the ages of 15 and 24 years, from a lower socioeconomic status, unemployed with a low education level, who sustained his injury from a fall and/or assault (Jager, Weiss, Coben, & Pepe, 2000; Naugle, 1990). Females outnumber males in the over-65 age category, while males outnumber females two to one in the 15-44 age category. The mechanisms of injury are falls (25%), car accidents (20%), being struck by or against objects (19%), and assaults, which account for 11% of injuries (Langlois et al., 2003). Blasts are the leading cause of head injury in military personnel (Warden & French, 2005).

The deficits that result from a severe TBI include physical and cognitive difficulties that may be evident during acute hospitalization but may also remain hidden for weeks after injury. Patients with severe TBI are more likely than those with moderate or mild TBI to require rehabilitation after acute hospitalization (Lezak, 2004). Although each individual will vary in the abilities affected, a severe TBI commonly affects abilities such as attention, memory, problem solving, planning, self-awareness, processing speed, and emotional control. The length of recovery varies for each patient. However, patients with severe injury rarely return to their previous level of functioning and are likely to need rehabilitation prior to returning to their homes (Lezak, 2004).
Because the heterogeneity of this population can endanger the internal validity of studies on TBI, many variables that may affect outcomes must be considered during research design and analysis.

Severe TBI results in deficits in every aspect of cognition (Hannay, 2004). Domains of cognition commonly affected by TBI are attention, processing speed, memory, social communication, emotional control, and executive functioning (Lezak, 2004; Millis et al., 2001) (see Figure 1-1). Individuals may experience deficits in aspects of each domain or may have a deficit synthesizing skills from one or more cognitive domains. Common deficits in attention may be evident when the individual is unable to resist distractions while working on a task, to switch tasks, or to attend to more than one thing at a time (Whyte, Hart, Bode, & Malec, 2003). Likewise, processing speed deficits may result in taking longer to complete tasks or not being able to keep up with conversations. An individual with memory deficits may be unable to keep a goal in mind (working memory) or to remember details from an event that occurred earlier in the day (short-term memory) (Lezak, 2004). Ultimately, the disruption of relatively basic cognitive functions, mentioned previously, may cause or exacerbate problems in more complex cognitive functions such as executive function or social communication (Arciniegas, Held, & Wagner, 2002).

Deficits in the more complex cognitive domains, such as social communication, may result in socially inappropriate behavior, such as standing too close when talking, speaking too loudly, or using an inappropriate tone. The individual may not be able to maintain a topic in a conversation or to understand non-verbal gestures (Sohlberg & Mateer, 2001). Similarly, emotional control deficits become obvious when verbal or physical outbursts occur for little apparent reason, or when an individual is easily frustrated while attempting to complete challenging tasks. Finally, executive functioning deficits (executive dysfunction) most frequently result in
complaints of being disorganized (Sohlberg & Mateer, 1987). Executive functioning integrates cognitive, self-regulatory, and emotional behavior (D. T. Stuss & Alexander, 2007). The self-regulation of behavior is necessary in well-learned activities that have become automatic, but it is also necessary in new activities that require inhibition of an over-learned plan so that a new problem-solving strategy can be developed. A new plan requires the ability to maintain intentions as well as to execute those intentions (Levine, Dawson, Boutet, Schwartz, & Stuss, 2000). After frontal injury, intention and action can become dissociated; individuals with frontal lobe damage may be able to tell you the appropriate action to take, but do not follow that action (Luria, 1966).

The course of recovery of cognition after a severe TBI starts with acute confusion that may last for days. The acute confusion may be accompanied by agitation and motor restlessness, uncooperativeness, and even assaultive behavior (Fugate et al., 1997). Many aspects of cognition often improve within the next weeks to months, sometimes remarkably. Dikmen (S. S. Dikmen, Ross, Machamer, & Temkin, 1995) states, however, that in general, the more severe the injury, the more pervasive the deficits. Impairments in memory and learning are the most frequent cognitive deficits exhibited after a severe TBI, followed by impairments in attention, processing, and executive functioning. Recovery from cognitive deficits is more spontaneous during the first three to six months, while after one year, recovery is more gradual and is a function of new learning and the use of compensatory strategies (Hannay, 2004). Some individuals show improvement for many years after injury (Sbordone, Liter, & Pettler-Jennings, 1995). Millis (2001) reported that 10% of their patients improved in problem solving, while another 10% declined in problem solving over the first year (Millis et al., 2001). Indeed, patients with executive dysfunction show fluctuation in testing during the first year, perhaps due to a
lack of internal stability and self-regulation (Kay, 1986). Therefore, one year post injury appears to be the point at which most individuals with severe TBI begin to perform with some stability.

The Concept of Problem Solving

Of the cognitive domains discussed above, problem solving falls within the domain of executive functioning (D. T. Stuss & Benson, 1986). Problem solving can be defined as the process of attaining a goal when the appropriate response is not immediately apparent or available (Luria, 1966). Problem solving is generally considered the most complex of all intellectual functions, and it involves the modulation and control of routine or fundamental skills (Goldstein & Levin, 1987). Consequently, problem solving is considered a higher cognitive process that requires the synthesis of abilities that may lie within several cognitive domains. Deficits in lower cognitive processes can disrupt the higher cognitive process of problem solving. However, problem solving can also remain intact even when specific receptive, expressive, and memory dysfunctions are present. Individuals with excellent executive functioning skills will typically be good problem solvers; however, individuals with one aspect of executive functioning impaired may or may not continue to be good problem solvers (Manes et al., 2002). Furthermore, individuals with deficits in real-life problem solving may demonstrate normal intellect, memory, and normal problem solving in a laboratory setting (Lezak, 2004). Problem solving is difficult to operationalize because it involves not only discrete skills, but also the cognitive structures and processes that control the use of those skills (Cicerone et al., 2000).

Problem solving tasks can range from simple to complex and abstract. Lezak (2004) illustrates this range with an example of simple problem solving, such as knowing what to do if there is no soap in the soap dish, and an example of complex problem solving, in which Einstein accounted for light distortions in the solar system. A modified version of the Functional Independence Measure + the Functional Assessment Measure
(FIM+FAM), called the United Kingdom FIM/FAM, includes examples of simple problem solving, such as being given a tray with no cutlery or getting something out of reach, and of complex problem solving, such as planning a three-course meal or being given the wrong change (Turner-Stokes, Nyein, Turner-Stokes, & Gatehouse, 1999). For some people the distinction between simple and complex problem solving is viewed as no more than the difference between easy and difficult. Whether simple, complex, easy, or difficult, the steps taken to implement a solution have been mapped out in various models of problem solving. Within the TBI literature, the most commonly cited model has been D'Zurilla and Nezu's model of problem solving (D'Zurilla & Nezu, 1982, 1999; Nezu & D'Zurilla, 1990; Foxx, Martella, & Marchand-Martella, 1989; Levine, Robertson et al., 2000; Rath et al., 2004; Tisdelle & St Lawrence, 1986; von Cramon, 1991).

Problem Solving Model

D'Zurilla and Nezu's (1982, 1990, 1999) model breaks down problem solving into two parts: problem orientation and problem solving proper (see Figure 1-2 below). The steps of problem orientation are: 1) problem perception; 2) belief that one can change a problem; 3) ability to evaluate the significance of the problem; 4) belief that the problem is solvable; and 5) belief that solving the problem is worth the time and effort involved. After problem orientation the individual moves to problem solving proper, which involves: 1) problem definition and formulation, in which as much relevant and factual information as possible is gathered, the nature of the problem is clarified, realistic goals are set, and the significance of the problem is reappraised; 2) generation of alternative solutions (the more the better); 3) decision making (deciding which solution will be worth the effort); and finally 4) solution implementation and evaluation. The steps of the process are not necessarily linear. Individuals may jump back and forth between steps prior to solution implementation.
The Problem Orientation phase deals with an emotional evaluation of 1) the problem; 2) the solution; and 3) the individual's ability, while the Problem Solving Proper phase is the actual process of solving the problem. The behaviors that occur during the Problem Orientation phase are not directly observable; only the individual can report on them. During the Problem Solving Proper phase, however, the Solution Implementation step is directly observable, as is the Alternative Solutions step (if the first solution did not work). The Definition/Formulation and Decision Making steps would not be directly observable unless the individual stated the steps of his or her thought process out loud. This technique of having the individual state the thought process aloud at each step is precisely how several researchers have managed to indirectly observe the cognitive process involved in problem solving (Foxx, 1989; von Cramon, 1991; Levine, Robertson et al., 2000).

D'Zurilla and Nezu's (1982, 1990, 1999) model offers a cognitive decomposition of the problem-solving process into constituent components, effectively mapping the elemental steps within a complex cognitive process. The model also makes reference to the emotional aspects of problem solving in the Decision Making step. Bechara and Damasio (2003) have devoted much study to this emotional aspect of problem solving, finding that individuals with frontal lobe lesions have difficulty overriding an automatic impulsive response with a response that first calculates the probabilities of success of potential solutions (Bechara, Damasio, & Damasio, 2003). For example, an impulsive response may be to buy a desirable item even though there is only enough money to pay that month's bills.

Unfortunately, there has been little empirical data to support the use of D'Zurilla and Nezu's (1982, 1990, 1999) model in solving complex or simple problems (Bellack, Morrison, & Mueser, 1989; Cicerone et al., 2000; Foxx, 2000). In fact, D'Zurilla has since revised his model in response to
exploratory and factor analyses he conducted (Maydeu-Olivares, 1995, 1996). The analyses demonstrated moderate support for the two parts of the original model (Problem Orientation and Problem Solving Proper). However, the better-fitting model was a five-factor model consisting of two problem orientation dimensions and three problem-solving styles. The two factors listed under the general heading of Problem Orientation are positive and negative, while the three factors listed under the heading of Problem Solving Style are rational problem solving, impulsivity/carelessness style, and avoidance style. Only the rational problem solving factor addresses problem-solving skills in the new model, and these are the same skills listed under Problem Solving Proper in the model in Figure 1-2. Therefore, since observable problem-solving ability is better reflected in problem-solving skills, we will focus on the model in Figure 1-2.

Measurement of Problem Solving

Individuals who have sustained a severe TBI often have difficulties with problem solving once they return to their community (Bechara, Damasio, Damasio, & Anderson, 1994). Without the ability to solve problems, family members or friends become burdened with the responsibility of supervising these individuals to keep them safe in the home and community (Lezak, 2004). Individuals diagnosed with TBI and their caregivers cite difficulties with problem solving as one of the top unmet needs one year after injury (Corrigan, Whiteneck, & Mellick, 2004). To address this unmet need, clinicians and researchers need to be able to define and measure the construct called problem solving. Currently there is no gold-standard measure of problem solving. Measures that are currently available either do not measure real-world functioning or are crude measures that lack sensitivity (Burgess, Alderman, Evans, Emslie, & Wilson, 1998; Hall, Gordon, & Zasler, 1993). Researchers lack access to a problem solving measure that is
19 sensitive to the range of problem solving abilities that individuals with TBI exhibit in their day to day lives. Several challenges are involved in measuring problem solving and limit our understanding of the problem solving process in i ndividuals with TBI. Fo r instance, individuals may test normally on measures that are applied in a laboratory setting (or clinical setting), yet demonstrate poor problem solving skills in the real world. Conversely, some individuals may perform poorly on measures in a laboratory sett ing yet demonstrate adequate problem solving skills in the real world. Seve ral reasons for this phenomenon are suspected (Lezak, 2004; Sani, Bennett, & Soutar, 2005). First, the laboratory provides clear cues when there is a problem to be solved, while in the real world the cues may be unclear or varied from situation to situation. Second, real world problem solving is complex and may require several steps over an extended period of time, while the tasks presented in the laboratory may be only a single component within a finite period of time. Third, real worl d problem solving often has consequences that are more emotionally laden than problem solving in a contrived setting, such as the laboratory. Fourth, problem solving in the real world may have been experienced before or the individual is able to use compensatory strategies, while in the laboratory the task may be novel and compensatory strategies are not allowed. For al l of these reasons and more, measuring problem solving ability has been extremely difficu lt for both the researcher and clinician. Current measurement instruments of problem so lving can be categorized into three basic types of measurement: neuropsychological, simu lation, and observation (Table 1-1). The neuropsychological measures have been in exis tence for many years, while the simulation and observation measures have been developed in mo re recent years. 
Therefore, the relative advantages and disadvantages of each of the categories are discussed below.
Neuropsychological measures are typically based on theoretical cognitive concepts and are conducted under strictly controlled conditions. Administration is conducted in a laboratory setting to provide strict experimental control, and a licensed psychologist or neuropsychologist typically administers the tests under strict guidelines. The psychometrics of neuropsychological measures have been rigorously studied, resulting in standardized norms based on normal subjects and other specific populations, such as TBI. Single components of cognition are measured to allow for experimental control. Measures are typically used to determine whether impairment is present and the degree to which the individual is able to perform on measures of individual cognitive abilities (Lezak, 2004). While these strict experimental controls in the neuropsychological measures are beneficial in determining the capacity of the individual in the laboratory setting, they may also hinder detecting the functional ability of the individual in the real setting (Burgess et al., 1998; Chaytor & Schmitter-Edgecombe, 2003). Therefore, simulation measures have been developed to better capture the functional ability of an individual (Wilson, 1996). While the traditional neuropsychological measure focuses on a single cognitive component, such as planning in the Wisconsin Card Sorting Test (WCST), a simulation measure utilizes an everyday activity that may require the coordination of several cognitive components. For example, the Behavioral Assessment of the Dysexecutive Syndrome (BADS) Zoo task requires the subject to plan a route to visit six out of twelve locations at a zoo. In the simulation activity the subject is allowed to use some compensatory strategies available to them in the real world. Like the neuropsychological measures, the simulation tasks are conducted in a controlled setting and frequently are standardized.
Typically they are administered by a professional, but not always a licensed psychologist. The scoring may be quite labor intensive, especially if two
raters are required. Additionally, the test-retest reliability may be poor after the subject learns the strategy of the measure. However, the advantage of the simulation measure is that it captures the effect of the cognitive process required in integrating more than one cognitive component. Unfortunately, the simulation does not reflect the diverse environment of the real world, and the capacity of the subject may be task specific and not generalize to the real world (Geusgens, Winkens, van Heugten, Jolles, & van den Heuvel, 2007). To address these shortcomings, observation measures have been developed that observe the individual in the real world (Votruba et al., 2008). Observation measures typically utilize a rater who observes the individual's behavior in the real-world setting. Raters may include professionals, the individuals themselves, or proxies such as family members, who observe the individual on a frequent basis. However, while these raters are able to observe the individual's real-world performance, there are also limitations in utilizing the perceptions of individuals with TBI and their family members. Individuals with TBI may lack self-awareness as part of their injury, while caregivers' ratings may be affected by caregiver burden (Rath et al., 2004; Zasler, 2003). Thus, the objectivity and accuracy of these raters is frequently questioned. Additional limitations to observation measures may include confounding variables such as the variability that may exist in the difficulty of tasks within the individual's natural setting. For example, while one individual may have twenty monthly bills to pay, another individual may have only one. Therefore, comparing the ability to manage money may be more difficult for one individual versus another. Additionally, proxy raters may not recognize that failure to complete a task was due to fatigue versus inability.
Therefore, given these limitations, an observational instrument that has strong psychometric properties would be invaluable.
In conclusion, each of the categories of measuring problem solving has advantages and disadvantages. For example, while the neuropsychological measures are more stringent, they may not capture the real-life ability of the individual like the observational measures. In contrast, the simulation measures may not capture real-life behaviors, but they do incorporate the coordination of cognitive components while offering greater control over confounding variables. Observational measures, meanwhile, may have questionable psychometrics. Therefore, when selecting an instrument, one should consider which aspect of a measurement instrument is most important.

Review of Current Problem Solving Measures

A review of the current measures of problem solving in adult individuals diagnosed with TBI was conducted by searching PubMed, as well as two review articles (Cicerone et al., 2005; Kennedy et al., 2008). The review articles were used to search for other articles that used problem solving measures within the past 10 years. Only measures classified to specifically measure problem solving are listed from each of the articles in Table 1-2. For example, measures used to characterize the sample, or measures that were classified by the author as executive functioning measures rather than problem solving measures, were not included. Thirteen studies resulted from the literature search, all published from 2000 forward. Four of the articles address intervention, three address measurement development, and the remaining articles range from comparison to proposal articles. The list of measures is quite diverse, with the most common measures being the Wisconsin Card Sorting Test (WCST), Tower tests, Modified Six Elements, and the Functional Independence Measure (FIM). Measures were compiled and listed under the categories: performance, simulation, and observation (Table 1-3).
The measures listed under performance were highly controlled and were classified in the literature as neuropsychological tests. Measures that were classified as
simulation measures in the literature were placed in the simulation category, although a few measures, such as the BADS rule shift cards and the Porteus Maze Test, have many similarities with neuropsychological measures such as the WCST. The total number of observation measures was five, while simulation measures numbered twelve (six are from the BADS test battery) and only two measures are categorized as performance measures. Limitations of each test are listed in the right-hand column.

Conclusion

Overall, our review of the literature found several relatively recent simulation and observation measures developed to capture real-world behaviors. Perhaps this productivity is in response to Burgess's (1998) criticism of the poor ecological validity of measures of executive functioning. While the majority of the new measures fall into the simulation category (BADS, TOFEA, PSRPT, Grouping, and Room Layout), no new neuropsychological measures were cited and only two new observational measures have been developed, the Problem Solving Questionnaire (PSQ) and the Frontal Systems Behavior Scale (FrSBe). Of the newly developed observational measures, one measures executive functioning (FrSBe) while the other measures problem solving specifically (PSQ). The addition of the simulation measures has increased the information about the difficulties individuals diagnosed with TBI experience with tasks that require coordination of cognitive processes. However, there continue to be limitations with this type of testing. Performance may be task specific and not generalize to other tasks (Lezak, 2004). Simulation tests can be work-intensive in administration and scoring, while their psychometrics have not yet been studied rigorously. Additionally, test-retest reliability may be poor due to the individual learning the strategy. Therefore, simulation tests have limited value in intervention studies where repeat testing is essential.
For these reasons, observational measures may offer a method of capturing real-world functioning that is related to the tasks the individual experiences. However, the measures cited in the literature have limitations. The Dysexecutive Questionnaire (DEX) and the FrSBe both measure executive functioning and do not have a subscale that is specific to problem solving. The Problem Solving Inventory (PSI) and the PSQ are questionnaires that were based on D'Zurilla's problem solving model; however, they both measure attitudes toward problem solving and not the problem solving proper dimension of the model. Additionally, the PSI and PSQ are self-rated and do not have a proxy rater. Therefore, we propose to develop a measure to address these limitations and provide researchers and clinicians with a measure that is easy to use and captures the everyday problem solving abilities of adults with TBI. The following research questions were posed:

Research question 1: Is everyday problem solving a unidimensional construct, or does it represent multiple constructs?
Research question 2: Does the empirically derived item-difficulty hierarchy structure of problem solving validate the hypothesized hierarchy?
Research question 3: How well do the developed items separate individuals based on their problem solving abilities?
Research question 4: Who is the better rater of problem solving abilities in individuals diagnosed with TBI: the caregiver or the patient?
Research question 5: What is the relationship between traditional measures and the developed measure of problem solving?
Figure 1-1: Problem solving and the cognitive domains commonly affected by TBI.

Figure 1-2: D'Zurilla & Nezu's (1982, 1990, & 1999) problem-solving model. The model comprises Problem Orientation (problem perception, belief in ability to change, evaluation of problem significance, belief in solvability, cost/benefit of solving problems) and Problem Solving Proper (definition & formation, alternative solutions, decision making, solution implementation/evaluation).
Table 1-1: Measurement categories

Neuropsychological
  Advantages: Based on cognitive concepts; identifies impairment and capacity; strong statistical properties and norms; standardized.
  Disadvantages: Laboratory setting; measures cognitive components, often without the cognitive process; does not reflect real-life functioning; does not take into consideration environmental demands, compensatory strategies, or family support; requires a licensed psychologist/neuropsychologist to interpret results.

Simulation
  Advantages: Measures the cognitive process; frequently standardized; may be administered by an unlicensed professional.
  Disadvantages: Capacity may be task specific; may require labor-intensive coding of videotaped behavioral records; may not be standardized; statistical properties and norms may not be strong; test-retest may be poor.

Observation
  Advantages: Reflects environmental demands and compensation; may use a professional/self/proxy report; may have strong statistical properties.
  Disadvantages: May not be based on cognitive concepts; impairment and capacity may not be clear due to confounding variables; vulnerable to observer bias.
Table 1-2: Review of current problem solving measures

Authors | Type of study | Problem solving measures
(Bamdad, Ryan, & Warden, 2003) | Measure development | Test of Functional Executive Abilities (TOFEA)
(Cazalis et al., 2006) | Functional magnetic resonance imaging | Tower of London
(Chan, Chen, Cheung, Chen, & Cheung, 2004) | Comparative study between TBI, schizophrenia, and normal participants | Tower of Hanoi
(Gordon, Cantor, Ashman, & Brown, 2006) | Theoretical article | Behavioral Assessment of Dysexecutive Syndrome (BADS); Frontal Systems Behavior Scale (FrSBe); Problem Solving Inventory (PSI); Functional Independence Measure (FIM)
(Greve et al., 2002) | Psychometrics of the WCST with severe TBI | Wisconsin Card Sorting Test (WCST)
(Hammond, Hart, Bushnik, Corrigan, & Sasser, 2004) | Predictors of change between 1 and 5 years | FIM Cognitive Subscale: problem solving item
(Hewitt, Evans, & Dritschel, 2006) | Intervention (TBI): improve autobiographical memories to improve planning | Brixton Test; Modified Six Elements Test
(Levine, Robertson, et al., 2000) | Intervention (TBI) | Proofreading & grouping; room layout
(Marshall, Karow, Morelli, Iden, & Dixon, 2003) | Measurement development | Rapid Assessment of Problem Solving (RAPS)
(McDonald, Flashman, & Saykin, 2002) | Review | Porteus Maze Test; Tower of London, Hanoi, and Toronto
(Rath, Simon, Langenbahn, Sherr, & Diller, 2000) | Measurement comparisons | PSI; WCST; Wechsler Adult Intelligence Scale-Revised/III Comprehension subtest; Rusk Problem Solving Role Play Test
(Rath et al., 2004) | Intervention | Problem Solving Questionnaire (PSQ); PSI; Problem Solving Role-play Test (PSRPT)
(Turkstra & Flora, 2002) | Intervention | Report writing (a specific vocational task)
Table 1-3: Categories of current problem solving measures

Performance

WAIS-R/III Comprehension (Wechsler, 1997)
  Description: A series of orally presented questions that require the examinee to understand and articulate social rules and concepts or solutions to everyday problems.
  Administrator: Licensed psychologist
  Limitations: Affected by premorbid learning and education (Strong, Donders, & van Dyke, 2005).

Wisconsin Card Sorting Test (WCST) (Heaton, 1981)
  Description: Measures executive functioning such as nonverbal problem solving and learning; more specifically: concept formation, abstraction, set shifting, set maintenance, planning, self-monitoring, and divided attention. The patient sorts cards with symbols and colors on them while the tester provides feedback on whether they are right or not; after the patient gets 10 cards right, the rules change without the patient's knowledge. Scoring can measure categories achieved, perseverative responses, and nonperseverative errors.
  Administrator: Licensed psychologist
  Limitations: Laboratory setting; does not represent real-life functioning (poor ecological validity) (Burgess, 1998); test-retest is poor in individuals with normal memory, even with alternate versions (Bowden, 1998).

Simulation

BADS: Action Program (Wilson, 1996)
  Description: Measures practical problem solving; remove a cork from a tall tube using only the provided materials.
  Administrator: Licensed psychologist
  Limitations: Reliability not reported in the BADS manual (Malloy & Grace, 2005).

BADS: Key Search (Wilson, 1996)
  Description: A test of strategy formation; demonstrate a functional strategy to find lost keys in a field.
  Administrator: Licensed psychologist
  Limitations: No standardized norms.

BADS: Modified Six Elements (Wilson, 1996)
  Description: Measures planning and time management; schedule time to work on six tasks over a ten-minute period.
  Administrator: Licensed psychologist
  Limitations: Measures planning, not problem solving specifically.

BADS: Rule Shift Cards (Wilson, 1996)
  Description: Measures inhibition of old learning; a response pattern is established according to a simple rule, then the rule is changed.
  Administrator: Licensed psychologist
  Limitations: Not much different from the WCST.

BADS: Temporal Judgment (Wilson, 1996)
  Description: Measures the ability to judge how long an event will take, such as a dental appointment.
  Administrator: Licensed psychologist
  Limitations: No standardized norms.

BADS: Zoo Map (Wilson, 1996)
  Description: Measures planning in both an open-ended and a more structured method; plan a route to visit six out of 12 locations in a zoo.
  Administrator: Licensed psychologist
  Limitations: No standardized norms.
Table 1-3 Continued.

Everyday Tasks (Levine, Robertson, et al., 2000)
  Description: Paper-and-pencil tasks that measure goal retention, sub-goal analysis, and monitoring. Proofreading: a paragraph of text with instructions to underline, circle, and cross out words according to a set criterion and within a time limit. Room layout: a grid representing a seating layout for a meeting is given to the participant; questions are asked regarding the relative positions of company employees, such as "What company is just above the B in Row 2?"; each question is more difficult and is scored on how long it took and whether it was correct. Grouping: participants are asked to group individuals based on age and sex (information is on separate sheets of paper) as quickly as they can; each question is scored by how much time is spent reading instructions and completing the task, along with the number of errors.
  Administrator: Two observers score video
  Limitations: Early development; labor-intensive scoring; unknown whether previous experience or educational level affects performance.

Porteus Maze Test (Porteus, 1959)
  Description: Measures planning using a paper-and-pencil test. The test is a series of mazes that become more difficult to complete. The test is not timed and may take hours to complete. Scoring is based on the number of errors.
  Administrator: Licensed psychologist
  Limitations: Questionable generalizability (Lezak, 2004).

Problem-Solving Role-play Test (PSRPT) (Sherr, 1996b)
  Description: Measures objective observer-rated responses of the participant when confronted with face-to-face interpersonal problems in five brief role-playing scenarios. The interaction is videotaped and scored by an independent rater.
  Administrator: Two observers score video
  Limitations: Early development; labor-intensive scoring system.

TOFEA (Bamdad et al., 2003)
  Description: Measures organizing and planning. The patient is asked to plan a vacation and gather information for pre-set criteria.
  Administrator: Two observers score video
  Limitations: Early development; labor-intensive scoring system.
Table 1-3 Continued.

Tower of London / Tower of Hanoi / Tower of Toronto (TOL/TOH/TOT) (Goel, 1995; Saint-Cyr, 1992; Shallice, 1982)
  Description: Measures planning and is considered a brain-teaser puzzle. The participant is presented with rings and three upright sticks, and is then asked to move the rings to a predetermined arrangement. The participant must look ahead to plan the fewest moves necessary to come up with the best solution to the puzzle. The TOH is similar but uses different-size pieces and measures inhibition rather than just planning, while the TOT adds color to the TOH pieces and measures strategy development.
  Administrator: Licensed psychologist
  Limitations: Poor test-retest; the problem to be solved is cued and the rules are given.

Observation

BADS: Dysexecutive Questionnaire (DEX) (Wilson, 1996)
  Description: Measures dysexecutive functioning and includes 20 items measuring emotional, motivational, behavioral, and cognitive changes. Participants rate items 0-4, from never to very often. Examples of items: "I have trouble making decisions, or deciding what I want to do" and "I sometimes get over-excited about things and can be a bit over the top at times."
  Administrator: Self and informant
  Limitations: Does not have standardized norms (Malloy & Grace, 2005).

FIM Problem Solving Item (Granger, Hamilton, Linacre, Heinemann, & Wright, 1993)
  Description: Measures problem solving, which is described as making reasonable, safe, and timely decisions regarding financial, social, and personal affairs, and initiating, sequencing, and self-correcting tasks and activities to solve problems. Ratings indicate the amount of assistance that is needed and range from 1 (complete assistance required) to 7 (total independence). A decision-making tree is available to assist in selecting the correct score for each item.
For example, one arm of the tree for problem solving asks whether the patient needs help to solve complex problems, like managing a checking account or confronting interpersonal problems; based on the answer to this question, another question is selected until the final score is determined. The measure is typically rated by a consensus of professional providers who have directly observed the behavior.
  Administrator: Professional
  Limitations: Only one item out of a total scale; not intended to stand alone. The professional rater rarely observes the patient in the home environment. The total scale has a ceiling effect and is not a reliable measure post discharge from inpatient rehabilitation (Gurka, 1999).
Table 1-3 Continued.

Frontal Systems Behavior Scale (FrSBe) (Grace, 2002)
  Description: Measures frontal lobe functioning (specifically apathy and disinhibition) using a 46-item questionnaire. Participants are asked to rate each item based on their behavior before the injury and now. The rating scale is 1-5, ranging from almost never to almost always. Examples of items: "I am easily angered or irritated," "I have emotional outbursts without good reason," and "Show poor judgment, poor problem solver."
  Administrator: Self and informant
  Limitations: Measures frontal lobe functioning rather than problem solving specifically.

Problem-Solving Inventory (PSI) (Heppner, 1988)
  Description: Measures problem solving behaviors and attitudes toward problem solving. The questionnaire has 32 items with a 6-point rating scale. There are three dimensions: problem solving confidence, approach-avoidance style, and personal control (control over emotions and behavior while problem solving).
  Administrator: Self
  Limitations: Self-report only; not developed for TBI; measures attitudes toward problem solving.

Problem-Solving Questionnaire (PSQ) (Sherr, 1996a)
  Description: Measures emotional self-regulation and logical thinking based on D'Zurilla and Nezu's (1982, 2001) model. The questionnaire has 34 items; with three possible everyday problems presented, the participant is asked to think about how they solved these problems over the past two weeks and rate each question on a 6-point scale, from rarely at all to more than once a day. Items are worded negatively and positively. The two dimensions are self-regulation and clear thinking. Example items: "Having emotional reactions that are out of proportion to situations, such as crying easily or yelling over minor problems," and "Starting to act on a possible solution to a problem without first thinking about if it will work."
  Administrator: Self
  Limitations: Self-report only; measures attitudes toward problem solving.
CHAPTER 2
PSYCHOMETRICS OF THE DEVELOPED PROBLEM SOLVING ITEMS

The Devastation of Traumatic Brain Injury (TBI)

A traumatic brain injury (TBI) results from a blow to the head that disrupts the normal functioning of the brain, ranging from mild to severe. Disruptions may be subtle and transient, or may result in permanent disability. An estimated 1.4 million traumatic brain injuries occur annually, resulting in an estimated 5.3 million United States citizens living with a disability due to TBI (Langlois, 2006). Falls (28%) are the most common cause of TBI, followed by car accidents (20%). On the military front, estimates range up to 22% of the 20,000 wounded in action during Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) having sustained a TBI (Okie, 2005). The typical individual with a TBI is characterized as being male (1.5 times more often than female) and 15-24 years of age (Langlois et al., 2003). Costs of the long-term impairments and disabilities associated with TBI are estimated at 56.3 billion dollars annually (Thurman & Guerrero, 1999), while the full human cost is incalculable (McMahon, West, Shaw, Waid-Ebbs, & Belongia, 2005).

The disabilities that result from TBI include a wide range of physical, cognitive, and emotional deficits. The physical deficits are easy to see; a wheelchair or assistive device alerts society that the individual has been changed as the result of an injury. However, cognitive deficits are not as evident, and individuals who have slow processing or short-term memory deficits may be judged as being slow, intellectually impaired, or unmotivated. While society has made many gains in adapting the physical environment for individuals with physical disabilities, cognitive deficits pose more complex challenges. Cognitive deficits such as short-term memory, attention, processing speed, social communication, and executive functioning are the most persistent of the deficits following TBI.
In fact, improving memory and problem solving was
cited as one of the most frequent unmet needs one year after injury by individuals who were initially hospitalized for a TBI (Corrigan et al., 2004). Cognitive deficits such as problem solving have the most profound impact on an individual's ability to live independently and maintain gainful employment (Warden, 2005). For example, living independently requires that an individual is able to determine what to do when something goes wrong, such as a fire on the stove. Similarly, maintaining employment as a contractor requires an individual to be able to determine what to do if one of his/her tools breaks while building a house. The assessment and treatment of functional problem solving remains one of the most important issues facing rehabilitation of individuals with TBI today (Cicerone, 2000).

Assessment of Functional Problem Solving

Given the very deleterious effects of impaired problem solving on the ability of the individual with TBI to live independently, it is essential that individuals who show impairments in functional problem solving be accurately identified and their deficits in problem solving be precisely measured. Unfortunately, neuropsychological rehabilitation has struggled to create effective measures of functional problem solving in individuals with TBI. The complexity of problem solving has made it a difficult concept to define and measure, since the problem solving process involves several executive functioning domains, such as response to changing contingencies, planning and sequencing, strategy application, and decision-making (Bechara et al., 2003). Traditional neuropsychological tests measure the ability level of each executive domain but do not assess the individual's ability to synthesize skills from different domains. Additionally, the testing situation provides a cue that a problem needs to be solved.
Therefore, traditional testing will not reflect an individual's ability to function in settings that are unstructured and where he/she has a variety of problems to solve (Burgess et al., 1998).
Functional tests rate the individual's ability based on observations in real-life situations. The advantages of rating ability in real life versus laboratory settings include the influence of the variety of cues that may exist in the different settings in which the individual functions (well-learned and novel tasks, support of family, and the use of compensatory strategies). Additionally, the complete process of problem solving can be rated, rather than the individual cognitive components within problem solving that are measured by traditional neuropsychological tests such as the Wisconsin Card Sorting Task (WCST) (Chaytor & Schmitter-Edgecombe, 2003). One of the most commonly used functional tests in TBI is the Functional Independence Measure (FIM) (Hall, Bushnik, Lakisic-Kazazic, Wright, & Cantagallo, 2001; Hawley, Taylor, Hellawell, & Pentland, 1999). Even though the FIM has advantages over traditional neuropsychological tests, there remain significant limitations. The FIM has only one item (with a seven-point rating scale) that addresses problem solving, and it uses the professional as the rater in an inpatient rehabilitation setting. Consequently, the FIM has a ceiling effect (failure to measure higher ability levels). That is, the higher ability levels of individuals within the community following discharge from an inpatient hospital are unlikely to be effectively measured by the problem solving item of the FIM. For this reason, the Functional Assessment Measure (FAM) items were added to the FIM.
The FAM items are more representative of behaviors that occur in the community instead of the hospital setting, such as the item Safety Judgment, which reads: "Includes orientation to one's situation, awareness of one's deficits and their implications, ability to plan ahead, ability to understand the nature of situations involving potential danger and to identify risks involved, freedom from impulsivity, ability to remember safety related information, and ability to respond appropriately if danger arises" (Wright, 2000). This item expands the construct of problem solving to include dangerous situations an individual
may encounter in the community that require timely problem solving. The FAM improved the ceiling effects of the FIM slightly; however, there continues to be difficulty in discriminating higher ability levels (Hall, High, Wright, Kreutzer, & Wood, 1996). Additionally, the FIM/FAM requires a significant amount of training to administer reliably, and the rehabilitation professional is often unable to observe the patient's performance in the home and community and must rely on patient and/or family report. While the FIM/FAM offers advantages over traditional testing, its limitations include not measuring higher abilities, the amount of training needed to administer, and the inability of trained professionals to observe daily performance in the community. These limitations suggest a need for a better measure of problem solving for individuals diagnosed with TBI.

The following study was conducted to develop a theoretically based measure of everyday problem solving in individuals with TBI. We hypothesized that 1) the developed problem solving items would load onto the four constructs of problem solving skills from a theoretical model of problem solving; 2) the item difficulty hierarchy would match the hypothesized hierarchy; and 3) the caregiver would produce the most psychometrically sound ratings.

Methods

Research Participants

Participants included individuals diagnosed with severe TBI (n=80) and their caregivers (n=80) (Table 2-2). Caregivers were defined as a friend or family member who observes the individual's behavior at least twice a week. Half of the participants diagnosed with TBI were attending outpatient therapy within the first year of recovery (n=40), and half were at least one year post injury (n=40).
Participants were recruited from three rehabilitation centers in the southeastern region of the United States, using the following inclusion criteria: 1) a diagnosis of severe TBI, with the individual currently either in outpatient therapy (less than one year post injury) or more
than one year post injury; 2) 18 to 85 years of age; 3) no previous diagnosis of schizophrenia or psychotic disorder; 4) no prior diagnosis of mental retardation; and 5) English as the first language.

Item Development

To capture problem solving ability, items were generated to reflect each step of the Problem Solving Proper arm of D'Zurilla & Nezu's problem solving model (Table 2-1). The individuals diagnosed with TBI and their caregivers were chosen as raters of the items, since they observe the behavior in the community and home settings, while the professional does not. Therefore, the items were written to facilitate identifying skills that had been performed in the past two weeks. The original items generated by the research team were reviewed by focus groups consisting of individuals diagnosed with TBI, caregivers, and professionals who work with patients diagnosed with TBI. Items were added or changed based on the focus group feedback, resulting in the final 20 items listed in Table 2-1. The items were then administered to 80 participants and their corresponding caregivers. The 20 items were embedded in a questionnaire of 226 items from a larger National Institutes of Health study. Participants were asked to circle one of the following ratings based on the individual patient's behavior over the past two weeks: Never, Sometimes, Often, Always, and Not Applicable (N/A). A rating of N/A was chosen if the rater had not observed that behavior in the past two weeks. The total length of testing within the larger study was approximately three hours for individuals diagnosed with TBI, and one to two hours for caregivers.

Data Analysis

Factor analyses were conducted to determine whether the developed items fit D'Zurilla and Nezu's four-factor model of problem solving and to determine the relationship of the items
to each other. First, a confirmatory factor analysis (CFA) using MPlus (version 4.1) was conducted to determine the goodness of fit of the items to the four-factor problem solving model, followed by a CFA to determine the goodness of fit to a one-factor (unidimensional) model of problem solving. Next, a principal components analysis (PCA) using SAS (version 9.1) was conducted to explore the relationships of items for future studies. Finally, an item analysis was conducted using Rasch analysis (WinSteps version 3.57.2) to determine the item-difficulty hierarchy structure and item-level psychometrics for both the individual and their caregiver.

Prior to analysis, the following issues with the data were addressed: 1) items that were rated not applicable were considered missing data and were imputed using SAS version 9.1, prior to conducting only the CFA (patient-rated items had 141 missing data points and caregiver-rated items had 254 missing data points); 2) the item "Plans a short trip using public transportation" was deleted (prior to all the analyses and missing data calculations), since the response rate from both the patient and caregiver groups was less than 50%; 3) items that were worded negatively were reverse-scored for both the PCA and the item analysis; and 4) the 80 patients and 80 caregivers were analyzed separately to reduce additional variance that may have resulted from different raters.

The following criteria were used to determine goodness of fit to the four- and one-factor models: 1) a chi-square p-value > .05, indicating good fit; 2) the comparative fit index (CFI) and Tucker-Lewis Index (TLI), the closer to 1.0, the better the fit; 3) root mean square error of approximation (RMSEA) < 0.06; and 4) weighted root mean square residual (WRMR) < 0.1 (Brown, 2006).

The PCA was conducted to explore the relationship of the items to possible components (factors). The following criteria were used to retain and interpret factors: 1) factors with
38 eigenvalues > 1, were retained; 2) factors that did not account for more than 10% of the total variance were not retained; 3) f actors that had less than 4 items loading at least > .30 were not retained; and 4) factors that we re clinically interpretable were retained (Brown, 2008; Portney, 2008). Items that load highly onto more than one f actor or had small loadings on all factors were considered to be factorial complex (not explai ned by factors) and were closely examined for possible revision or future elimination (Bro wn, 2006). The PCA results provided additional information over the confirmatory factor analys es about the relationship of the items to the concept of problem solving ability. A Rasch analysis was conducted to determ ine the psychometrics of the patient and caregiver ratings of the 19 items. The Rasch analysis model is an item response theory, 1parameter model that calculates the probability of a person with a certain ability level of performing on an item of a calcu lated difficulty level. The mathematical equation is: Log [Pnik/Pni(k-1)] = BnDi Fk (where: Pnik = probability of person n being rated at step k on domain i; Pnik-1 = probability of person n being rated at step k-1 on domain i; Bn = ability of person n; Di = difficulty of item i; and Fk = diffi culty making the transition from rating step k-1 to k). For example an individual with high ab ility has a high probability of performing well on easier items, while an individual with low abil ity has a high probability of performing poorly on difficult items (Lincacre, 2002). The analysis provide s information on: 1) the performance of the four rating options (never, sometimes, often and always); 2) how well items fit the Rasch model; 3) the hierarchy of the items from easy to di fficult, and 4) how well the measure separates persons of different abilities; a nd 5) the match of the item difficu lty to the person ability. 
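As an illustration of how category probabilities follow from cumulating the log-odds expression above, consider this minimal sketch of the rating scale model. It is not the WinSteps implementation, and the ability, difficulty, and threshold values are hypothetical:

```python
import math

def category_probabilities(b, d, thresholds):
    """Rating scale model probabilities for one person-item pair.

    b          -- person ability B_n (logits)
    d          -- item difficulty D_i (logits)
    thresholds -- step difficulties [F_1, ..., F_m] (logits)

    Returns P(rating = k) for k = 0..m, following
    log(P_nik / P_ni(k-1)) = B_n - D_i - F_k.
    """
    # Cumulative sums of (b - d - F_k); the exponent for category 0 is 0.
    exponents = [0.0]
    for f in thresholds:
        exponents.append(exponents[-1] + (b - d - f))
    denom = sum(math.exp(e) for e in exponents)
    return [math.exp(e) / denom for e in exponents]

# A more able person (b = 2) is more likely to receive the top rating
# on an easy item (d = -1) than a less able person (b = -2).
high_ability = category_probabilities(2.0, -1.0, [-1.0, 0.0, 1.0])
low_ability = category_probabilities(-2.0, -1.0, [-1.0, 0.0, 1.0])
print(high_ability[-1] > low_ability[-1])  # True
```

The sketch reproduces the behavior described in the text: as ability rises relative to item difficulty, probability mass shifts from the lower rating categories toward the higher ones.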
First, the performance of the rating options never, sometimes, often, and always was examined using Linacre's three essential criteria for rating scales (Linacre, 2002). The criteria are: 1) there must be ten responses to each option; 2) category measures advance (e.g., the average measure for rating category two is greater than the average measure for rating category one); and 3) each rating category performs close to mean randomness (mean square outfit statistic < 2.0).

Next, examination of the infit statistics provided information about whether an individual item fits the Rasch model. Items with an infit mean square (MnSq) > 1.4 reflect greater than 40% more variance than predicted by the Rasch model (Bond & Fox, 2001). Items with an infit greater than 1.4 were suspect and considered for possible revision or elimination. Third, the hypothesized item difficulty hierarchy was compared to the empirically derived item difficulty hierarchy produced by the Rasch analysis. Fourth, how well the items separate individuals into different categories of ability (such as poor, average, and good ability) was determined by calculating the number of strata. The formula used to calculate the strata of the sample was H_p = (4G_p + 1)/3, where G_p represents the person separation value provided by the WinSteps output (Wright & Masters, 1982, p. 106). A strata value of at least two is desired to separate the sample into two ability levels. Finally, the match of the difficulty of the items to the ability of the persons was examined by visually inspecting the WinSteps person-item map. In the desired distribution, the person and item means are within two standard deviations of each other, with no ceiling or floor effects (Velozo, Choi, Zylstra, & Santopoalo, 2006).

Results

The demographic and clinical characteristics of the study participants are presented in Table 2-2. The average age is 34 years for both outpatient and 1-year post participants diagnosed with TBI, while that of the caregivers is 49. Individuals diagnosed with TBI have less education than caregivers, and individuals diagnosed with TBI who are at the 1-year-or-more post phase of recovery have more education than those in the outpatient phase of recovery. The sample is predominately White, with 17.5% or fewer African American participants.

Model Fit

Confirmatory factor analysis

The results of the confirmatory factor analyses indicated that the 19 items did not fit the four-factor hypothesized model or the one-factor model for either patient or caregiver rated items. The chi-square p-value and the other fit indices, the comparative fit index (CFI), Tucker-Lewis Index (TLI), root mean square error of approximation (RMSEA), and weighted root mean square residual (WRMR), did not reach the criterion levels for either the four- or one-factor models (criteria are indicated in the left column of Table 2-3).

Principal component analysis

To provide further information on the dimensions of the items, a principal components analysis (PCA) was conducted on both the patient and caregiver rated items. Five factors had eigenvalues greater than one for both patient and caregiver data (Table 2-4). However, only two factors explained at least ten percent of the variance for patient and caregiver rated items (see shaded cells in Table 2-4) (Portney & Watkins, 2008). Therefore, only two factors were retained to evaluate the relationship of the items.

Tables 2-5 and 2-6 present the factor loadings for patient and caregiver ratings, respectively (factor loadings > .30 are shaded gray). The majority of the patient rated items (16) load onto Factor 1, while 13 of the caregiver rated items load onto Factor 1. However, six of the patient items and eleven caregiver items load onto Factor 2. Of these, three of the patient items and five of the caregiver items are factorially complex (loading onto more than one factor). While the CFA did not support a one-factor solution, the PCA shows that a majority of the items load onto the first factor for patient rated items and to a lesser degree for caregiver
rated items. Thus, for the purposes of this manuscript, all further analyses treated the 19 items as a single factor.

Item Level Psychometrics

Rating scale analysis

The rating scale (1 = never, 2 = sometimes, 3 = often, and 4 = always) was evaluated according to Linacre's three essential criteria: 1) at least 10 responses to each category rating; 2) the average measure of each category increases incrementally; and 3) each category has a mean square (MnSq) outfit < 2.0. Both the patient and caregiver rating scales met all three criteria. The first criterion was met, with at least 247 patient responses per category and at least 78 caregiver responses per category (Table 2-7). The second criterion was met, as the average measures for rating categories one through four increased incrementally (Table 2-7). The third and final criterion (outfit MnSq < 2.0) indicates that each rating-scale category performs close to mean randomness. The four rating categories ranged in MnSq outfit from .83 to 1.11 for patients and from .81 to 1.32 for caregivers, well below the 2.0 criterion.

Fit statistics

Fifteen of 18 items for the patients and 13 of 18 items for the caregivers fit the Rasch model (infit/outfit MnSq < 1.4, standardized Z < 2.0) (Tables 2-8 and 2-9). Also of note, two of the patient rated items and one of the caregiver rated items with high infit also had very low point-measure correlations with the other items (< |.15|) (Tables 2-8 and 2-9). While one of the patient rated items, "makes errors," has an acceptable infit and outfit, it shows a point-measure correlation of .00. Two items that share a high infit for both patient and caregiver rated items are "problems managing money" and "overly trusting."
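The three rating-scale criteria can be expressed as a simple check. The sketch below (a hypothetical helper, not part of WinSteps) applies them to the patient category statistics reported in Table 2-7:

```python
def rating_scale_ok(categories):
    """Check Linacre's (2002) three essential rating-scale criteria:
    at least 10 observations per category, monotonically advancing
    average measures, and outfit mean squares below 2.0.

    categories -- dicts with 'count', 'avg_measure', 'outfit_mnsq',
                  ordered from the lowest to the highest category.
    """
    enough_obs = all(c['count'] >= 10 for c in categories)
    advancing = all(categories[k]['avg_measure'] > categories[k - 1]['avg_measure']
                    for k in range(1, len(categories)))
    acceptable_fit = all(c['outfit_mnsq'] < 2.0 for c in categories)
    return enough_obs and advancing and acceptable_fit

# Patient category statistics from Table 2-7 (never through always):
patient = [
    {'count': 249, 'avg_measure': -1.15, 'outfit_mnsq': 1.11},
    {'count': 430, 'avg_measure': 0.36, 'outfit_mnsq': 0.93},
    {'count': 306, 'avg_measure': 0.64, 'outfit_mnsq': 1.02},
    {'count': 403, 'avg_measure': 1.40, 'outfit_mnsq': 1.03},
]
print(rating_scale_ok(patient))  # True
```

All three criteria pass for the patient data, matching the conclusion reported in the text.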
Item difficulty hierarchy

The hypothesized item difficulty hierarchy is presented in Figure 3-1, and the empirical item difficulty hierarchies for patient rated items and caregiver rated items are presented in Figures 3-2 and 3-3, respectively. The people are depicted by X on the left side of each figure, while the items are listed on the right side, with less able people and easier items at the bottom and more able people and more difficult items at the top.

The easiest items in both the patient and caregiver rated hierarchies are "follows safety rules" and "notices warning light." The most difficult items differ between the patient and caregiver rated hierarchies: "gives up first attempt" and "problems managing money" are the most difficult items in the patient rated hierarchy, while "tries a different approach," "completes a complex task," and "overly trusting" are the most difficult items in the caregiver rated hierarchy.

These two empirically derived hierarchies were compared to the hierarchy hypothesized prior to analysis (Figure 3-1). Easier items such as "notices a warning light" and "follows safety rules" are at the bottom of each of the hierarchies. Items such as "problems managing money" are at the top of the hierarchy for the patient rated items and the hypothesized hierarchy, but not the caregiver rated hierarchy. The item "overly trusting" was hypothesized to be a lower problem solving ability item, yet it falls toward the top of both the patient and caregiver hierarchies.

Person separation

The items are effective in separating individuals into different ability levels. Person separation for the patient rated items is 1.68, while person separation for the caregiver rated items is 2.26. The resulting strata of person separation are 3.57 for the patient rated measure and 4.35 for the caregiver rated measure.
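As a worked illustration of the strata formula from the Data Analysis section, the sketch below computes the number of statistically distinct ability strata; the separation value used is hypothetical, not one of the study's estimates:

```python
def strata(separation):
    """Number of statistically distinct ability strata a measure can
    support, H = (4G + 1) / 3 (Wright & Masters, 1982), where G is the
    person separation index reported by the Rasch software."""
    return (4 * separation + 1) / 3

# A hypothetical separation of 2.0 supports three distinct strata:
print(strata(2.0))  # 3.0
```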
Match of item difficulty to person ability

The distribution of person ability is depicted on the left in Figures 3-2 and 3-3. The mean of the person ability is the M to the left of the vertical line, while the mean of the item difficulty is the M to the right of the line. One and two standard deviations are represented by S and T, respectively, to the left of the vertical line for person ability and to the right of the vertical line for item difficulty. For both patients and caregivers, the item difficulty means are below the person ability means but within two standard deviations of each other. There are no ceiling or floor effects for either the patient or caregiver rated measure.

Discussion

Aim and Summary of Results

The aim of this study was to create a measure of everyday problem solving skills based on a theoretical model. In summary, while the CFA could not support a four-factor or single-factor model, the PCA lends some support to the use of a single-factor model. The items show a predictable rating scale structure, and the majority of items showed good fit statistics. While the item difficulty hierarchy only partially supported the existing theoretical models, the 19 items were successful in separating participants into several levels of ability. These initial findings have implications for problem-solving theory development and measurement.

Model Fit

The factor analytic results of this study bring into question the use of a problem solving model that has been cited in the TBI literature for over 20 years. The four steps (dimensions) of the problem solving proper aspect of D'Zurilla and Nezu's problem solving model (definition and formulation, alternative solutions, decision making, and solution implementation) could not be confirmed using CFA of our newly developed instrument. Similarly, Maydeu-Olivares and D'Zurilla (1996) were unable to confirm the four dimensions of the original problem solving model using confirmatory factor analysis of the SPSI when administered to college students. Instead, based on exploratory analysis, they concluded that the four steps fell under a single factor. Unfortunately, in the present study, we were also unable to confirm a single-factor solution using CFA.

There could be a number of possible reasons for not confirming Maydeu-Olivares and D'Zurilla's single-factor results. First and foremost, the present study did not have adequate power for CFA; that is, the sample size for the present analysis was not adequate for factor analysis. Gorsuch (1983) recommends at least five subjects per item when the factor loadings are high. This may explain why neither the four-factor nor the one-factor model could be confirmed. The power limitation will be discussed further below. Another possible reason that our results are inconsistent with those of Maydeu-Olivares and D'Zurilla is that our instrument may have had different dimensions than did theirs. Maydeu-Olivares and D'Zurilla's instrument by design included all of the components of the original model, while our instrument was designed to capture only the four steps of the problem solving proper aspect of the model. Furthermore, while Maydeu-Olivares and D'Zurilla found that items representing the four steps fell under a single factor, this was based on an exploratory factor analysis, and they did not go further to confirm the single factor. Finally, differences in results may be due to the different populations analyzed. Maydeu-Olivares and D'Zurilla tested college students, while our study focused on the TBI population. Individuals diagnosed with TBI may not demonstrate the pattern of problem solving skills demonstrated by college students. For example, individuals diagnosed with TBI frequently have problems with memory that few college students experience. This could lead to individuals with TBI forgetting the solution they chose to implement if distracted during a task, creating a different pattern of challenges than typically experienced by a college student.

While we were unable to confirm the four- or one-factor model,
we were able to examine the relationships of the items using a PCA. We expected the steps of the problem solving model to load heavily onto one factor versus another. There is some evidence that this may be happening in the patient items: three items that address alternative solutions load onto the first factor, and four items that address decision making load onto Factor 2. While this provides some support for D'Zurilla and Nezu's problem solving model, it should be noted that no clear pattern existed for the caregivers. Finally, while there may be little evidence of the items loading onto the steps of the model, the majority of the items for both patient and caregiver did load onto the first factor. Therefore, the most parsimonious solution in the PCA was a one-factor model.

Item Analysis

Based on the assumption that the items represent a single construct, all the items were included in the Rasch analysis. Overall, both the patient and caregiver rated items demonstrated reasonable measurement qualities on the aspects evaluated. The rating scale of never, sometimes, often, and always performed well within Linacre's three essential criteria: each category had at least 10 responses, the average category measures increased incrementally, and each category performed close to mean randomness. While the rating categories performed in the expected manner, of particular concern was the number of missing ratings. The N/A rating was used 254 times by caregivers and 141 times by patients out of 1,520 potential responses. Both the patients and caregivers rated the following items N/A most frequently: "problems managing money"; "goes between written instructions and task"; "makes errors when solving problems"; and "completes complex tasks." Similarly, the item "Plans a short trip using public transportation" was dropped prior to analysis due to a response rate of less than 50%, indicating that perhaps the sample did not engage in this activity either. Possible reasons for not engaging in these activities may be that opportunities to do them were not available, or that individuals with cognitive deficits were not allowed to do certain activities, such as using public transportation, because of the potential dangers involved.

Fit Statistics

Following the rating scale analysis, item fit statistics were determined. The majority of the items performed well within the Rasch model, with 84% of the patient items and 74% of the caregiver items fitting the model. Two items that misfit for both patient and caregiver rated items were "problems managing money" and "overly trusting." A high fit statistic may occur for the following reasons. First, the item may not be worded clearly; for example, "overly trusting" may be interpreted by some raters as a desired behavior while others recognize it as an undesirable behavior. Second, items may be interpreted differently depending on environmental supports. That is, environmental supports may be in place that make an item easier than expected for some patients with low problem solving ability, such as having someone else manage paying the bills and only allowing the individual spending money for lunch, or always having a caretaker present to prevent the patient from being vulnerable to perpetrators. Third, the item may not represent the same construct as the other items. For example, the item "jumps to a solution" may misfit for the patient rated items, since it loads more onto Factor 2 than onto Factor 1. In summary, while the majority of the items fit the Rasch model, those that did not may be considered for modification, elimination, or treatment as reflecting a subconstruct of problem solving.

Of particular interest in the present study are the item calibrations generated by the Rasch analysis. That is, the analysis arranged the items into a difficulty hierarchy. In general, both the patient and caregiver empirically derived hierarchies reflected the hypothesized hierarchy.
For example, items such as "notices a warning light" appeared at the bottom of the hierarchy (easier items) for both patient and caregiver, while items such as "problems managing money" appear at the top (difficult items) of the hierarchy for patients, and "completes complex tasks" at the top for the caregivers. However, some items did not appear in the empirically derived hierarchies as anticipated. For example, "problems managing money" in the caregiver hierarchy appeared easier than expected, and "overly trusting" in both the patient and caregiver hierarchies appeared more difficult than expected. These empirical findings provide initial support for modifying our thinking and theories about problem solving.

Overall Measurement Qualities of the Instrument

Several psychometric findings support future research and development of our newly developed problem solving measure. First, person separation reliability (which is analogous to Cronbach's alpha) for patient and caregiver ratings approached acceptable levels (.68 and .84, respectively). Second, the instrument was successful in discriminating participants into different levels of problem solving: the patient items separated participants into three statistical strata of problem solving ability, while the caregiver items separated them into four. Finally, the categories of person ability were fairly well matched with the levels of item difficulty. There were no ceiling or floor effects, and the person ability means, for both patient self-ratings and caregiver ratings, were within two standard deviations of the item difficulty means.

Limitations

With respect to all of the above findings, the following limitations of this study should be taken into consideration. First, the sample size was small. The recommended minimum number of subjects needed to conduct a factor analysis such as CFA or PCA is five subjects per item when the factor loadings are high and there are many items for each factor (Gorsuch, 1983). The sample size might also be considered small for a Rasch analysis: Wang and Wang (2005) recommend at least 100 subjects for stable calibrations with a 20-item instrument.
A second limitation is that much of the sample selected not applicable for four of the items hypothesized to be more difficult, which created missing data for those items. While the missing data were imputed for the CFA, they were not for the other analyses; thus, information about these activities (hypothesized to be more complex activities) was reduced. Finally, since the results of the CFA did not support a one-factor model and the PCA suggests that the instrument is multidimensional, the Rasch model (which assumes unidimensionality) might not be the appropriate model for these data. A multidimensional Rasch model or a multidimensional item response theory model may be more appropriate.

Conclusions

In conclusion, we have taken a complex cognitive process, problem solving, and generated a measure with reasonable measurement characteristics. This study is the first attempt to create a measure of everyday problem solving specifically for individuals diagnosed with TBI. While the present findings could not support D'Zurilla and Nezu's model, it should be noted that others have also failed to produce empirical findings to support this model. It is possible that this model is inappropriate for individuals with TBI; that is, the specific cognitive deficits presented by individuals with TBI may preclude the stages of problem solving typically displayed by individuals without TBI. This first attempt at creating a measure of problem solving has resulted in a measure with reasonable psychometric qualities and has opened a path to further defining the aspects of everyday problem solving in TBI.

Future Directions

A multidimensional analysis may result in a more sensitive analysis that would reveal the relationship of the items to several underlying dimensions. Alternatively, we may need to explore other models of problem solving. While the current model has been used for over 20 years to explain the complex concept of problem solving in individuals diagnosed with TBI, explaining it may require a more complex model, or a model that captures the unique patterns of problem solving difficulties that these individuals experience.

Finally, while the measurement characteristics are adequate, we can take steps to improve the psychometrics of this instrument. For example, the misfitting items can be reviewed to determine whether rewording an item or breaking it down into smaller components may result in more individuals rating it. The item "problems managing money," for instance, could be broken down into smaller components, such as "able to make purchases, such as buying lunch," "able to pay monthly bills," and "able to manage a credit card," to create items that better fit individuals with different levels of money management skill. We should also consider eliminating items. For example, the item "overly trusting" had high infit and outfit statistics for both caregiver and patient rated items, which may indicate that the raters are interpreting this item differently. Another method to consider for future studies is differential item functioning (DIF), a statistical approach that identifies differences in the difficulty level of items relative to another factor. For example, DIF may provide further understanding of how individuals with TBI view their own problem solving skills versus how a caregiver views those skills.

This study has provided the initial step in testing the psychometric properties of a measure of everyday problem solving that can be rated by individuals diagnosed with a TBI as well as their caregivers. The concept of problem solving may be inherently multidimensional and require further evaluation using a multidimensional model. Nevertheless, psychometric qualities such as person separation and person-item match are good. While further research is needed, this instrument provides a basis for the empirical study of everyday problem solving skills in individuals diagnosed with TBI.
Table 2-1: Instrument: D'Zurilla's problem solving model and the corresponding problem solving items

Problem Solving Proper (2nd section of the model)

1. Definition & Formulation
   a. Notices when a warning light appears on the dashboard (for example, seat belt, door ajar, emergency brake, engine service).
   b. Initiates a discussion about future needs (for example, asking about financial issues, return home after hospitalization, or resuming work or school).
   c. Plans a short trip using public transportation (for example, bus or subway).
   d. Overreacts to frustrating situations (for example, a tool does not work, someone takes a parking place).
2. Alternative Solutions
   a. Comes up with an alternative solution when the first solution does not work (for example, when a drain cleaner does not work, calls a plumber).
   b. Suggests or attempts a solution to a problem.
   c. Tries a different approach to a problem when the first one does not work.
   d. Gives up if first attempt to solve a problem is not successful.
3. Decision Making
   a. Follows safety rules (for example, locks wheelchair brakes when stopped, not opening doors to strangers, looks both ways before crossing the street).
   b. Able to make a quick, simple decision (for example, where to go to dinner).
   c. Allows others to solve problems for them when they could have done it themselves.
   d. Jumps to a solution when attempting to solve a problem.
   e. Is overly trusting (does not recognize when being taken advantage of).
   f. Has problems managing money (for example, tries to make a purchase without enough money, overdrawing a checking account, running up credit cards).
4. Solution Implementation/Evaluation
   a. Goes back and forth between reading instructions and doing a task (for example, reading a recipe while cooking, looking at a manual to repair a car, looking at instructions to put together a new purchase).
   b. Completes a complex task that has several steps (for example, cooking a complete dinner, doing a house repair, or building something).
   c. Tells someone or takes action when something goes wrong (for example, water on the floor, shoes untied, or a pot boiling over).
   d. Seeks help when needed.
   e. Makes reasonable attempts to solve problems before asking for help.
   f. Makes errors when solving a problem that has several steps (for example, cooking/following a recipe, shopping, car maintenance).
Table 2-2: Subject demographics

                        Outpatient                           1-year Post
                        TBI (n = 40)   Caregiver (n = 40)    TBI (n = 40)   Caregiver (n = 40)
Age (mean ± SD)         33.9 ± 13.5    49.5 ± 13.6           33.9 ± 13.5    49 ± 15.8
Gender: Female          32.5% (13)     85% (34)              32.5% (13)     72.5% (29)
Gender: Male            67.5% (27)     15% (6)               67.5% (27)     27.5% (11)
White (Non-Hispanic)    72.5%          72.5%                 87.5%          92.5%
African American        17.5%          17.5%                 5%             7.5%
Hispanic American       7.5%           7.5%                  2.5%           0%
Other                   2.5%           2.5%                  5%             0%
Table 2-3: Confirmatory factor analysis results

                                       Four-factor model           One-factor model
Fit index (criterion)                  80 patients  80 caregivers  80 patients  80 caregivers
Chi-square of model fit                112.71       102.41         112.56       103.73
Degrees of freedom                     32           35             32           35
P-value (p > .05)                      0.000        0.000          0.000        0.000
CFI (the closer to 1.0, the better)    0.80         0.84           0.80         0.84
TLI (the closer to 1.0, the better)    0.82         0.89           0.82         0.89
RMSEA (< .06)                          0.18         0.16           0.18         0.16
WRMR (< 0.1)                           1.45         1.07           1.48         1.09

Table 2-4: Patient/caregiver variances

          Patient variance                          Caregiver variance
          Eigenvalue  Proportion%  Cumulative%      Eigenvalue  Proportion%  Cumulative%
Factor 1  6.00        32           32                7.53        40           40
Factor 2  2.47        13           45                2.92        15           55
Factor 3  1.77        9            54                1.78        9            64
Factor 4  1.48        8            62                1.04        7            71
Factor 5  1.13        5            68                1.03        5            77

Table 2-5: Patient rated items: Rotated factor pattern

Model  Item                                             Factor 1  Factor 2
Dec    Able to make a quick simple decision             0.78      0.23
Alt    Comes up with an alternative solution            0.78      -0.23
Alt    Suggests or attempts a solution to a problem     0.72      0.11
Alt    Tries a different approach to a problem          0.71      0.19
Sol    Makes reasonable attempt to solve problem        0.70      0.25
Sol    Tells someone to take action                     0.68      0.01
Def    Notices when a warning light appears             0.61      -0.09
Def    Initiates a discussion about future needs        0.57      -0.21
Dec    Follows safety rules                             0.56      0.04
Sol    Breaks a job into smaller parts                  0.54      0.13
Alt    Gives up if first attempt to solve a problem     0.54      0.25
Sol    Goes back and forth between written material     0.53      0.14
Sol    Seeks help                                       0.51      -0.12
Dec    Allows others to make decisions                  -0.10     0.71
Dec    Problems managing money                          0.34      0.70
Dec    Is overly trusting                               -0.22     0.70
Dec    Jumps to solution                                -0.37     0.56
Sol    Makes errors when solving problems               0.36      0.52
Def    Overreacts to frustrating situations             -0.01     0.40
Table 2-6: Caregiver rated items: Rotated factor pattern

Model  Item                                             Factor 1  Factor 2
Dec    Allows others to make decisions                  0.87      0.21
Def    Notices when a warning light appears             0.83      -0.07
Dec    Is overly trusting                               0.74      -0.07
Sol    Makes errors when solving problems               0.70      0.06
Sol    Goes back and forth between written material     0.70      0.13
Dec    Problems managing money                          0.68      0.03
Sol    Breaks a job into smaller parts                  0.65      -0.01
Alt    Gives up if first attempt to solve a problem     0.64      0.48
Sol    Makes reasonable attempt to solve problem        0.53      0.47
Dec    Follows safety rules                             0.52      0.04
Alt    Suggests or attempts a solution to a problem     0.20      0.85
Def    Overreacts to frustrating situations             -0.26     0.82
Def    Initiates a discussion about future needs        0.02      0.75
Sol    Tells someone to take action                     0.35      0.65
Dec    Able to make quick simple decision               0.10      0.64
Alt    Comes up with an alternative solution            0.56      0.58
Alt    Suggests or attempts a solution to a problem     0.12      0.47
Sol    Seeks help                                       -0.34     0.41
Dec    Jumps to solution                                0.13      -0.63

Table 2-7: Summary of category structure

                  Patient category structure            Caregiver category structure
Category label    Count  Average measure  Outfit MnSq   Count  Average measure  Outfit MnSq
1 Never           249    -1.15            1.11          78     -.26             1.32
2 Sometimes       430    .36              .93           325    .12              .88
3 Often           306    .64              1.02          439    .79              .81
4 Always          403    1.40             1.03          391    1.80             1.03
Table 2-8: Patient rated items: Infit statistics
(Person: real separation = 1.47; reliability = .68)

Item                      Measure  Model S.E.  Infit MnSq  Infit ZSTD  Outfit MnSq  Outfit ZSTD  PTMEA corr.
Problems_managing_money   1.79     .20         2.09        4.4         2.28         5.0          -.03
Gives_upfirst_attempt     2.08     .20         1.43        2.1         1.56         2.6          -.18
Overly_trusting           .93      .15         1.53        2.9         1.50         2.7          .30
Other_make_decisions      1.35     .17         1.29        1.6         1.21         1.2          .29
Notice_warn_light         -1.60    .20         1.24        1.1         1.15         .6           .36
Overreacts_to_frustr      1.58     .17         1.05        .4          1.07         .4           .34
Complete_complex_task     -.29     .15         1.00        .0          1.05         .4           .34
Makes_errors              1.42     .18         1.02        .2          .99          .0           .00
Quick_simple_decision     -.79     .14         .98         -.1         .98          -.1          .46
Jumps_to_solutions        .64      .14         .94         -.3         .90          -.6          .50
Initiates_discus_needs    -.67     .14         .93         -.5         .93          -.4          .53
Seeks_help                -.95     .15         .88         -.8         .85          -.9          .48
Goes_btw_written_task     -.26     .16         .85         -1.0        .84          -1.0         .56
Makes_attempt             -.93     .14         .77         -1.7        .84          -1.0         .57
Follows_safety_rules      -1.38    .16         .75         -1.5        .68          -1.7         .59
Tries_diff_approach       -.70     .14         .68         -2.6        .74          -1.9         .58
Tells_some_take_action    -.88     .14         .68         -2.6        .68          -2.2         .67
Suggests_solution         -.60     .14         .68         -2.6        .68          -2.4         .61
Alternative_solution      -.74     .15         .55         -3.7        .56          -3.3         .75

Table 2-9: Caregiver rated items: Infit statistics
(Person: real separation = 2.26; reliability = .84)

Item                      Measure  Model S.E.  Infit MnSq  Infit ZSTD  Outfit MnSq  Outfit ZSTD  PTMEA corr.
Jumps_to_solutions        .36      .16         1.72        3.8         2.03         4.9          .10
Complete_complex_task     .61      .20         1.83        3.5         1.95         3.6          .43
Problems_managing_money   -.32     .21         1.75        3.2         1.49         1.9          .56
Overly_trusting           .60      .17         1.45        2.5         1.52         2.7          .51
Notice_warn_light         -.74     .20         1.42        2.1         1.36         1.4          .49
Initiates_discus_needs    .17      .16         1.34        2.1         1.28         1.7          .54
Other_make_decisions      -.05     .17         .89         -.7         1.11         .7           .51
Quick_simple_decision     .35      .15         .99         .0          .96          -.2          .54
Seeks_help                -.45     .16         .95         -.3         .91          -.4          .50
Overreacts_to_frustr      -.17     .17         .90         -.6         .86          -.7          .54
Goes_btw_written_task     .53      .21         .86         -.7         .81          -.9          .67
Follows_safety_rules      -.67     .18         .84         -.9         .79          -1.0         .61
Makes_attempt             -.08     .16         .82         -1.2        .77          -1.4         .68
Tells_some_take_action    -.75     .17         .77         -1.5        .68          -1.6         .62
Tries_diff_approach       .70      .16         .69         -2.2        .68          -2.2         .66
Suggests_solution         .30      .16         .64         -2.8        .62          -2.6         .70
Alternative_solution      .40      .19         .62         -2.4        .60          -2.3         .75
Gives_upfirst_attempt     -.45     .16         .60         -3.0        .56          -2.9         .67
Makes_errors              -.34     .21         .34         -4.7        .41          -3.3         .63
Figure 3-1: Hypothesized Hierarchy
[Figure: PERSONS MAP OF ITEMS]
[Figure: PERSONS MAP OF ITEMS]
CHAPTER 3
CONCURRENT VALIDITY OF THE 19 DEVELOPED ITEMS

Introduction

Currently there are no gold-standard measures that capture problem solving ability in the real-world setting. The existing measures can be categorized as traditional neuropsychological measures, simulation measures, and observational questionnaires. The limitations of the measures in each of these categories are discussed below.

First, the traditional neuropsychological measures are criticized for lacking ecological validity (Burgess, Alderman et al. 1998; Chaytor and Schmitter-Edgecombe 2003). For example, individuals who test within normal limits on traditional neuropsychological tests conducted in the laboratory setting, such as the Wisconsin Card Sorting Task (WCST) and the Tower of London test, often exhibit problem solving deficits in real-world settings (von Cramon 1992; Levine, Robertson et al. 2000).

Second, simulation tests are often task specific, lack correlation with performance measures, and require work-intensive scoring. The Rusk Problem Solving Roleplay Test is an example of a simulation test: it requires two raters using an elaborate rating system to score videotaped behavior, and it lacked correlation with performance and self-report measures (Rath, Langenbahn et al. 2004).

Third, standardized questionnaires either have only a few items regarding problem solving or use raters who may lack objectivity. Examples of standardized questionnaires used to measure problem solving are the Functional Independence Measure and Functional Assessment Measure (FIM+FAM), the Social Problem Solving Inventory-Revised (SPSI-R), and the Behavior Rating Inventory of Executive Function-Adult (BRIEF-A). The FIM+FAM uses only one item to measure problem solving and a related item called safety judgment. The rater for the FIM+FAM is
typically a professional who rarely observes the patient's behavior in the home, while the SPSI-R uses only the patient as the rater, which is controversial since the patient diagnosed with severe TBI frequently lacks self-awareness and is viewed as an unreliable rater of his/her own behavior (Hart, Whyte, Kim, & Vaccaro, 2005; Prigatano & Altman, 1990; Zasler, 2003). Alternatively, the BRIEF-A measures executive functioning, has only seven questions out of 75 that relate to problem solving, and uses both the caregiver and the patient as raters (Roth, 2005). In summary, the current measures are limited in their ability to elicit the information needed to assess the everyday problem solving skills of individuals diagnosed with TBI. A measure is needed that captures everyday problem solving skills in a standardized, reliable, and easy-to-use manner.

We developed a 19-item problem solving measure for individuals diagnosed with TBI that may resolve many of the current measurement limitations in the following ways. The details of the psychometrics of the 19 developed items can be reviewed in the previous psychometric article. The measure was developed based on a theoretical model of problem solving in order to capture everyday problems. The raters of the items are the patient and the caregiver, the individuals who observe the behavior in the everyday setting. Since two raters are used, psychometrics can be used to determine which rater is more psychometrically sound. Additionally, the 19 items take less than 15 minutes to administer and even less time to score. As a result, the 19-item measure provides a measure specifically for assessing everyday problem solving skills in individuals diagnosed with TBI that can be administered to the patient and caregiver with relative ease.
While the 19-item measure has undergone item-level psychometric analysis, correlations with external measures of everyday problem solving have not been conducted. Therefore, the next step is to compare both the
caregiver and patient measures to other traditional measures of everyday problem solving, in order to confirm that the developed measure is capturing the concept of everyday problem solving.

Methods

Research Participants

Participants included individuals diagnosed with severe TBI (n=80) and their caregivers (n=80) (Table 2-2). Caregivers were defined as a friend or family member who observes the individual's behavior at least twice a week. Half of the participants diagnosed with TBI were attending outpatient therapy within the first year of recovery (n=40), and half were at least one year post injury (n=40). Participants were recruited from three rehabilitation centers in the southeastern region of the United States, using the following inclusion criteria: 1) a diagnosis of severe TBI, currently either in outpatient therapy (less than one year post injury) or more than one year post injury; 2) 18 to 85 years of age; 3) no previous diagnosis of schizophrenia or psychotic disorder; 4) no prior diagnosis of mental retardation; and 5) English as the first language.

Procedures

Individuals diagnosed with TBI, along with their caregivers, were administered a battery of assessments as part of a larger NIH study. All measures were administered either in person or over the phone. The 19 problem solving items were embedded in a larger 228-item questionnaire administered in person along with the BRIEF-A. The FIM problem solving item, the FAM safety judgment item, and the SPSI-R-S were administered over the phone for all but 17 patients and their caregivers. Those 17 patients and caregivers completed all assessments in person. The primary author received inter-rater training prior to administering the FIM+FAM to all participants. Therefore only one rater was used to administer the FIM+FAM for all the
participants. Additionally, decision trees were used during administration of the FIM+FAM to assist the caregiver in selection of the rating (Wright 2000). The patient and caregivers were administered different measures. Both responded to the 228-item questionnaire and the BRIEF-A, while only the patient responded to the SPSI-R-S and only the caregiver responded to the FIM+FAM items.

Measures

The Problem Solving item and the Safety Judgment item from the FIM+FAM, the Social Problem Solving Inventory, and the Behavior Rating Inventory of Executive Function-Adult (BRIEF-A) were used to validate the 19 problem solving items developed in this study (see Appendix A for copies of the measures).

The 19 developed problem solving items, listed in Table 2-1, were developed based on the problem solving proper dimension of D'Zurilla and Nezu's (D'Zurilla 1982; D'Zurilla 1990; D'Zurilla 1999) model of problem solving, which has been cited in the TBI literature for over 20 years. The items were administered to 80 participants and their corresponding caregivers. Participants were asked to circle one of the following ratings based on the individual patient's behavior over the past two weeks: Never, Sometimes, Often, Always, or N/A. A rating of N/A was chosen if the rater had not observed that behavior in the past two weeks.

The developed items resulted in reasonable measurement psychometrics. First, person separation reliability (which is analogous to Cronbach's alpha) for patient and caregiver ratings approached acceptable levels (.68 and .84, respectively). Second, the instrument was successful in discriminating participants into different levels of problem solving. Patient-rated items separated into three statistical strata of problem solving ability, while caregiver-rated items separated into four statistical strata. Finally, the categories of person ability were fairly well matched to the levels of item difficulty.
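The person separation reliability values quoted here are algebraically tied to the separation indices reported with Tables 2-8 and 2-9 (real separation 1.47 and 2.26). Assuming the standard Rasch/Winsteps definitions, the relationship R = G^2 / (1 + G^2) can be checked directly; this sketch is an illustrative consistency check, not part of the study's analysis:

```python
import math

def reliability_from_separation(g: float) -> float:
    """Person separation reliability R = G^2 / (1 + G^2) for separation index G."""
    return g * g / (1.0 + g * g)

def separation_from_reliability(r: float) -> float:
    """Inverse relation: G = sqrt(R / (1 - R))."""
    return math.sqrt(r / (1.0 - r))

# Reported values: patient separation 1.47 / reliability .68;
# caregiver separation 2.26 / reliability .84.
print(round(reliability_from_separation(1.47), 2))  # 0.68
print(round(reliability_from_separation(2.26), 2))  # 0.84
```

Both reported pairs are consistent with this standard relationship, which is one reason the caregiver ratings (higher separation) also show the higher reliability.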
There were no ceiling or floor
effects. Person ability means for both patient self-ratings and caregiver ratings were within two standard deviations of item difficulty means.

The FIM is widely used in rehabilitation. The FAM added 12 items to the FIM and has been used in TBI populations (Hall, 1992; Hall, 1994; Hall, Gordon, & Zasler, 1993; Hall, High, Wright, Kreutzer, & Wood, 1996; Hawley et al., 1999). The entire FIM+FAM measure has 30 items, of which five represent cognitive functioning (Problem Solving, Memory, Attention, Orientation, and Safety Judgment). Of the five cognitive items, two were used in this study. The first item measures problem solving, which is described as "making reasonable, safe and timely decisions regarding financial, social and personal affairs; and initiating, sequencing and self-correcting tasks and activities to solve problems" (Linacre, 1993). The second item is Safety Judgment, which is referred to as "the ability to pursue all activities independently using proper safety awareness skills" (Wright 2000). FIM+FAM items are rated by trained professionals. Ratings indicate the amount of assistance that is needed and range from 1 (complete assistance required) to 7 (total independence). A decision-making tree is available to assist in selecting the correct score for each item. For example, one arm of the tree for problem solving asks, "Does the patient need help to solve complex problems like managing a checking account or confronting interpersonal problems?" Based on the answer to this question, another question is selected, until the final score is determined. An example of one arm of the decision-making tree for Safety Judgment asks, "Would the subject need some degree of supervision with new or complex activities?" The measure is typically rated by a consensus of professional providers who have directly observed the behavior.
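The branching logic of such a decision tree can be sketched in code. The following is a hypothetical miniature, not the actual tree from Wright (2000): the two questions paraphrase the examples quoted above, and the mapping of answers to specific 1-7 ratings is invented here purely for illustration.

```python
# Hypothetical sketch of a FIM-style decision tree for the Problem Solving
# item. The questions paraphrase the examples quoted in the text; the real
# trees (Wright, 2000) have more branches, and the score mapping below is
# invented for illustration, not taken from the instrument.

def rate_problem_solving(needs_help_complex: bool, needs_help_routine: bool) -> int:
    """Walk a tiny two-question tree to a 1-7 FIM-style rating (illustrative)."""
    if not needs_help_complex:
        return 7  # solves even complex problems independently
    if not needs_help_routine:
        return 5  # needs help only with complex problems
    return 3      # needs help even with routine problems

print(rate_problem_solving(needs_help_complex=True, needs_help_routine=False))
```

The point of the tree format is that each answer prunes the remaining rating range, so the rater converges on a single score after a few concrete questions rather than judging the 1-7 scale holistically.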
The overall measurement characteristics of the total FIM+FAM measure are good, with an intercorrelation coefficient reported at .83 in a TBI population (Donaghy & Wass, 1998). Interrater reliability has been reported at 89% when a
trained professional is using a decision-making tree (Wright 2000). Likewise, phone administration has been reported to have little variation from traditional administration (Smith, Illig, Fiedler, Hamilton, & Ottenbacher, 1996). The total scale score correlates with other outcome measures such as the Community Integration Questionnaire, the Sickness Impact Profile, the SF-36, amount of supervision, and return to work (Corrigan, Smith-Knapp, & Granger, 1997; Gurka, 1999; Hall, Gordon, & Zasler, 1993). However, the correlations of individual items with external measures have not been reported.

The next measure used in this study is the Social Problem Solving Inventory-Revised: Short (SPSI-R-S), a questionnaire that was also based on D'Zurilla and Nezu's (D'Zurilla 1982; D'Zurilla 1990; D'Zurilla 1999) model. The 25-item self-report questionnaire has five subscales: Positive Problem Orientation, Negative Problem Orientation, Rational Problem Solving, Impulsivity/Carelessness Style, and Avoidance Style. Only the Rational Problem Solving subscale reflects problem solving ability, as opposed to the problem solving style of the other subscales. Examples of items are: "I feel threatened and afraid when I have an important problem to solve" and "When making decisions, I do not evaluate all my options carefully enough." Items are worded both positively and negatively. There are five response options: not at all true, slightly true, moderately true, very true, and extremely true. While the overall measurement characteristics of the SPSI-R-S are good, there are no reported reliability or external correlation studies using individuals diagnosed with TBI. Standardized scores are divided into three age groups: young adults (17-39), middle-aged (40-55), and elderly (60-80).
The total reliability scores based on a normative sample for each of the age groups were .89, .93, and .88, and the reliability scores of the Rational Problem Solving subscale were .78, .88, and .72 for each of the age groups. The SPSI-R (the long version of the SPSI-R-S) demonstrated concurrent validity with a measure
called the Problem Solving Inventory (Heppner, 1988). The Rational Problem Solving subscale of the SPSI correlated at -.58 with the PSI. The SPSI-R also correlates with other measures of psychological distress and well-being, such as the Beck Depression Inventory, the Brief Symptom Inventory (Derogatis, 1982), and the Caregiver Burden Interview (Zarit, 1980). Correlations with the Rational Problem Solving subscale were -.40 with the BDI (p<.001), -.16 with the BSI (p<.05), and -.26 with the CBI (p<.01).

The Behavior Rating Inventory of Executive Function-Adult version (BRIEF-A) is a 75-item self- and proxy-report questionnaire about behaviors that reflect executive dysfunction. All items are worded negatively; examples of problem solving items are: "I have trouble accepting different ways to solve problems with work, friends, or tasks" and "I make decisions that get me into trouble (legally, financially, socially)." The individual and his/her proxy are asked whether the individual has had problems with the items within the past month by circling: often, sometimes, or never. Lower scores indicate better performance. Both the self and proxy versions of the BRIEF-A contain nine subscales, two sub-scores, and a total score. The subscales are inhibit, shift, emotional control, self-monitor, initiate, working memory, plan/organize, task monitor, and organization of materials. The sub-scores are the Behavior Regulation Index (BRI), which summarizes the first four subscales, and the Metacognition Index (MI), which summarizes the last five subscales. The total score is called the Global Executive Composite (GEC), which is the summary of the BRI and MI sub-scores. Test properties of the BRIEF-A were assessed on healthy adult controls, resulting in reliability ranges from .73 to .90 for the subscales and .93 to .96 for the total scores. Standardized scores are calculated for various age ranges from 18 to 90. The BRIEF-A also correlates with other measures of executive functioning, depression, and anxiety.
For example, the BRIEF-A correlates with the self and proxy reports on the
Dysexecutive Questionnaire at .84 and .87 (Wilson, 1996); with the Frontal Systems Behavior Scale (Grace, 2002) at .67 and .74 (Roth, 2005); with the Beck Depression Inventory-II at .59 (Beck, 1996); and with the State-Trait Anxiety Inventory (Spielberger, 1970) at .30 for state anxiety and .55 for trait anxiety (Roth, 2005). In a small sample, 23 individuals diagnosed with TBI were compared with 23 matched normal participants, and the only subscales to correlate significantly were inhibit (r=0.89, p=.001) and emotional control (r=0.76, p=.02) (Roth, 2005).

Data Analysis

Concurrent validity was assessed using Pearson correlations between the 19 items and the external criterion measures (the BRIEF-A, the SPSI-R, and the FIM Problem Solving and FAM Safety Judgment items). Correlations of .25 to .50 are considered a fair relationship, .50 to .75 a good relationship, and above .75 an excellent relationship (Portney, 2008). The following total scores and subscale scores from the measures were used: 1) the 19 problem solving items' person measures from the WinSteps program for both patient and caregiver, where higher measures indicate better performance; 2) the SPSI-R-S total calculated raw scores and the Rational Problem Solving calculated subscale scores, with a potential of 20 points, where higher scores indicate better performance; 3) the BRIEF-A Global Executive Composite (GEC) total raw score, with a maximum score of 210, where lower scores indicate better performance, since all items are negatively stated; and 4) the FIM problem solving item and FAM safety judgment item caregiver ratings, ranging from 1 (total assistance) to 7 (complete independence).

Results

Sample characteristics are depicted in Table 2-2, and the descriptive statistics of each measure are found in Table 3-1. The sample of individuals diagnosed with TBI had twice the number of males as females. The average age is 34 years for both outpatients and 1-year post
participants diagnosed with TBI, while the average age of the caregivers is 49. Individuals diagnosed with TBI have less education than caregivers. Individuals diagnosed with TBI at the one-year-or-more post phase of recovery have more education than those at the outpatient phase of recovery. The sample is predominately White, with 17.5% or fewer African American participants.

Overall, the descriptive statistics for the caregiver ratings showed more variability than the patient ratings. The means (± SD) of the Rasch person measure, as rated by patient and caregiver, were .23 ± .65 and .89 ± 1.20, respectively (higher measures indicate better performance). The mean ± standard deviation was 113.21 ± 23.54 for the patient-rated BRIEF-A items and 125.71 ± 28.47 for the caregiver (lower scores on the BRIEF-A indicate better performance). The total score of the SPSI-R-S, compared to the Rational Problem Solving subscale, reveals a mean of 14.26 ± 2.42, compared to 12.90 ± 4.11 (higher scores indicate better performance). The caregiver-rated FIM problem solving item had a slightly lower average, 4.95 ± 1.55, than the FAM Safety Judgment item, 5.29 ± 1.54 (lower scores indicate lower performance), with comparable standard deviations (Table 3-1).

Correlations between the 19 developed items and other measures of problem solving and executive functioning are found in Table 3-2. Four different comparisons between the patient- and caregiver-rated items are discussed below. First, the patient-rated 19 items had a significant correlation of .28 with the caregiver-rated items. Alternately, the patient-rated BRIEF-A showed a good correlation (.49) with the caregiver-rated BRIEF-A (Table 3-3). The SPSI-R-S total score and the Rational Problem Solving subscale correlated significantly with the patient-rated items (.29 and .36), but not with the patient-rated BRIEF-A (Table 3-2).
In contrast to the patient ratings, the caregiver ratings correlated with analogous outcome measures in the fair to good range. The Problem Solving item and the Safety Judgment item of the FIM+FAM correlated
significantly (.56 and .46) with the caregiver-rated items. In particular, the correlation between the caregiver-rated items and the caregiver-rated BRIEF-A showed the strongest relationship, with a correlation of -.70 (Table 3-2).

While our main interest is in the concurrent validity of our 19-item problem solving measure, we also explored the correlations across our criterion measures. The SPSI-R-S, its RPS subscale, and the BRIEF-A were all rated by the patient. The patient-rated SPSI-R-S total score showed a significant, good correlation with the SPSI-R-S RPS (.59) and a fair correlation with the BRIEF-A (.43), though the SPSI-R-S RPS showed virtually no correlation with the BRIEF-A (.02). The caregiver-rated FIM problem solving item showed a significant fair correlation with the FAM safety item (.48) and a good correlation with the BRIEF-A (-.58). The caregiver-rated FAM safety item showed a significant weak correlation with the BRIEF-A (-.24).

Discussion

Comparisons between the rater descriptive statistics can only be made across measures that both the patient and caregiver rated, namely the 19 developed items and the BRIEF-A. On the 19 items, the caregivers rated the patients, on average, as being better at problem solving than did the patients themselves (.89 ± 1.20 versus .23 ± .65). Inversely, on the BRIEF-A, the caregivers rated the patients as performing more poorly than did the patients themselves (125.71 ± 28.47 versus 113.21 ± 23.54; lower scores indicate better performance).

The 19 developed items were correlated with four measures to determine concurrent validity. In general, the correlations between assessments rated by the patient were weak or absent, while correlations between assessments rated by the caregiver were moderate. Comparisons between the patient and caregiver ratings of the 19 developed items were surprisingly low, showing only a fair relationship (.28).
We would have expected a stronger relationship, given that both the patient and
caregiver were rating the same behavior. In contrast, the correlation between patient and caregiver ratings of the BRIEF-A showed a good relationship (.49). This relationship was not as strong as we would have expected, but it was better than the relationship between the raters for the 19 developed items.

While not correlating with the BRIEF-A, the patient-rated items did demonstrate a fair correlation with the SPSI-R-S (.29) and the RPS subscale (.36). We anticipated a stronger relationship between the 19 developed items and the RPS subscale, since both represent the concept of problem solving ability. The weaker relationship with the total scale was expected, since the total scale includes attitudes toward problem solving, which are not incorporated into our 19-item measure. In contrast, we expected a relationship between the patient-rated 19 developed items and the BRIEF-A, since the concept of problem solving falls within the executive functioning domain, and since the caregiver-rated items had a good relationship with the BRIEF-A. However, there was virtually a zero correlation between the patient-rated 19 items and the BRIEF-A.

One possible reason the patient-rated ability measures do not have a strong relationship with the BRIEF-A or the SPSI-R total scale is that both contain an additional emotional component. The BRIEF-A contains a subscale titled emotional control, while the SPSI-R total scale measures attitudes toward problem solving and correlates with anxiety and depression measures. Patients diagnosed with severe TBI frequently lack self-awareness (Hart et al., 2005; Prigatano & Altman, 1990; Zasler, 2003). Therefore, perhaps judging emotions is affected by lack of self-awareness more so than judging more objective problem solving abilities.
The caregiver-rated items demonstrated a strong relationship with the BRIEF-A, a good relationship with the problem solving item of the FIM, and a fair relationship with the Safety
Judgment item of the FAM. It is somewhat surprising that the relationship with the BRIEF-A items was stronger than the relationship with either the FIM problem solving or FAM safety judgment items. A relationship was expected with the BRIEF-A, since the concept of problem solving falls within the domain of executive functioning. However, since the 19 developed items were intended to measure functional problem solving, we expected these items to show a stronger relationship to problem solving than to executive functioning.

Comparison of our results with others is limited, since, to our knowledge, the SPSI-R and the BRIEF-A have not been correlated with any measures of problem solving in the TBI population, nor have the individual items of the FIM+FAM been correlated with measures of problem solving. However, a study by Rath and colleagues (Rath et al., 2004) did compare problem solving measures in a TBI population. They administered the Problem Solving Inventory (Heppner, 1988), the Problem Solving Questionnaire (Sherr, 1996b), the Problem Solving Roleplay Test (Sherr, 1996a), and the Wisconsin Card Sort Test (WCST) (Heaton, 1981) to a sample of 61 individuals diagnosed with severe TBI. While the questionnaires (which contain an emotional component) correlated with each other, they did not correlate with the role-playing test or the Wisconsin Card Sort Test. Again, the lack of correlation may be due to the emotional component in the questionnaires, as is reflected in the SPSI-R and the BRIEF-A. Additionally, each of the questionnaires used by Rath and colleagues was a self-report, perhaps confirming our suggestion that the patient has more difficulty rating emotions than abilities.
Furthermore, the social cognitive subscale of the FIM, which contains an item called emotional control, has poor inter-rater reliability compared to the other items of the FIM, which is thought to be due to the possibility that this item is more difficult to observe (Ottenbacher, 1996; Donaghy, 1998).
In short, individuals with TBI may have more difficulty judging their emotions, due to lack of self-awareness.

Limitations

There are several limitations to our study that should be taken into consideration. First, our sample is not representative of most individuals who sustain TBI. Most TBIs occur during the age range of 15-24 years, while the average age of the study sample was 34 years. Individuals diagnosed with TBI in the study sample were also more educated and predominately White. Alternately, caregivers, compared to individuals with TBI, were more often female, older, and more educated.

In addition to sample differences, there are methodological limitations in the study. It is important, if not critical, to note that there are no gold-standard measures against which to relate the 19 developed problem solving items. Each of the criterion measures has limitations. The FIM+FAM items have not been used individually, outside of the cognitive subscale or total scale, to assess patients diagnosed with TBI. The SPSI-R has no reported reliability or external correlations with individuals diagnosed with TBI. There are also challenges in interpreting correlations of our problem solving measure with the BRIEF-A: the BRIEF-A measures the concept of executive functioning, and while the executive functioning domain includes problem solving, the subscales of the BRIEF-A do not include problem solving, and only a small portion of the items relate directly to problem solving abilities. Without a clear gold standard for problem solving, judgments about whether or not our 19-item measure truly captures the concept of problem solving are tenuous.

Another limitation of our methodology is that we did not have both the caregiver and the patient rate all the problem solving measures. This limits our ability to directly compare raters on all measures. Finally, our study can be criticized for using self/proxy reports for our
concurrent validity measures. Self-report by the individual diagnosed with severe TBI has often been criticized due to problems with self-awareness (Hart et al., 2005; Prigatano & Altman, 1990; Zasler, 2003). Family ratings, on the other hand, may be subject to biases (Fleming et al. 1998).

In conclusion, while direct comparisons of self-report and caregiver-proxy concurrent validity cannot be made, caregiver proxy ratings show a stronger relationship with external measures of problem solving and executive functioning than self-report measures. Given the methodological limitations and the lack of a gold standard for problem solving, it would be premature to be definitive about these conclusions. Further research comparing patient and caregiver ratings is warranted to provide a better understanding of these different perceptions of problem solving following TBI.

Future Directions

Further research is needed to ascertain with a better degree of certainty that the 19 items are measuring the concept called problem solving. Collecting responses from both the patient and the caregiver for every measure would improve the ability to compare the patient ratings to the caregiver ratings. Additionally, the inclusion of a performance measure such as the WCST (Heaton, 1981) or a simulation task such as the Zoo Test (Wilson, 1996) would provide external measures that may alleviate concerns that the patients lack self-awareness and are not able to accurately judge their own behavior. The results of future studies including self-reports, proxy raters, and performance measures will provide additional valuable information to extend our understanding of our new measure of problem solving.
Table 3-1: Descriptive statistics of measures

Measure                              Patient mean ± SD (range)          Caregiver mean ± SD (range)
19 Developed Items                   .23 ± .65 (-1.54 to 1.97)          .89 ± 1.20 (-1.39 to 5.59)
BRIEF-A                              113.21 ± 23.54 (74.00 to 171.00)   125.71 ± 28.47 (71.00 to 193.00)
SPSI-R:S Total Raw Score             14.26 ± 2.42 (7.00 to 19.20)       n/a
SPSI-R:S RPS Raw Score               12.90 ± 4.11 (5.00 to 20.00)       n/a
Problem Solving Item from the FIM    n/a                                4.95 ± 1.55 (1.00 to 7.00)
Safety Judgment Item from the FAM    n/a                                5.29 ± 1.54 (1.00 to 7.00)
Table 3-2: Correlations among the 19 developed items and traditional measures of problem solving (Pearson correlations, 1-tailed significance, N=80)

19 Developed Items (patient-rated):
  with 19 Developed Items, caregiver-rated:        .28** (p=.006)
  with SPSI-R-S Total, patient-rated:              .29** (p=.004)
  with SPSI-R-S RPS, patient-rated:                .36** (p=.001)
  with BRIEF-A, patient-rated:                     .03   (p=.385)
19 Developed Items (caregiver-rated):
  with FIM Problem Solving Item, caregiver-rated:  .56** (p=.001)
  with FAM Safety Item, caregiver-rated:           .46** (p=.001)
  with BRIEF-A, caregiver-rated:                   -.70** (p=.001)

* indicates p < .05; ** indicates p < .01
Table 3-3: Correlations among traditional measures of problem solving (Pearson correlations, 1-tailed significance, N=80)

SPSI-R-S Total (patient-rated):
  with SPSI-R-S RPS, patient-rated:                .59** (p=.001)
  with BRIEF-A, patient-rated:                     -.43** (p=.001)
SPSI-R-S RPS (patient-rated):
  with BRIEF-A, patient-rated:                     .02   (p=.437)
FIM Problem Solving Item (caregiver-rated):
  with FAM Safety Item, caregiver-rated:           .48** (p=.001)
  with BRIEF-A, caregiver-rated:                   -.58** (p=.001)
FAM Safety Item (caregiver-rated):
  with BRIEF-A, caregiver-rated:                   -.24* (p=.016)
BRIEF-A (patient-rated):
  with BRIEF-A, caregiver-rated:                   .49**

* indicates p < .05; ** indicates p < .01
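The strength bands applied throughout the Data Analysis and Discussion sections (.25 to .50 fair, .50 to .75 good, above .75 excellent; Portney, 2008) are mechanical, and the Pearson coefficient itself requires nothing beyond the paired ratings. A small stdlib-only sketch, using made-up paired scores rather than the study data:

```python
import math

def pearson_r(x, y):
    """Plain Pearson product-moment correlation (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def strength(r):
    """Classify |r| using the Portney (2008) bands cited in this study."""
    r = abs(r)
    if r > 0.75:
        return "excellent"
    if r >= 0.50:
        return "good"
    if r >= 0.25:
        return "fair"
    return "weak/none"

# Applying the bands to coefficients reported in Tables 3-2 and 3-3:
print(strength(0.56))   # good  (caregiver items vs. FIM Problem Solving)
print(strength(-0.70))  # good  (caregiver items vs. caregiver BRIEF-A)
print(strength(0.28))   # fair  (patient vs. caregiver 19-item ratings)
```

Note that the sign is ignored by the classification: the -.70 correlation with the BRIEF-A is expected to be negative, since lower BRIEF-A scores indicate better performance while higher person measures on the 19 items do.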
CHAPTER 4
CHALLENGES OF CREATING A FUNCTIONAL PROBLEM-SOLVING MEASURE AND FUTURE PLANS

Current Problem Solving Measures

The current state of problem solving measures for individuals diagnosed with TBI is tenuous at best. We conducted a literature search of the problem solving measures used in the last ten years. The results revealed three basic categories of measures: performance, simulation, and observational. The performance measures (also referred to as neuropsychological measures) assess a single cognitive ability in a highly controlled environment. Simulation measures use tasks that are encountered in everyday life to assess the coordination of cognitive abilities in a controlled environment. Finally, observational measures capture the actual performance of the individual in the real world.

Only two measures were referred to as neuropsychological measures: the Wisconsin Card Sort Task (WCST) and the WAIS-III comprehension subscale. Neuropsychological tests have long been criticized as not capturing everyday functioning. For example, measures such as the WCST and the WAIS-III can yield normal results while the individual displays obvious executive functioning and problem solving deficits in the real world. This limitation of not capturing real-world deficits led to a body of research in the 1990s that criticized the lack of ecological validity (Burgess, 1998; Wilson, 1996). These criticisms resulted in the development of several measures to capture the everyday problem solving abilities of individuals diagnosed with a TBI. Over eleven measures identified in our literature search were referred to as simulations of everyday tasks. The advantage of simulation measures is that the individual has to coordinate several cognitive abilities to complete a task. While these measures may have merit, there are limitations in their ability to capture the everyday functioning that they were purported to measure.
Simulation tests are conducted in an artificial environment where clear
start and stop times are provided. Compensatory strategies that are used in the real world are limited in simulated tasks. The information that is needed to solve a problem has often been provided, and the additional activities that interfere with everyday life have been removed. To overcome these limitations of the simulation measures, a third category of measurement exists.

The third category, referred to as observational measures, yielded five measures from the literature search. Their advantage over neuropsychological and simulation measures is that real-world behavior is observed, rather than a single cognitive ability or a simulation of the real world. However, the current observational measures have limitations. Two of the measures are used to measure concepts other than problem solving, such as executive functioning or frontal lobe functions. One measure has only one item that addresses problem solving, and the two remaining measures use only the patient as the rater and measure attitudes toward problem solving rather than actual problem solving ability.

We concluded from the evaluation of our literature search that the most effective means of capturing the real-world abilities of individuals diagnosed with TBI would be the observational method. However, the current state of observational problem solving measures had serious limitations. Therefore, the development of a new measure of problem solving was needed: a measure that captured the concept of problem solving, with more than one item, and that utilized more than one observer.

Overview of Study

In order to address these limitations, we took the following steps to develop a measure of everyday problem solving abilities. A theoretical model of problem solving that has been used for over 20 years served as the basis for an observational measure. Items were developed to reflect objective problem solving abilities, as opposed to attitudes toward problem solving.
Patients, caregivers, and professionals who live the everyday life of TBI were involved in the development of items. We chose individuals diagnosed with TBI and their caregivers to rate their ability, since they are more aware of everyday functioning. Confirmatory factor and principal components analyses were used to determine the dimensions of the problem solving concept. Rasch analysis was conducted to determine the item-level psychometrics of the 19 developed problem solving items. Concurrent validation of the developed items was conducted against other measures of problem solving. These analyses resulted in a measure that showed moderately good measurement characteristics when rated by the patient and the caregiver.

The rated items did not confirm the theoretical problem solving model, a finding that parallels earlier empirical studies. Since a majority of the items loaded onto one factor, we assumed unidimensionality and proceeded with Rasch analysis. The resulting item-level measurement characteristics were reasonable for both the patient and the caregiver. However, each rater had advantages over the other: while the caregiver ratings showed higher reliability and better person separation, the patient ratings had fewer misfitting items and a better match of person ability to item difficulty.

A second analysis was conducted to investigate the concurrent validity of the newly developed measure. In general, the caregiver-rated measure showed a moderate-to-good correlation with external criteria, while the patient-rated measure showed a zero-to-weak correlation with external criteria. Unfortunately, this validation study has a number of limitations. Only observational measures were compared to the developed items; a performance measure would have allowed a comparison of perceived performance with actual performance and dispelled concerns over rater bias. Additionally, the caregiver and the patient rated only one common problem solving measure.
Therefore, comparisons could not be made between the patients' and caregivers' performance on our measure and other problem solving measures.
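For readers less familiar with the psychometric machinery, the probability structure of a Rasch rating-scale item can be sketched in a few lines. The code below is a minimal illustration of the Andrich rating-scale model for a four-category scale (such as Never/Sometimes/Often/Always); the parameter values are hypothetical, and this sketch is not the software actually used in the analyses reported here.

```python
import math

def rating_scale_probs(theta, delta, taus):
    """Andrich rating-scale model: probability of each response
    category (0..m) for a person of ability `theta` on an item of
    difficulty `delta`, with shared step thresholds `taus`
    (tau_1..tau_m). All parameters are in logits."""
    # Category 0 has an empty cumulative sum, so its numerator is exp(0) = 1.
    numerators = [1.0]
    cum = 0.0
    for tau in taus:
        cum += theta - delta - tau
        numerators.append(math.exp(cum))
    total = sum(numerators)
    return [n / total for n in numerators]

# A person of average ability on an item of average difficulty, with
# symmetric thresholds: the two middle categories are most probable.
probs = rating_scale_probs(theta=0.0, delta=0.0, taus=[-1.0, 0.0, 1.0])
assert abs(sum(probs) - 1.0) < 1e-9
```

As ability rises relative to item difficulty, the probability mass shifts toward the higher categories, which is what lets the model place persons and items on one common logit scale.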
Answers to Research Questions

Overall, our psychometric and concurrent validity studies addressed and answered the following questions:

Research question 1: Is everyday problem solving a unidimensional construct, or does it represent multiple constructs?

We were unable to confirm a one-factor solution, as the items represented more than one factor; however, the principal components analysis loaded primarily onto one factor. Therefore, further analysis is needed with either a multi-factor model or modification of the items to reflect a single factor.

Research question 2: Does the empirically derived item-difficulty hierarchy of problem solving validate the hypothesized hierarchy?

Overall, both the patient and caregiver hierarchies reflected the hypothesized hierarchy, although some items did not appear in the easy or difficult positions as expected.

Research question 3: How well do the developed items separate individuals based on their problem solving abilities?

The caregiver ratings separated individuals into four categories, while the patient ratings separated them into three categories.

Research question 4: Who is the better rater of problem solving abilities in individuals diagnosed with TBI: the caregiver or the patient?

As discussed above, each of the raters had advantages. The caregiver is the more reliable rater and separates individuals into four ability levels, while the patient had fewer misfitting items and more closely matched person ability to item difficulty. Additionally, the caregiver showed a moderate-to-good relationship with existing problem solving and executive function
measures, while the patient showed, at best, a weak relationship with these measures. Unfortunately, the patient and caregivers did not rate the same problem solving measures, so direct comparisons could not be made.

Research question 5: What is the relationship between traditional measures and the developed measure of problem solving?

The relationship was moderate-to-good for the caregiver and zero-to-weak for the patient when compared to other observational measures of problem solving.

Future Research Needs

Research is indicated to further clarify the findings of this study and to further explore the concept of problem solving and how best to measure it. First, to clarify the findings of this research, a multidimensional analysis could be conducted to allow for the potential multidimensionality of the measure. Further item analyses could also be conducted to refine the psychometrics of the items. These analyses might include: 1) revision or elimination of misfitting items and items with low correlations with other items; 2) development of more items to strengthen the theoretical hierarchy; or 3) differential item functioning analyses to compare how the patient and the caregiver perceive the difficulty of each item. The item analysis could also lead to future research comparing the perceived performance reported by patient and caregiver to the actual performance of the patient, to determine who is more accurate. This would ideally be done by generating external criteria from samples of real-world behavior, perhaps with videotape.

Indeed, future research is indicated to support or challenge the concurrent validity of the developed measures. The caregiver and the patient should both rate all external problem solving measures so that direct comparisons can be made. Additionally, a neuropsychological performance measure of problem solving, such as the WCST, could provide another layer of information in evaluating the concurrent validity of the newly developed measure.
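The person separation and reliability statistics discussed above follow standard Rasch conventions: true variance is estimated by subtracting the mean squared error from the observed variance of the person measures, the separation index G is the ratio of the true standard deviation to the root mean square error, and the number of statistically distinct strata is (4G + 1)/3. A minimal sketch of the computation, using hypothetical person measures rather than data from this study:

```python
import math

def person_separation(measures, std_errors):
    """Rasch person separation statistics from person measures (in
    logits) and their standard errors.

    Returns (separation index G, person reliability, strata), where
    G = SD(true) / RMSE and strata = (4G + 1) / 3.
    """
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / (n - 1)
    error_var = sum(se ** 2 for se in std_errors) / n  # mean square error
    true_var = max(observed_var - error_var, 0.0)      # "true" (adjusted) variance
    g = math.sqrt(true_var / error_var)
    reliability = true_var / observed_var              # equals G^2 / (1 + G^2)
    strata = (4 * g + 1) / 3
    return g, reliability, strata

# Hypothetical person measures and standard errors (not study data):
g, rel, strata = person_separation([-2.0, -1.0, 0.0, 1.0, 2.0], [0.5] * 5)
# g = 3.0, rel = 0.9, strata ≈ 4.3 — precision sufficient to
# distinguish roughly four ability levels.
```

Higher separation therefore means the measure can reliably distinguish more levels of problem solving ability, which is why the caregiver ratings (four strata) were described as more reliable than the patient ratings (three strata).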
Our findings also present a challenge as to whether the current model of problem solving is valid, or whether a new model should be developed that better reflects the problem solving abilities of individuals diagnosed with severe TBI. In particular, a critical question is whether emotion (attitudes toward problem solving) should be included in an already complex model. For example, cognitive models such as those of memory do not include an emotional component, even though we know that anxiety interferes with memory. We would not expect to find an item in a memory measure that asks, "Do you feel frightened and afraid when you have something important to remember?" It may be that a measurement of problem solving should not include an emotional component, but rather should be based on ability, regardless of the emotional overlay.

Another critical question is how to address executive functioning in studies of problem solving. For example, can problem solving abilities be adequately captured with measures of executive functioning? Since problem solving is a complex concept, might the planning, organizing, and cognitive control aspects of executive functioning better capture an individual's abilities? Future studies may clarify the role of executive functioning in problem solving by comparing executive functioning measures to observational and performance measures of problem solving.

The concept of problem solving is not well understood and has not been well researched. Existing measures are limited, especially for individuals diagnosed with TBI. The present study shows evidence that a measure of everyday problem solving abilities can be developed and rated by individuals diagnosed with TBI and their caregivers. While this is an initial step in the development of an instrument, the findings show promise for future development and research.
APPENDIX A
COMPLETE MEASUREMENT INSTRUMENT IN WHICH 19 PROBLEM SOLVING ITEMS WERE INCLUDED
Computer Adaptive Measure of Functional Cognition (CAMFC) for Traumatic Brain Injury

PURPOSE: The purpose of the survey is to better understand the experiences of individuals recovering from a head injury.

INSTRUCTIONS: Attached are 228 statements about daily thinking skills. For each item, please consider how often you (or the person you are rating) have been able to do that activity in the past week. Rate how you (or the person you are rating) have been doing by circling one of the responses. For example:

1. Able to write own name. Never Sometimes Often Always N/A

If you are not comfortable rating an item, or if the item refers to an activity you (or the person you are rating) have not done in the past week, you should respond N/A for Not Applicable (do NOT respond Never).

1. Correctly answers questions about himself/herself (for example, "What is your name? How old are you? Where are you? What year is it?"). Never Sometimes Often Always N/A
2. Goes directly from his/her room to a specific location (for example, dining room, therapy room) without wandering. Never Sometimes Often Always N/A
3. Greets a familiar person when that person enters the room. Never Sometimes Often Always N/A
4. Selects meal items from a menu. Never Sometimes Often Always N/A
5. Copies daily schedule correctly. Never Sometimes Often Always N/A
6. Turns toward a ringing phone. Never Sometimes Often Always N/A
7. Writes down a short phone message correctly. Never Sometimes Often Always N/A
8. Participates in a structured activity for 30 minutes with rest break (for example, a therapy session). Never Sometimes Often Always N/A
9. Participates in a structured activity for 30 minutes without rest break (for example, a therapy session). Never Sometimes Often Always N/A
10. Completes a self-care activity (for example, brushes teeth, gets dressed) without getting distracted. Never Sometimes Often Always N/A
11. Stays focused on a 5 to 10 minute activity in a noisy environment. Never Sometimes Often Always N/A
12. Completes a 2 to 3 minute conversation using the phone. Never Sometimes Often Always N/A
13. Completes a meal with distractions (for example, conversations, TV). Never Sometimes Often Always N/A
14. Has a conversation with a small group (family or a few friends). Never Sometimes Often Always N/A
15. Has a conversation in a noisy environment (therapy room). Never Sometimes Often Always N/A
16. Watches TV without being distracted by people talking. Never Sometimes Often Always N/A
17. Talks with someone without being distracted by a TV on in the background. Never Sometimes Often Always N/A
18. Correctly writes down a message from an answering machine. Never Sometimes Often Always N/A
19. Locates a phone number or address in the telephone book. Never Sometimes Often Always N/A
20. Locates a particular item or brand in the grocery store. Never Sometimes Often Always N/A
21. Locates a particular size of clothing on a department store rack or shelf. Never Sometimes Often Always N/A
22. Selects meal items from a complex menu (for example, restaurant menu). Never Sometimes Often Always N/A
23. Locates needed information in a section or article of the newspaper. Never Sometimes Often Always N/A
24. Selects outfit from a dresser (chest of drawers) or closet. Never Sometimes Often Always N/A
25. Sorts important mail from junk mail. Never Sometimes Often Always N/A
26. Locates items in the refrigerator. Never Sometimes Often Always N/A
27. Sits through an hour-long TV program without getting distracted. Never Sometimes Often Always N/A
28. Reads 30 minutes without taking a break. Never Sometimes Often Always N/A
29. Listens for 15-30 minutes quietly and with focus (for example, during a religious service or class lecture). Never Sometimes Often Always N/A
30. Participates in a 10-20 minute conversation, staying on topic. Never Sometimes Often Always N/A
31. Participates in a structured activity for one hour with only a short rest break. Never Sometimes Often Always N/A
32. Returns to an activity without a reminder after a short interruption. Never Sometimes Often Always N/A
33. Maintains speed and accuracy when doing a task in a distracting environment (for example, people walking in and out of the room or people talking). Never Sometimes Often Always N/A
34. Picks out important information from a lecture/instruction. Never Sometimes Often Always N/A
35. Continues to work on an extended project (for example, one that takes several days). Never Sometimes Often Always N/A
36. Notices when a warning light appears on the dashboard (for example, seat belt, door ajar, emergency brake, engine service). Never Sometimes Often Always N/A
37. Maintains safe driving while talking to a person in the car. Never Sometimes Often Always N/A
38. Maintains safe driving while answering a cell phone. Never Sometimes Often Always N/A
39. Writes down a phone message while talking on the phone at the same time. Never Sometimes Often Always N/A
40. Goes back and forth between reading instructions and doing a task (for example, reading a recipe while cooking, looking at a manual to repair a car, looking at instructions to put together a new purchase). Never Sometimes Often Always N/A
41. Locates items in a store using a shopping list. Never Sometimes Often Always N/A
42. Finishes one task before starting another. Never Sometimes Often Always N/A
43. Able to work on multiple things at the same time (for example, preparing a second dish while something is already cooking on the stove, keeping an eye on children while doing other things). Never Sometimes Often Always N/A
44. Looks toward person after being touched lightly. Never Sometimes Often Always N/A
45. Answers the phone when it rings. Never Sometimes Often Always N/A
46. Participates in a structured activity for 5 to 10 minutes (for example, short therapy session or simple grooming activity). Never Sometimes Often Always N/A
47. Able to use a map or follow written directions to get to an unfamiliar location. Never Sometimes Often Always N/A
48. Leaves out steps of a task (for example, does not remove shaving cream completely from face). Never Sometimes Often Always N/A
49. Stops in the middle of a task when distracted (for example, by someone talking). Never Sometimes Often Always N/A
50. Pays attention to the wrong conversation or activity (for example, listening to a nearby conversation rather than the person they are talking to). Never Sometimes Often Always N/A
51. Makes more mistakes as the length of the task increases. Never Sometimes Often Always N/A
52. Stops chewing food when distracted. Never Sometimes Often Always N/A
53. Recalls a meal later in the day (for example, remembers what they had for breakfast when asked in the afternoon). Never Sometimes Often Always N/A
54. Knows the current month. Never Sometimes Often Always N/A
55. Recalls a visit from a familiar person (friends, family) earlier in the day. Never Sometimes Often Always N/A
56. Recalls what he/she did before the injury (job/school/homemaking). Never Sometimes Often Always N/A
57. Recalls basic instructions (for example, using equipment in their room, using call button to call nurse, turning on TV). Never Sometimes Often Always N/A
58. Recalls a simple routine (for example, doing an exercise, using memory book). Never Sometimes Often Always N/A
59. Recalls more than one appointment (for example, multiple health care appointments or social activities) in a single day. Never Sometimes Often Always N/A
60. Recalls a visit by a familiar person (friends, family, therapist) from the previous day. Never Sometimes Often Always N/A
61. Recalls to take medicine at the right time and right amount. Never Sometimes Often Always N/A
62. Recalls where to find something when it is not put in its usual place (for example, looking for keys). Never Sometimes Often Always N/A
63. Recalls the steps in doing a simple activity (for example, gathering needed materials and items for making a sandwich or cooking breakfast, washing a car, loading and starting a dishwasher). Never Sometimes Often Always N/A
64. Recalls to move laundry from washer to dryer. Never Sometimes Often Always N/A
65. Recalls to put food away in the refrigerator when finished. Never Sometimes Often Always N/A
66. Recalls to turn off the stove or oven. Never Sometimes Often Always N/A
67. Recalls to lock the door when leaving the house. Never Sometimes Often Always N/A
68. When driving, remembers to take the key when getting out of the car. Never Sometimes Often Always N/A
69. Recalls to give someone a telephone message. Never Sometimes Often Always N/A
70. Recalls a familiar route without assistance (for example, going from home to a local store). Never Sometimes Often Always N/A
71. Recalls a newly learned route without assistance. Never Sometimes Often Always N/A
72. Recalls where the car is parked in the mall/grocery store parking lot. Never Sometimes Often Always N/A
73. Recalls to use a calendar to keep track of appointments from week to week. Never Sometimes Often Always N/A
74. Recalls information given at a previous therapy or doctor appointment. Never Sometimes Often Always N/A
75. Recalls birthdays, holidays, or anniversaries. Never Sometimes Often Always N/A
76. Recalls frequently used phone numbers. Never Sometimes Often Always N/A
77. Recalls to get an item at the store that was not written down. Never Sometimes Often Always N/A
78. Recalls the story line in a book from one reading to the next. Never Sometimes Often Always N/A
79. Recalls to go to doctor's appointments. Never Sometimes Often Always N/A
80. Recalls events from last birthday/vacation. Never Sometimes Often Always N/A
81. Recalls to do weekly chores. Never Sometimes Often Always N/A
82. Recalls to pay bills (for example, rent, electric, phone, or credit card). Never Sometimes Often Always N/A
83. Recalls upcoming deadlines, assignments, or meetings. Never Sometimes Often Always N/A
84. Goes to a room to get something, but forgets what to get. Never Sometimes Often Always N/A
85. Begins to do something and forgets what was to be done. Never Sometimes Often Always N/A
86. Loses train of thought in a conversation. Never Sometimes Often Always N/A
87. Repeats a story that has already been told. Never Sometimes Often Always N/A
88. Answers the phone within 3 rings. Never Sometimes Often Always N/A
89. Says "come in" in response to a knock on the door. Never Sometimes Often Always N/A
90. Completes menu selection in a timely manner (less than 5 minutes). Never Sometimes Often Always N/A
91. Writes name in a timely manner (within 5 seconds). Never Sometimes Often Always N/A
92. Copies schedule in a timely manner (within 5 minutes). Never Sometimes Often Always N/A
93. Begins to answer open-ended questions within 2 seconds (for example, answers "What did you do today?"). Never Sometimes Often Always N/A
94. Gets dressed within 15 minutes. Never Sometimes Often Always N/A
95. Completes tasks or chores by a set deadline. Never Sometimes Often Always N/A
96. Makes a simple breakfast within 5-10 minutes, if physically able (for example, toast or coffee). Never Sometimes Often Always N/A
97. Keeps up with a conversation without asking people to repeat. Never Sometimes Often Always N/A
98. Follows simple directions without asking people to repeat. Never Sometimes Often Always N/A
99. Washes a car within 30 minutes. Never Sometimes Often Always N/A
100. Unloads the washing machine within 10 minutes. Never Sometimes Often Always N/A
101. Puts away clean dishes within 15 minutes. Never Sometimes Often Always N/A
102. Takes a phone message without asking the caller to repeat more than one time. Never Sometimes Often Always N/A
103. Takes a phone message off the answering machine without having to replay the message more than one time. Never Sometimes Often Always N/A
104. Gets money from an ATM within 5 minutes. Never Sometimes Often Always N/A
105. Follows an automated phone menu (instructions/choices) successfully (for example, "Press 1 for ___"). Never Sometimes Often Always N/A
106. Keeps up with the story of a 30-minute TV show without asking others what is going on. Never Sometimes Often Always N/A
107. Sorts daily mail within 5 minutes. Never Sometimes Often Always N/A
108. Writes a check in a grocery store without holding up the line. Never Sometimes Often Always N/A
109. Places a food order in a drive-through without holding up the line. Never Sometimes Often Always N/A
110. Pays for a fast-food order within 30 seconds. Never Sometimes Often Always N/A
111. Reads a restaurant menu and makes a selection within 5 minutes. Never Sometimes Often Always N/A
112. Reads a one-page letter within 5 minutes. Never Sometimes Often Always N/A
113. Keeps up with the pace required of a school or work setting. Never Sometimes Often Always N/A
114. Shops for a few items in a reasonable amount of time (for example, gets 5-10 items at the grocery store in about 20 minutes). Never Sometimes Often Always N/A
115. Open-ended questions need to be asked more than once (for example, "What do you want to drink?", "What do you need?"). Never Sometimes Often Always N/A
116. Takes a long time to finish eating a meal (for example, over 20 minutes). Never Sometimes Often Always N/A
117. Takes a long time to get dressed (for example, over 20 minutes). Never Sometimes Often Always N/A
118. Needs repeated requests to respond (for example, "open your eyes... open your eyes"). Never Sometimes Often Always N/A
119. Makes mistakes when trying to keep up (for example, when trying to finish within a time limit). Never Sometimes Often Always N/A
120. Reacts slowly in driving situations (for example, reacting to stop lights, pedestrians, sudden stops in traffic). Never Sometimes Often Always N/A
121. Completes a simple task that has several steps (for example, chooses clothes and gets dressed). Never Sometimes Often Always N/A
122. Completes a complex task that has several steps (for example, cooking a complete dinner, doing a house repair, or building something). Never Sometimes Often Always N/A
123. Plans a common daily activity (for example, gathers items needed for dressing or grooming). Never Sometimes Often Always N/A
124. Plans a new activity (for example, gathers items needed for cooking or a craft project). Never Sometimes Often Always N/A
125. Plans ahead in order to get to an appointment on time. Never Sometimes Often Always N/A
126. Fills free time with activities without being told. Never Sometimes Often Always N/A
127. Starts an activity without being told (for example, starts getting dressed after getting up in the morning). Never Sometimes Often Always N/A
128. Makes careless errors during a new activity (for example, doing things out of order or leaving a step out of an activity). Never Sometimes Often Always N/A
129. Does not recognize limitations when attempting a task (for example, tries to walk when unable). Never Sometimes Often Always N/A
130. Recognizes and corrects mistakes. Never Sometimes Often Always N/A
131. Readily changes behaviors when an error is pointed out. Never Sometimes Often Always N/A
132. Talks at the wrong time (interrupts conversation, talks when he/she should be listening). Never Sometimes Often Always N/A
133. Does not ask embarrassing questions or make hurtful/inappropriate comments. Never Sometimes Often Always N/A
134. Stays seated until a task is done. Never Sometimes Often Always N/A
135. Gets started on homework/chores without being told. Never Sometimes Often Always N/A
136. Catches own mistakes while working on a task. Never Sometimes Often Always N/A
137. Starts a task early enough to get it done (for example, starts to get ready for school/work/appointment in order to arrive on time). Never Sometimes Often Always N/A
138. Comes up with ideas for things to do during free time. Never Sometimes Often Always N/A
139. Chooses clothes based on the weather. Never Sometimes Often Always N/A
140. Demonstrates an understanding of own abilities (for example, does not ask to drive or return to work/school if they are not able). Never Sometimes Often Always N/A
141. Gathers materials needed for an activity (for example, necessary materials for work or school). Never Sometimes Often Always N/A
142. Identifies items needed to put together a list (for example, grocery items for the week, shopping list, materials for a project or repairs). Never Sometimes Often Always N/A
143. Organizes an activity several days in advance (for example, planning a trip, visiting friends, planning holiday activities). Never Sometimes Often Always N/A
144. Organizes a written list (for example, errands list organized by store types, grocery list by sections of the store). Never Sometimes Often Always N/A
145. Keeps personal area organized (for example, putting things away in the bedroom, bathroom, kitchen, laundry room). Never Sometimes Often Always N/A
146. Tells someone or takes action when something goes wrong (for example, water on the floor, shoes untied, pot boiling over). Never Sometimes Often Always N/A
147. Seeks help when needed. Never Sometimes Often Always N/A
148. Stops an activity to do something else that needs to get done (for example, stops watching TV to get dressed). Never Sometimes Often Always N/A
149. Makes reasonable attempts to solve problems before asking for help. Never Sometimes Often Always N/A
150. Follows safety rules (for example, locks wheelchair brakes when stopped, does not open doors to strangers, looks both ways before crossing the street). Never Sometimes Often Always N/A
151. Tries to do an activity before having the ability to do it (such as standing or walking unassisted, cooking, driving, returning to school/work). Never Sometimes Often Always N/A
152. Comes up with an alternate solution when the first solution does not work (for example, when a drain cleaner does not work, calls a plumber). Never Sometimes Often Always N/A
153. Stops talking when a discussion becomes heated. Never Sometimes Often Always N/A
154. Suggests or attempts a solution to a problem. Never Sometimes Often Always N/A
155. Tries a different approach to a problem when the first one does not work. Never Sometimes Often Always N/A
156. Adds a new topic to a conversation. Never Sometimes Often Always N/A
157. Listens to another's perspective without arguing. Never Sometimes Often Always N/A
158. Does the things needed to prepare for a bigger project (for example, moving furniture before painting a room). Never Sometimes Often Always N/A
159. Organizes a short written document (for example, letter, memo, email, or school paper). Never Sometimes Often Always N/A
160. Estimates the time needed to do a series of tasks to meet a deadline. Never Sometimes Often Always N/A
161. Adjusts schedule to meet a deadline. Never Sometimes Often Always N/A
162. Makes changes to a schedule if needed (for example, postpones doing errands to be on time for work). Never Sometimes Often Always N/A
163. Fills gas tank before it runs out. Never Sometimes Often Always N/A
164. Picks up hints from others that they should end a conversation (for example, the other person looks at their watch). Never Sometimes Often Always N/A
165. Dresses to match the social situation (for example, dresses casually to go out with friends and dresses more formally for special events). Never Sometimes Often Always N/A
166. Asks questions to get more information about injury. Never Sometimes Often Always N/A
167. Initiates a discussion about future needs (for example, returning home after hospitalization, asking about financial issues, or resuming work or school). Never Sometimes Often Always N/A
168. Plans a short trip using public transportation (for example, bus or subway). Never Sometimes Often Always N/A
169. Able to make a quick, simple decision (for example, where to go to dinner). Never Sometimes Often Always N/A
170. Does not readily switch from one activity to another (for example, will not stop watching TV to begin dressing). Never Sometimes Often Always N/A
171. Makes careless errors in daily tasks (for example, misses a button, forgets to put toothpaste on toothbrush). Never Sometimes Often Always N/A
172. Gets stuck on a topic (keeps talking about the same thing). Never Sometimes Often Always N/A
173. Does not stop or apologize when behavior bothers others. Never Sometimes Often Always N/A
174. Does not know what to do next, so stops in the middle of a task. Never Sometimes Often Always N/A
175. Allows others to solve problems for them when they could have done it themselves. Never Sometimes Often Always N/A
176. Jumps to a solution when attempting to solve a problem. Never Sometimes Often Always N/A
177. Is overly trusting (does not recognize when being taken advantage of). Never Sometimes Often Always N/A
178. Buys unnecessary items that look appealing (impulse buying). Never Sometimes Often Always N/A
179. Interrupts while someone is talking on the phone. Never Sometimes Often Always N/A
180. Bothers other people while they are working. Never Sometimes Often Always N/A
181. Makes errors when solving a problem that has several steps (for example, cooking/following a recipe, shopping, car maintenance, programming electronics). Never Sometimes Often Always N/A
182. Has problems managing money (for example, tries to make a purchase without enough money, overdrawing checking account, running up credit cards). Never Sometimes Often Always N/A
183. Does not start activities on own (for example, must be told what to do). Never Sometimes Often Always N/A
184. Gives up if the first attempt to solve a problem is not successful. Never Sometimes Often Always N/A
185. Blames others for problems or mistakes. Never Sometimes Often Always N/A
186. Gets upset with a change of routine. Never Sometimes Often Always N/A
187. Does not react when people are visibly upset (for example, fails to ask "Are you ok?" when someone is crying). Never Sometimes Often Always N/A
188. Accepts constructive criticism without losing temper (for example, from therapist, family member, employer). Never Sometimes Often Always N/A
189. Accepts help without losing temper. Never Sometimes Often Always N/A
190. Stops an activity and starts a new activity without getting upset (for example, stops watching TV and comes to eat dinner). Never Sometimes Often Always N/A
191. Listens to another's perspective without arguing. Never Sometimes Often Always N/A
192. Calms down after an argument. Never Sometimes Often Always N/A
193. Becomes tearful easily when upset. Never Sometimes Often Always N/A
194. Has angry or tearful outbursts for no apparent reason. Never Sometimes Often Always N/A
195. Overreacts to challenges (for example, becomes agitated by minor changes in daily routine). Never Sometimes Often Always N/A
196. Gets frustrated or upset when having to wait to do something (for example, go to a store; watch TV). Never Sometimes Often Always N/A
197. Overreacts to frustrating situations (for example, tool does not work, someone takes parking place). Never Sometimes Often Always N/A
198. Frustration increases to the point of getting physical (for example, hitting, throwing, or pushing). Never Sometimes Often Always N/A
199. Gets a person's attention before starting a conversation (for example, waits for eye contact before talking). Never Sometimes Often Always N/A
200. Allows others to take a turn in a conversation (for example, gives another person a chance to talk). Never Sometimes Often Always N/A
201. Greets person when someone enters the room. Never Sometimes Often Always N/A
202. Able to talk with more than one person at a time. Never Sometimes Often Always N/A
203. Begins to answer open-ended questions within an appropriate amount of time (for example, responds to "What did you do today?" within a few seconds). Never Sometimes Often Always N/A
204. Provides enough information when telling someone about something. Never Sometimes Often Always N/A
205. Faces the person when speaking. Never Sometimes Often Always N/A
206. Appropriate eye contact when having a conversation (for example, looks away occasionally during a conversation). Never Sometimes Often Always N/A
207. Shows interest in what other people are saying (for example, keeps eye contact, comments, or nods). Never Sometimes Often Always N/A
208. Acknowledges another person's point of view (for example, by nodding or commenting). Never Sometimes Often Always N/A
209. Keeps up with a conversation without asking people to repeat. Never Sometimes Often Always N/A
210. Participates in a 10-20 minute conversation, staying on topic. Never Sometimes Often Always N/A
211. Picks out important information from a lecture/instruction. Never Sometimes Often Always N/A
212. Adds a new topic to a conversation at the right time. Never Sometimes Often Always N/A
213. Starts a conversation. Never Sometimes Often Always N/A
214. Misunderstands what the speaker intends (for example, does not recognize when someone makes a joke or uses sarcasm). Never Sometimes Often Always N/A
215. Sounds rude or demanding when making a request. Never Sometimes Often Always N/A
97216.Facial expression does not ma tch the conversation (for example, blank expression during an emotional conversation or smiles too much during a serious conversation). Never Sometimes Often Always N/A 217.Gets too close when talking to someone. Never Sometimes Often Always N/A 218.Gets stuck on a topic (keeps ta lking about the same thing). Never Sometimes Often Always N/A 219.Talks at the wrong time (interr upts conversation, talks when he/she should be listening). Never Sometimes Often Always N/A 220.Asks embarrassing questions, or makes hurtful/inappropriate comments. Never Sometimes Often Always N/A 221.Jumps to a topic unrelated to the conversation. Never Sometimes Often Always N/A 222.Walks away from conversation before it is finished. Never Sometimes Often Always N/A 223.Blurts out something off t opic during a conversation. Never Sometimes Often Always N/A 224.Loses train of thought in a conversation. Never Sometimes Often Always N/A 225.Repeats a story that has already been told. Never Sometimes Often Always N/A 226.Interrupts while someone is talking on the phone. Never Sometimes Often Always N/A 227.Interrupts other peopl es conversation. Never Sometimes Often Always N/A 228.Provides too much information when telling someone something. Never Sometimes Often Always N/A
APPENDIX B
CONCURRENT VALIDITY EXTERNAL MEASURES
BIOGRAPHICAL SKETCH

My clinical experience as a master's-prepared behavior analyst has been primarily with individuals diagnosed with TBI. I spent five years as a behavior analyst on a locked traumatic brain injury (TBI) rehabilitation unit, and then directed a transitional living center, where patients worked on reintegration into their communities. These experiences profoundly shaped my research goals. I watched TBI patients, therapists, and families struggle with daily activities, and almost always, difficulties with organization, planning, and initiating functional problem-solving behaviors were core issues in rehabilitation. Traditional neuropsychological measures of executive functioning often failed to detect the subtle deficits that disrupted patients' lives when they returned home or to a work setting, and treatment strategies for such deficits were even scarcer. Hence, I felt the need to further my education and engage in research that would assist therapists and the individuals impacted by TBI.

My first job at the University of Florida was as Clinical Program Coordinator of the Center for Traumatic Brain Injury Studies in the McKnight Brain Institute, which allowed me to assist in TBI research. The center's primary focus was to identify a biomarker that would determine whether brain damage had occurred and whether outcomes could be predicted from it. My responsibility was to assist in extending a basic science program to human participants. I participated in writing grants and coordinating Institutional Review Board submissions, as well as recruiting participants. The Center collaborated with world-renowned scientists from Italy and Hungary, as well as Harvard and the Medical College of Virginia, on NIH, Department of Defense, and private foundation grants. I served as research coordinator for two clinical trials, conducted at the North Florida/South Georgia Veterans Affairs Medical Center, that involved an experimental medication to improve memory in patients with closed head injury. I also served as research coordinator for a clinical trial evaluating taste and smell in participants with TBI. This experience in clinical trials convinced me to pursue my PhD in Rehabilitation Science.

I entered the PhD program in the fall of 2003 with Linda Shaw as my advisor. My primary interest was in conducting single-subject design research with individuals diagnosed with TBI. Using the training in single-subject design methodology from my master's program in clinical behavioral psychology, along with additional courses in single-subject design and applied behavior analysis, I prepared to conduct a single-subject design research project for my dissertation. My interest was in exploring the effect of problem-solving therapies, since this area was most debilitating for the patients with whom I had worked. However, I was advised from several fronts that in order to test the effect of a problem-solving intervention, a reliable measure was needed to detect an effect. Therefore, my path turned to Dr. Craig Velozo, who was conducting a TBI research project to create a functional cognitive measure.

Dr. Velozo's research project was an NIH R-21 project to create a measure of six cognitive domains. He agreed to accept me as one of his students so that I could develop a problem-solving measure using 20 items, which fell under the domain of executive functioning. I joined the Velozo research team in 2005 and assisted in recruiting and conducting focus groups to develop a bank of items. The 20 problem-solving items were developed along with the items for the other domains. Unfortunately, during this time my mother became ill and required my care; she died during the summer of 2005, while I was preparing for my qualifying exam. After a few months, I completed my qualifying exam and proceeded to prepare my dissertation proposal.

Dr. Velozo assisted me in preparing a VA RR&D Pre-Doctoral Associated Health Rehabilitation Research Fellowship Award to prepare me for a career as a VA researcher. Following submission of this grant I proposed my dissertation, and the grant was awarded in May 2006. I continued to work with the research team on the next phase of the research, recruiting subjects in the Gainesville area and administering both the newly created measure and traditional measures. In December 2007, I submitted a Career Development Award application to the VA to conduct an executive dysfunction intervention using single-subject design, with Dr. Bruce Crosson as primary mentor. The submission received a score that was not funded, so I resubmitted the grant in June 2008. In the meantime, I continued to assist Dr. Velozo with his research project and to recruit patients. My original proposal called for 50 subjects and their caregivers; however, I continued with the project until 80 subjects had been recruited. By extending my dissertation, I collected a larger sample, which improved the likelihood of my articles being published, and my assistance allowed Dr. Velozo to conclude his research project. Data analysis was conducted during the summer of 2008, and my dissertation was defended on August 7, 2008.

The knowledge I have gained from this experience has been immeasurable. My dissertation committee and researchers at the VA have given me their time and encouragement to continue with a career in research. I feel that my initial goal of helping the clinicians and patients affected by TBI has been met, and I have obtained the experience and support I need to be a researcher.