<%BANNER%>

Using Real-Data Simulations to Compare Computer Adaptive Testing and Static Short-Form Administrations of an Upper Extremity Item Bank

Permanent Link: http://ufdc.ufl.edu/UFE0024792/00001

Material Information

Title: Using Real-Data Simulations to Compare Computer Adaptive Testing and Static Short-Form Administrations of an Upper Extremity Item Bank
Physical Description: 1 online resource (126 p.)
Language: english
Creator: Wang, Jia-Hwa
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2009

Subjects

Subjects / Keywords: adaptive, computerized, item, measure, measurement, outcome, response, testing, theory
Rehabilitation Science -- Dissertations, Academic -- UF
Genre: Rehabilitation Science thesis, Ph.D.
bibliography   ( marcgt )
theses   ( marcgt )
government publication (state, provincial, territorial, dependent)   ( marcgt )
born-digital   ( sobekcm )
Electronic Thesis or Dissertation

Notes

Abstract: Computerized adaptive testing (CAT), which administers only items relevant to a respondent's ability, has the advantage of measuring ability precisely with considerably fewer items than traditional tests. CAT has been proposed for use in healthcare to reduce the burden on respondents, administrators, and researchers in clinics and clinical trials. However, few studies in healthcare have investigated the optimal characteristics of a CAT. The objective of my study was to investigate: 1) the psychometrics of an item bank measuring upper extremity (UE) disorders for CAT use, 2) how different testing procedures affect the ability estimates from CAT, and 3) whether CAT produces better ability estimates than a traditional static short assessment. The psychometrics of the item bank, developed by combining items from the Disabilities of the Arm, Shoulder and Hand and the Upper Extremity Functional Index, were examined by confirmatory factor analysis, item response theory analysis, and differential item functioning (DIF) analysis across body-part impairments (neck, shoulder, elbow, and wrist/hand). Repeated-measures MANOVA was used to investigate the standard error (SE) and bias of ability estimates from CAT with different testing procedures. Structural equation modeling was implemented to examine the correlations between ability estimates from the full test and different CAT structures, and between the full test and a short form. Further, a paired-sample t test was performed to investigate the SE and bias of ability estimates from CAT and a static short form. In general, the item bank was found essentially unidimensional, the generalized partial credit model fit the data better than the partial credit model, and there was no significant DIF.
The ability estimates from CAT with the expected a posteriori (EAP) estimation method were found to be more precise and more comparable to those from the full test than ability estimates derived from maximum likelihood estimation. Further, CAT had better precision, comparability to the full test, and sensitivity to detect change than a static short form. The findings from this study suggest that UE CAT results differ across estimation methods and are significantly better than a short-form version.
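The adaptive loop the abstract describes (pick the most informative remaining item, score with EAP, stop at a precision target) can be sketched as follows. This is a minimal illustration, not the dissertation's simulation: the item bank, 2PL model, respondent ability, and stopping values below are invented for demonstration, whereas the actual study used a polytomous (generalized partial credit) bank.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 50
a = rng.uniform(0.8, 2.0, n_items)      # discrimination (hypothetical bank)
b = rng.normal(0.0, 1.0, n_items)       # difficulty (hypothetical bank)
true_theta = 0.7                        # simulated respondent ability

grid = np.linspace(-4, 4, 81)           # quadrature points for EAP scoring
prior = np.exp(-grid**2 / 2)            # standard normal prior (unnormalized)

def p_correct(theta, i):
    """2PL response probability for item i at ability theta."""
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

posterior = prior.copy()
administered, theta_hat, se = [], 0.0, np.inf

# CAT loop: stop after 10 items or once the SE target is reached.
while len(administered) < 10 and se > 0.3:
    # Maximum-information selection at the current ability estimate.
    info = [a[i]**2 * p_correct(theta_hat, i) * (1 - p_correct(theta_hat, i))
            if i not in administered else -1.0 for i in range(n_items)]
    item = int(np.argmax(info))
    administered.append(item)

    # Simulate the response and update the posterior over the grid.
    u = rng.random() < p_correct(true_theta, item)
    p = p_correct(grid, item)
    posterior *= p if u else (1 - p)

    # EAP estimate = posterior mean; SE = posterior standard deviation.
    w = posterior / posterior.sum()
    theta_hat = float((grid * w).sum())
    se = float(np.sqrt(((grid - theta_hat)**2 * w).sum()))

print(len(administered), round(theta_hat, 2), round(se, 2))
```

The SE shrinks as items accumulate, which is why a CAT can match the precision of a longer static form with fewer items; swapping the EAP update for a maximum likelihood step at each iteration is the contrast the study evaluates.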
General Note: In the series University of Florida Digital Collections.
General Note: Includes vita.
Bibliography: Includes bibliographical references.
Source of Description: Description based on online resource; title from PDF title page.
Source of Description: This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Statement of Responsibility: by Jia-Hwa Wang.
Thesis: Thesis (Ph.D.)--University of Florida, 2009.
Local: Adviser: Velozo, Craig A.

Record Information

Source Institution: UFRGP
Rights Management: Applicable rights reserved.
Classification: lcc - LD1780 2009
System ID: UFE0024792:00001


This item has the following downloads:

Full Text

Table of Contents:

Introduction

CAT Development
    Methodological Issues Related to the Item Bank Construction for CAT
        Unidimensionality
        IRT modeling
        Differential item functioning
    Methodological Issues Related to the Testing Algorithms of CAT System
        Types of ability estimation methods
            Maximum likelihood estimation method
            Expected a posteriori estimation method
        Types of item selection criteria
        Types of stopping rules
    CAT Studies in Healthcare and Next Critical Step of CAT Studies in Healthcare

Introduction

Methods
    Sample
    Assessments
    Data Analysis
        Dimensionality
        Item response theory analysis
        Differential item functioning

Results
    Dimensionality
    Item Response Theory Analysis
    Differential Item Functioning

Discussion

Conclusions

Introduction

Methods
    Item Bank and Samples
    Parameter Estimation
    Data Analysis
        Procedures
            Design
            CAT simulations
        Analysis

Results
    Descriptive Statistics
    Inferential Statistics
        SE, bias and number of items administered comparisons
        Correlation comparisons

Discussion

Conclusions

Introduction

Methods
    Item Bank
    Subjects
    Data Analysis
        Procedures
            Creating a static short form
            CAT simulation
        Analyses

Results
    Score Precision
    Score Comparability
    Detecting Change

Discussion

Conclusions