UFDC Home  myUFDC Home  Help 



Full Text  
THE IMPACT OF NONNORMALITY ON THE ASYMPTOTIC CONFIDENCE INTERVAL FOR AN EFFECT SIZE MEASURE IN MULTIPLE REGRESSION By LOU ANN MAZULA COOPER A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY UNIVERSITY OF FLORIDA 2007 O 2007 Lou Ann Mazula Cooper ACKNOWLEDGMENTS I would like to take this opportunity to thank the cochairs of my dissertation committee, Dr. David Miller and Dr. James Algina. They are both exceptional teachers and without their guidance, support, and patience, this work would not have been possible. Dr. Miller, through his flexibility, approachable manner, and breadth of knowledge has been an invaluable resource throughout the course of my graduate studies. Dr. Algina is perhaps the most generous teacher I have ever known and his door was always open to me. His passion for research and his incredible work ethic had a profound influence on me. I would also like to thank the other members of my committee for their help and encouragement. Special thanks go to Dr. Richard Davidson. I will always be grateful for the opportunity he provided to apply and expand my skills to experimental design, data analysis, and psychometric issues in medical education research. Thank you for giving me the most rewarding and stimulating j ob I have ever had. Dr. Walter Leite, although not there at the beginning, has provided a valuable sounding block and I look forward to our future collaborations. I would also like to express my gratitude to my family for their love and encouragement during my midlife career change. I know it has not always been easy to live with me. To my daughters, Abigail and Amanda, my hope is that I have provided a good example for you in pursuing my lifelong love of learning. Finally, and most especially to Brian, my husband and best friend, whose love and unwavering belief in me made achieving this goal possible. TABLE OF CONTENTS page ACKNOWLEDGMENTS .............. ...............3..... LIST OF TABLES ........._..... ...............6.._.._ ...... LIST OF FIGURES .............. ...............8..... AB S TRAC T ........._. ............ ..............._ 10... CHAPTER 1 INTRODUCTION ................. ...............12.......... ...... Effect Sizes and Confidence Intervals in Multiple Regression Analysis .............. ................14 Asymptotic Confidence Intervals for Correlations ................. ...............15........... ... The Impact of Nonnormality on Statistical Estimates ................. ............... ......... ...24 Statement of the Problem ................. ...............26................ Purpose of the Study ................. ...............26.......... ..... 2 M ETHODS .............. ...............28.... Study Design............... ...............28. Number of predictors ................. ...............28........... .... Squared multiple correlations ................. ...............28........... .... Sample size ................. ...............29................. D istributions ................. ......... ...... .... ... .. .. ..... .........2 Background and Theoretical Justification for the Simulation Method............... .................3 Data Simulation .............. ...............37.... Data Analysis............... ...............39 3 RE SULT S .............. ...............47.... Replication of Results for Multivariate Normal Data ................. ............... ......... ...47 Simulation Proper ................. .. ........... 
........... .............4 Analysis of Variance and Mean Square Components ................ ..............................54 The Influence of Nonnormality on Coverage Probability ................. .......... ...............58 Nonnormal predictors ................. ...............58................. Nonnormal error distribution............... ... .. .......... ........5 The Impact of Squared Multiple Correlations on Coverage Probability ............... .... ........._..59 The Impact of Sample Size on Coverage Probability............... ..............6 Probability Above and Below the Confidence Interval .............. .....___ .............. .65 The Relationship between Estimated Asymptotic Variance, Empirical Sampling Variance of AR2, and Coverage Probability ................. ...............66............... 4 DI SCUS SSION ................. ................. 124........ .... Limitations ................. ...............126................ Further Research ................. ...............128................ Conclusion ................ ...............130................ APPENDIX A PROGRAM FOR COMPUTING MARDIA' S MULTIVARIATE MEASURES OF SKEWNESS AND KURTOSIS INT SAS .............. ...............135.... B DATA SIMULATION PROGRAM INT SAS ................ ...............138.............. LIST OF REFERENCES ................ ...............142................ BIOGRAPHICAL SKETCH ................. ...............146......... ...... LIST OF TABLES Table page 21 Study Design ................. ...............41........... .... 22 Mardia' s Multivariate Skewness, bl~k, for the Nonnormal Distributions ................... ........42 23 Mardia' s Multivariate Kurtosis, b2,k, for the Nonnormal Distributions. ................... .........43 31 Replication of Algina and Moulder' s Results for Multivariate Data and Two Predictors. ............. ...............72..... 32 Replication of Algina and Moulder' s Results for Multivariate Data and Six Predictors .............. ...............73.... 33 Replication of Algina and Moulder' s Results for Multivariate Data and Ten Predictors. ............. ...............74..... 34 Empirical Coverage Probabilities for Normal Predictors and Normal Errors. ..................75 35 Empirical Coverage Probabilities for Normal Predictors and Nonnormal Errors. ............82 36 Empirical Coverage Probabilities for Nonnormal Predictors and Normal Errors. ...........89 37 Empirical Coverage Probabilities for Predictors Nonnormal and Errors Nonnormal. ......96 38 Descriptive Statistics for Coverage Probability by Distributional Condition. .................1 04 39 Analysis of Variance, Estimated Mean Square Components, and Percentage of Total ..105 310 Descriptive Statistics for Coverage Probability by Distribution for the Predictors......... 105 311 Descriptive Statistics for Coverage Probability by Distribution for the Errors. ..............106 312 Coverage Probability by Ap2 and the Distribution for the Predictors .............. ...............106 313 Coverage Probability by Ap2 and the Distribution for the Errors. ............. ..................106 314 Coverage Probability by p~ and the Distribution for the Predictors. .............. ...............107 315 CoeaePoaiiyb and the Distribution for the Errors. ............. ...................107 36Coverage Probability by p and Ap2 for X Distributed Multivariate Normal. ................107 317 Coverage Probability by p~ and Ap2 for X Distributed Pseudotlo(g = 0, h = .058). 
......108 38 Coverage Probability by p and Ap2 for X Distributed Pseudog ................0 39Coverage Probability by p2 and Ap2 for X Distributed Pseudo( .10 30Coverage Probability by p2 and Ap2 for X Distributed Pseudoexponential. ..................1 09 321 Coverage Probability by Sample Size and Ap2 ................ ...............109............. 3 22 Coverage Probability by Sample Size and Number of Predictors. .........._... ............... 110 323 Analysis of Variance, Estimated Mean Square Components, and Percentage of Total ..1 10 41 Coverage Probability as a Function of n, Selected Values for p2 and Ap2, and Distribution for the Predictors. ............. ...............132.... LIST OF FIGURES Figure page 21 Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = 0, h = .058 overlaid with a normal curve with Cash= 0, ags = 1.097. ............. ...............44..... 22 Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = .301, h = .017 overlaid with a normal curve with Cash= .150, ags = 1.041. ............. ...............44..... 23 Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = .502, h = .048 overlaid with a normal curve with Cas = .249, gsh = 1.108. ............. ...............45..... 24 Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = .760, h = .098 overlaid with a normal curve with Cash= .378, gsh = 1.252 .............. ...............45.... 25 Comparison of Mardia' s multivariate skewness for the multivariate normal distribution to that of the distributions investigated. ................ ................. ..........46 26 Mardia's multivariate kurtosis for the multivariate normal distribution and the nonnormal distributions investigated. .............. ...............46.... 31 Mean estimated coverage probability by normality vs. nonnormality in the predictors, normality vs. nonnormality in the errors, and sample size. ................... ........111 32 Empirical coverage probability as a function of distributional condition and sample size. .............. .. ...............112......... ...... 33 Box plots of the distributions of coverage probability estimates by distribution for the predictors (n, = 14,700) ................. ...............113........... ... 34 Box plots of the distributions of coverage probability estimates by distribution for the errors (n, = 14,700)............... ...............114 35 Main effect of the squared semipartial correlation coefficient Ap2, and the effect of the interaction of Ap2 and X on coverage probability for Ap2 > 0 ................. ...............1 15 36 Main effect of the squared semipartial correlation coefficient Ap2, and the effect of the interaction of Ap2 and e on coverage probability for Ap2 > 0. .........._... ................115 37 Effect of the interaction between the size of the squared multiple correlation in the reduced model, pi, and the distribution for the predictors, X, on coverage probability. ................ ...............116................ 38 Interaction between the size of the squared multiple correlation in the reduced model, p and the distribution for the errors, e, and its relationship to coverage probability. p~ ................ ...............116......... ...... 39 Effect of the p~ x Ap2 interaction on coverage probability for Ap2 > 0. ................... ....... 
117 310 Effect of the X x p~ x Ap2 interaction on coverage probability for Ap2 > 0 ..................1 18 311 Interaction between sample size, n, and the population squared semipartial correlation, Ap2, and the impact on coverage probability for Ap2 > 0. ................... .........119 312 Effect of the interaction between sample size, n and number of predictors, k, on coverage probability ................. ...............119................ 313 Ratio of mean estimated asymptotic variance to the variance in AR2 (MEAVn/Var AR2) as a function of the distribution for the predictors, Ap2, and p, ...............120 314 Relationship between coverage probability and the ratio of mean estimated asymptotic variance to the empirical sampling variance of AR2 for Ap2 > 0 ................... 121 315 Relationship between coverage probability and the ratio of mean estimated asymptotic variance to the empirical sampling variance of AR2 for Ap2 > 0 for multivariate normal data (g = 0, h = 0). ................ ...............121........... . 316 Relationship between coverage probability and the ratio of mean estimated asymptotic variance to the empirical sampling variance of AR2 for Ap2 > 0 and X distributed pseudotlo (g = 0, h = .058)............... ...............122. 317 Relationship between coverage probability and the ratio of mean estimated asymptotic variance to the empirical sampling variance of AR2 for Ap2 > 0 and X distributed pseudoX, (g = .502, h = .048) ................. ...............122............ 318 Relationship between coverage probability and the ratio of mean estimated asymptotic variance to the empirical sampling variance of AR2 for Ap2 > 0 and X distributed pseudoX! (g = .502, h = .048)............... ...............123 319 Relationship between coverage probability and the ratio of mean estimated asymptotic variance to the empirical sampling variance of AR2 for Ap2 > 0 and X distributed pseudoexponential (g =.760, h = .098)............... ...............123 41 Coverage probability as a function of sample size and several combinations of p~ and Ap for predictors sampled from a normal distribution and (A) pseudotlo; (B) pseudVno7 \, (C)psudo(;, and (D) pseudoexponential distributions. ................... .......133 Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy THE IMPACT OF NONNORMALITY ON THE ASMPTOTIC CONFIDENCE INTERVAL FOR AN EFFECT SIZE MEASURE IN MULTIPLE REGRESSION By Lou Ann Mazula Cooper May 2007 Chair: M. David Miller Co chair: James Algina Major: Research and Evaluation Methodology The increase in the squared multiple correlation coefficient, AR2, associated with an individual predictor in a regression analysis is a measure commonly used to evaluate the importance of that variable in a multiple regression analysis. Previous research using multivariate normal data had shown that relatively large sample sizes are necessary for an acceptably accurate confidence interval for this regression effect size measure. The coverage probability that an asymptotic confidence interval contained the population squared semipartial correlation, Ap2, was investigated by simulating data from a range of nonnormal distributions such that (a) the predictors were nonnormal, (b) the error distribution was nonnormal, or (c) both predictors and errors were nonnormal. 
Additional factors manipulated included (a) the number of predictor variables, (b) the magnitude of the population squared multiple correlation coefficient in the original model, pi, (c) the magnitude of the population squared semipartial correlation, Ap2, and (d) sample size. This study showed that when nonnormality is introduced, empirical coverage probability was always less than the nominal confidence level, often dramatically so. The degree of nonnormality in the predictors was the most important factor influencing poor coverage probability. Although coverage probability increased as a function of sample size, when nonnormality in the predictors was substantial, the confidence interval is likely to be inaccurate no matter how large a sample size is used. With multivariate normal data, coverage probability improved as both p~ and Ap2 increased. When predictors are sampled from a nonnormal distribution, coverage probability tended to decrease as p~ and Ap2 increased and became even worse as the degree of nonnormality increased. It was further demonstrated that the asymptotic variance underestimates the sampling variance of AR2. This produces standard errors that are too small and results in a confidence interval that is too narrow. Reliance on this confidence interval as a measure of the strength of the effect size will lead us to underestimate the importance of an individual predictor to the regression. CHAPTER 1 INTRODUCTION There is a growing consensus that the tradition of null hypothesis significance testing (NHST) has led to overreliance on statistical significance in evaluating research results in the behavioral and social sciences. According to Cohen (1994), the biggest flaw in NHST is that it does not tell us what we want to know. A statistical test evaluates the probability of the sample results given the size of the sample assuming that the sample is drawn from a population where the null hypothesis is exactly true. In this framework, the outcome of a significance test is a dichotomous decision whether or not to reject the null hypothesis. As noted by Steiger and Fouladi (1997, p. 225), "this dichotomy is inherently dissatisfying to psychologists and educators, who frequently use the null hypothesis as a statement of no effect, and are more interested in knowing how big an effect is than whether it is (precisely) zero." Fundamentally, we are interested in determining how accurately the population effect has been estimated from the sample data and whether the observed effect size has practical significance. Statistical significance testing fails to provide the answers. Within the behavioral and social sciences, methodological recommendations for reporting research results have increasingly emphasized the importance of reporting confidence intervals (Cumming & Finch, 2001; Smithson, 2001), effect sizes (Olejnik & Algina, 2002; VachaHasse & Thompson, 2004), and confidence intervals for effect sizes (Cohen, 1990; Steiger & Fouladi, 1997; Thompson, 2002) to complement the results of hypothesis testing. Among the recommendations of the APA' s Task Force on Statistical Inference (Wilkinson & Task Force on Statistical Inference, 1999) was a proposal to move away from routine reliance on NHST as a primary means of analyzing data to exploring, summarizing and analyzing data using visual representations, effectsize measures, and confidence intervals. The most recent edition of The Publication Manual of the American Psychological Association (200 1, p. 
2526) states, "For the reader to fully understand the importance of your findings, it is almost always necessary to include some index of effect size or strength of relationship in your Results section... The general principle to be followed, however, is to provide the reader not only with information about statistical significance but also with enough information to assess the magnitude of the observed effect or relationship." The Manual also states that failure to report an effect size is a "defect" (p. 5). In 1996, Thompson recommended that American Educational Research Association (AERA) journals require that effect sizes be reported and interpreted in all studies. Ten years later the AERA Council recommends that statistical results should include an effect size measure as well as an indication of the uncertainty of that index of effect such as a confidence interval. The recently adopted Standards for Reporting on Empirical Social Science Research in AERA Publications (AERA, 2006) states that when quantitative methods are employed, "It is important to report the results of analyses that are critical for the interpretation of findings in ways that capture the magnitude as well as the significance of those results" (p. 37). Editors of over 20 APA and other social science journals have published guidelines explicitly requiring authors to report effect sizes (Ellis, 2000; Harris, 2003; Heldref Foundation, 1997; Hresko, 2000; McLean & Kaufman, 2000; Royer, 2000; Snyder, 2000; Thompson, 1994; VachaHaase, Nilsson, Rentz, Lance, & Thompson, 2000) and the Editor of Journal of Applied Psychology requires an author to provide an explanation when an effect size is not reported (Murphy, 1997). Although this is evidence that editorial practices have evolved somewhat, effect size reporting is unlikely to become the norm until we move from recommendation and encouragement to requirement (Thompson, 1996; 1999). Effect Sizes and Confidence Intervals A confidence interval establishes a range of parameter values that are reasonably consistent with the data observed from a sample. Because a confidence interval gives a best point estimate of a parameter of interest and an interval about it reflecting an estimate of likely error, it contains all the information to be found in a significance test and more (Cohen, 1994). The likely range of the parameter values provides researchers with a better understanding of their data. If the parameter estimated has meaningful units, a confidence interval can be used to make statistical inferences that provide information in the same metric. According to Cumming and Finch (2001), there are four main reasons for promoting the use of confidence intervals: (a) they are readily interpretable, (b) are linked to familiar statistical tests, (c) can encourage replication and metaanalytic thinking, and (d) give information about precision. The term effect size is broadly used to refer to any statistic that provides information that helps us judge the "practical significance" of the results of a study (Kirk, 1996). Cohen (1990) recommends that in addition to reporting an effect size, researchers should provide confidence intervals for effect sizes in order to gauge the possible range of values an effect size may assume. Absent a confidence interval, it is difficult to evaluate the accuracy of the effect size estimate. This, in turn, has implications for drawing meaningful conclusions. 
Unfortunately, despite the increasing demand for researchers to do so, reporting effect sizes and confidence intervals has yet to become commonplace in educational and psychological journals. VachaHasse, Nilsson, Rentz, Lance, and Thompson (2000) reviewed ten studies of effect size reporting in 23 journals, and found effect size(s) to be reported in roughly 10 to 50 percent of articles, notwithstanding the encouragement to do so from the fourth edition of the APA manual (1994). Empirical studies show that even when effect sizes are reported, interpretation is often given short shrift (Finch et al, 2002; Keselman et al., 1998). It is likely that the emphasis on null hypothesis significance testing in graduate courses in statistics and research methodology has contributed to a general lack of knowledge concerning confidence intervals. Moreover, techniques for computing confidence intervals are often neglected in popular statistics textbooks and are not easily available in the statistical software that is routinely employed by applied researchers in the social sciences (Smithson, 2001). Even if these factors were not operating, researchers might be reluctant to report confidence intervals because as Steiger and Fouladi (1997, p. 228) observe, "interval estimates are sometimes embarrassing." Reporting confidence intervals can highlight the level of imprecision of statistical estimates and exposes the trivial nature of many published studies. Smithson (2001, p. 614) notes, "Almost any literature review or metaanalysis in psychology would give a very different impression from that conveyed by NHST if we routinely 'reconstructed' CIs for multiple R2 and related GLM parameters." Asymptotic Confidence Intervals for Correlations A confidence interval establishes a range of hypothetical parameter values that cannot be ruled out given the observed sample data. The probability that the random interval includes, or covers, the true value of the parameter is the coverage probability of the interval. When the exact distribution of a statistic is known, the coverage is equal to the confidence level and the interval is said to be exact. A confidence interval is exact if it can be expected to contain a parameter' s true value 100(1 a)% of the time. Often exact intervals are not available or are difficult to calculate, and approximate intervals are used instead. Confidence intervals are based on the sampling distribution of a statistic. Due to the central limit theorem, when sample size is sufficiently large, the sampling distribution of statistic will become more symmetric and eventually appear nearly normal, even when the population itself is not normally distributed. Methods based on asymptotic theory use approximations to the sampling variance of a statistic. If only the asymptotic distribution of the statistic is known, we can obtain an approximate confidence interval, which may or may not be reasonably accurate in Einite samples. If the asymptotic confidence interval procedure is fully adequate, under repeated random sampling under identical conditions, a 95% confidence interval would contain the true population parameter 95% of the time. The accuracy of the approximation depends on whether there is a lack of bias and the degree to which the sampling distribution deviates from normality. If a statistic has no bias as an estimator of a parameter, its sampling distribution is centered at the true value of a parameter. 
An unbiased confidence interval is one where the probability of including any value other than the parameter's true value is less than or equal to 100(1 a)%. An interval is said to be conservative if the rate of coverage is greater than 100(1 a)%, the nominal confidence level. If the coverage probability is less than the nominal, the interval is said to be liberal. In general, conservative intervals are preferred over liberal ones (Smithson, 2003). Whenever a statistic based on asymptotic theory has poor finite sample properties, a confidence interval based on that statistic has poor coverage. Multiple regression analysis is a common statistical application frequently used to predict a dependent variable (outcome) from two or more independent variables (predictors). The interpretation of results would be enhanced by the reporting of confidence intervals and effect sizes. The sample statistic, R2, which estimates the proportion of variance in the dependent variable that is explained by the set of predictors, is commonly used to evaluate a multiple regression model. Published research studies frequently report R2 ValUeS without any evidence of the precision with which they have been estimated. It is unfortunate that a confidence interval for the population parameter, p2, iS not computed by most popular statistical software packages. Perhaps more significant, the topic is not even discussed in many applied or theoretical statistics texts. In addition to the amount of variance explained by a given multiple regression model, researchers are often interested in evaluating the contribution that one variable makes to the regression, over and above a set of other explanatory variables. The increase in R2, R2, when a variable (4) is added to a multiple regression model is a useful measure of the strength of the relationship between 4 and the dependent variable, Y, controlling for all other independent variables in the model. The change in R2 that we observe by including each new 4 in the regression equation is the squared semipartial correlation corresponding to a given regression coefficient. Typically, whether 4 has made a statistically significant contribution to predicting Y is tested by conducting a t or Ftest on that regression coefficient. But, the squared semipartial correlation itself is a useful measure of effect size and as recommended by Cohen (1990) and Thompson (2002), we should calculate a confidence interval to evaluate the precision with which it has been estimated and the range of likely values. Hedges and Olkin (1981) presented procedures for constructing a confidence interval for the squared semipartial correlation based on calculating the asymptotic covariance matrix for commonality components. Commonality analysis is a procedure by which the variance accounted for in the criterion is partitioned into two parts, the unique part and the common part. The unique part is attributable to the predictors individually. This is essentially the partial contribution of each predictor to the squared multiple correlation with the criterion. The second part is the common part, attributable to a combination of the predictors, which is the contribution to the multiple correlation with the criterion that all of the predictors in the combination share. Thus, commonality analysis is a way to measure the importance of variables through the use of partial correlations. Hedges and Olkin's results can be used to construct a confidence interval for AR2. 
Olkin and Finn (1995) derived explicit expressions for asymptotic (largesample) confidence intervals for functions of simple, partial, and multiple correlations. Since the focus of this study is on the squared semipartial correlation, the following discussion will be limited to Olkin and Finn's Model A (p. 157159). Model A is the special case for use in determining whether an additional variable provides an improvement in predicting the criterion. All of the procedures for comparing two sample correlation coefficients or two sample squared correlation coefficients described by Olkin and Finn have the same general form. Let rA and rB be the two sample correlations to be compared and pA and pB denote their corresponding population values. The largesample distributional form for the difference in two correlations is [(r~A B A B)]~ N 0,01) (1.1) where 01 = var(rA )+ Var(rg) 2 cov(rA B) (1.2) is the asymptotic variance of the difference of the two correlation coefficients; ol is dependent on the population correlations (Olkin & Finn, 1995, p. 156). When squared correlation coefficients are compared, the expressions in Equations 1.1 and 1.2 become [(] j ( \ p )]_ ~~C N(, (1.3) and 02 = var r rjp p ) 2cov( r ,rj). (1.4) Olkin and Finn present the general form for the largesample variance of functions of correlations o f (q, rk ',k, = B08' (1.5) specialized to a function of three correlations, rzi, rGk, and ryk where f( ) is a function of the correlations, contains a set of coefficients that depend on the function of the correlations to be evaluated. The variance of sample correlation rzi is var(rl) = (1 p )) / It (1.6) and the covariance of two correlations is cor rik zy k1 7k I k + 1 T 7k ~1 T 71 ]~k 2 (1.7) When two correlations have one variable in common, Equation 1.7 simplifies to cov(rry, rik) = (2p~ pyk ( zy Pk zy Pk Pk k 7 77.i (1.8) Largesample estimates are obtained by replacing the population parameters with values computed from sample data. Using the delta method, it can be shown that if f(r,,, rzk r7k) is a function of the three correlations, then the vector a consists of the partial derivatives a = d 3d (1 .9 ) In the simplest case, suppose that two variables X1 and X2 are USed to predict a third variable, Xo. In order to determine whether X2, makes a significant contribution to the regression, we are interested in the difference, 42 4 Here, we use a capital "R" to signify a multiple correlation rather than a bivariate correlation, denoted by a lower case "r"'. The symbol R412 denotes the squared multiple correlation between Xo, X1 and X2, which is a function of the correlations among the variables rol, r02, and rl2 given by xiaZ) = Pla2 1_ (1.10) 1r1 The squared correlation between Xo and X1 is represented by r,, Therefore, a confidence interval for IC, 4,] can be computed using Olkin and Finn's results for comparing two squared multiple correlation coefficients. In order to compare the population squared multiple correlations p and z w use, the estimates@ ,2 4, and 61, the estimated variance of the difference R(1Z) ;,where var(R4 \) = a~a'. (1.11I) The upper triangular of the symmetric population correlation matrix is P=1 p, 1 p,,1 (1.12) and the elements of the vector, a, are 2p, az = (poiP12 902), (1.13) IJ2 as = (zpoz P0112), (1.14) 2p az= (p1 p + p1 p2 DPO poPo 01902912?). 
(1.15) (1 p 2)2 The variancecovariance matrix for the sample correlations is i1 ~12 ~13233 vr ovq, ,)co:,,r 22 2 = vr~q ) cv~q r) .(1.16) The sample correlation matrix, R, estimates P and the sample values in R can be used to compute the elements of a. Because the calculation of analytic derivatives becomes increasingly complicated as the number of variables increases, Olkin and Finn illustrated their method for a multiple regression model with no more than two predictors. Graf and Alf (1999) expanded Olkin and Finn' s procedures to more general forms. Graf and Alf substituted numerical derivatives and offered two BASIC programs for calculating asymptotic confidence limits on the difference between two squared multiple correlations and the difference between two partial correlations. These programs, REDUXAB, to compare two multiple correlations, and REDUXCD, to compare two partial correlations, compute the # matrix, the partial derivatives in vector a, and a 95% confidence interval. Alf and Graf (1999) present a further simplification that does not employ numerical derivatives, is less computationally demanding, and produces results equivalent to the method described by Olkin and Finn. All computations are based on sample estimates. The problem is approached by representing a multiple correlation as a zeroorder correlation between the outcome variable and another single variable that is a weighted sum of the predictors. Alf and Graf defined rAB OB(1.17) r0A where the subscripts A and B denote weighted sums of two sets of predictors and reBis the correlation between the two composite variables. The confidence interval for the squared semipartial correlation coefficient is determined by the special case in which one set of predictors is a proper subset of the predictors in the other correlation. The two squared multiple correlations are computed using the same sample and the variables in the reduced model are a subset of the variables in the full model. Let pS and p' denote the population squared multiple correlation coefficients corresponding to Rf and R The subscript,J; refers to the "full" model with all predictors; the subscript, r, refers to the "reduced" model. The reduced model contains all predictors with the exception of the variable of interest. The asymptotic variance of R is VarR ) (1.18) The asymptotic variance of RY is 4p' (1p Var (R )= (1.19) The asymptotic covariance between Rf and R~ is 2 4p,p, [.5 2p,/ip,p,p,( 1 p p p /p >+p /p3 Cov(R,, Rf ) = (1.20) For the squared semnipartial correlation, let AR2 = R~ R. The asymptotic variance of AR2 is ,o = Var(R: )+Var(R)) 2Cov(R ,Rf ). (1.21) An asymptotically correct 100(1 a)% confidence interval for Ap2 = p pS is AR2 fZn/2 m (1.22) where zu/2 is the (1 a/2)th percentile of the standard normal distribution and ois the estimate of am. Inm+,~ prctc,the lagesmple variance is estimatedl byr substituti;ng R for p and Ry2 for p in Equations 1.18, 1.19, and 1.20. Equations 1.18 and 1.19 are problematic when the population squared multiple correlations are zero because the implication is that the sampling variance of R2 is also zero (Stuart, Ord, & 22 Arnold, 1999). Similarly, Equation 1.20 implies that the sampling covariance is zero if either population multiple correlation coefficient, p, or py is zero. If it were known that both p: and p, were zero and these values were used to construct a confidence interval, we would incorrectly conclude that the width of the resulting interval is zero. 
This computational problem is unlikely to occur in practice since we substitute sample multiple correlation coefficients for their population values and it is doubtful that either Rp or Rf will ever be exactly zero. The Alf and Graf formulas rely on asymptotic results. As such, they are only exactly correct for infinitely large samples. Thus, the accuracy of this approximation is heavily dependent on sample size. Alf and Graf (1999, p.74) concluded that "the correlation between two multiple correlations will be extremely high when the variables in one multiple correlation are a subset of the variables in another multiple correlation" and to ensure that coverage probability is equal to the nominal for the confidence interval on Ap2, "mOderately large to large" sample sizes are necessary. In the absence of more specific recommendations on sample sizes, Algina and Moulder (2001) conducted a simulation study to evaluate the empirical probability that the interval in Equation 1.22 includes Ap2 for 95% confidence interval. Algina and Moulder manipulated p , pi, the number of predictors in the model (k), and the sample size (n). When the data are distributed multivariate normal, results indicate that when Ap2 > 0, for sample sizes representative of those used in psychology (i.e., n < 600), coverage probabilities for a nominal 95% confidence interval were less than .95. This tends to be true even with relatively large sample sizes, i.e. between 600 and 1200. When p. p~ = 0 all coverage probabilities were at least .999 for all sample sizes studied. That is, when p2 does not increase when a predictor is added to a multiple regression model, the confidence interval is always too wide. Algina and Moulder (2001) posited two reasons for this defect in the confidence interval: (a) for all conditions in which p~ p~ = 0 the asymptotic variance overestimated the sampling variance and (b) the distribution of R Rf is positively skewed with a lower limit of 0. Because the confidence interval does not take this lower limit into account, even if the asymptotic variance was not overestimated, the lower limit would tend to be smaller than zero. Algina and Moulder (2001) showed that coverage probability tends to increase as p~ increases and as Ap2 increases and tends to decrease as the number of predictors increases. Further, when the interval does not contain Ap2, there is a tendency for the interval to be entirely below Ap2. Algina and Moulder conclude that using the Alf and Graf method to compute a confidence interval with an inadequate sample size will underestimate the strength of the relationship between the predictor and the outcome variable. The Impact of Nonnormality on Statistical Estimates Every procedure used to make statistical inferences is based on a set of core assumptions. If the assumptions are met, the test will perform as theorized. However, the results may be misleading when the assumptions are violated. The most common method for estimating regression coefficients is ordinary least squares (OLS). Ordinary least squares yields unbiased, efficient, and normally distributed estimates when the following conditions are met: (a) No measurement error; (2) the mean of the residuals is zero; (3) the residuals have constant variance; (4) the residuals are not intercorrelated; and (5) the residuals are normally distributed. In terms of power and accurate probability coverage, standard analysis of variance (ANOVA) and regression methods are affected by arbitrarily small departures from normality. 
As early as 1960, Tukey found that nonnormality could have a sizeable impact on power and measures of effect size could be misleading whenever means are being compared. By sampling from a contaminated normal distribution, Tukey showed that classical estimators are quite sensitive to distributions with heavy tails. The contaminated normal distribution is a mixture of two normal distributions, one of which has a large variance; the other distribution is standard normal. This results in a distribution with heavier tails than the Gaussian. Heavytailed distributions are characterized by unusually large or small values. Both heavytailed and skewed distributions are commonplace in applied work (Micceri, 1989). The presence of these characteristics in the data can "diminish the chances of detecting true associations among random variables and obtaining accurate confidence intervals for the parameters of interest" (Wilcox, 1998). After reviewing over 400 large data sets from educational and psychological research, Micceri (1989) found the maj ority did not follow univariate normal distributions. Approximately twothirds of ability measures and over 80% of the psychometric measures examined exhibited at least moderate asymmetry. For all data sets studied, 31% of the distributions showed skewness, yl, greater than .70 and 52% of psychometric measures demonstrated extreme to exponential asymmetry, yl > 2.00. Psychometric measures also exhibited heavier tails than ability measures. Kurtosis estimates ranged from 1.70 to 37.37. To put this in some perspective, the kurtosis for the double exponential distribution is 3.0. Breckler (1990) considered 72 articles in personality and social psychology journals and found that in analyses relying on the assumption of multivariate normality, only 19% of authors acknowledged this assumption and less than 10% considered whether it had been violated. Keselman and his colleagues (1998) reviewed articles in prominent educational and behavioral sciences research j ournals published during 1994 and 1995 and concluded (a) the maj ority of researchers conduct statistical analyses without considering the distributional assumptions of the tests they are using and therefore use analyses that are not robust; (b) researchers rarely reported effect sizes; and (c) researchers failed to perform power analyses in order to inform sample size deci sions. Statement of the Problem Methods for constructing confidence intervals based on asymptotic theory, such as those proposed by Olkin and Finn and Alf and Graf, have the potential to be very attractive to applied researchers. In the case of the equations presented by Alf and Graf, a hand calculator can be used to compute a confidence interval using the appropriate estimates from the results of data analysis obtained using standard statistical analysis software. However, as Algina and Moulder demonstrated, even under the best case scenario, where data are drawn from a multivariate normal distribution, the coverage probability of the asymptotic confidence interval for Ap2 is leSS than optimal, and when sample size is relatively small, e.g., < 200, would be considered unacceptable by most researchers. Since multivariate normal data is rare, the performance of Alf and Graf s procedure under "real world" conditions warrants further investigation. 
Purpose of the Study My dissertation will extend the work of Algina and Moulder (2001) and investigate the effect of the magnitude of population squared multiple correlation coefficients, p~ and p as well as the number of predictors, on the asymptotic confidence interval for Ap2 under a range of nonnormal conditions. The study will investigate coverage probability when (a) the predictor variables are not distributed multivariate normal; (b) the residuals are not normal; and (c) both predictors and residuals are nonnormal. Empirical coverage probabilities will be compared to nominal coverage probabilities over a wide range of sample sizes. My research will address the following questions: * How adequate is Alf and Graf s asymptotic confidence interval procedure for the squared semipartial correlation coefficient when used with sample sizes typically employed in research in education, psychology and the behavioral sciences under conditions of nonnormality? * Is there a minimum sample size for which this method meets established standards for accuracy over a wide range of situations such that recommendations can be made for the use of this procedure in reporting the results of applied research? CHAPTER 2 METHOD S In conducting a simulation study, especially when the goal is to inform the practice of researchers, it is important to ensure that the relevant factors are manipulated and that the levels of these factors reflect those routinely observed. To that end, six factors were manipulated in a factorial design using values typical of those observed in applied research: the number of predictors, the size of the squared multiple correlation in the reduced model, the size of the squared semipartial correlation, sample size, the distribution for the predictors, and the distribution for the error. These factors, and the levels of these factors, are detailed in Table 21. Study Design Number of predictors Algina, Moulder, and Moser (2002) examined sample size requirements for accurate estimation of squared semipartial correlation coefficients and found a modest effect on the distribution of AR2 due to the number of predictors included in the multiple regression model. Therefore, it follows that the sample size required for the confidence interval on Ap2 to be robust, i.e. to have the coverage probability equal to the nominal confidence level, will likewise depend on the number of predictors. The number of predictors in the initial set of predictors (k 1) ranged from 2 to 10 in increments of 2. This allowed investigation of the performance of the asymptotic confidence interval for a reasonable range of model sizes. Squared multiple correlations Algina, Moulder, and Moser also showed that the sampling distribution of AR2 Strongly depends on the population squared multiple correlations in both the full and reduced models, pf and p~ Based on a survey of all APA journal articles published in 1992 reporting multiple regression results, Jaccard and Wan (1995) found the median squared multiple correlation in these studies to be .30. The 75th percentile for squared multiple correlations was approximately .50. Based on these results, the values for the squared multiple correlation coefficients for the predictors in the initial set (p~ ) ranged from .00 to .60 in steps of .10 (7 levels of the factor). Cohen (1988) proposed, as a convention, that .02, .13, and .26 represent small, medium, and large effect sizes for squared semipartial correlations. 
By manipulating the squared multiple p~ + .30 in steps of .05, values for Ap2 that ranged from .00 to .30 in steps of .05 were produced (7 levels of the factor). The values for Ap2 are reasonably representative of likely effect sizes andthevalesselcte fo pandu p~ cover a comprehensive range of population squared multiple correlations for multiple regression models from p2 = .00 to p2 = .90. Sample size Jaccard and Wan also reported typical sample sizes for studies using regression analysis. The median sample size was 175; a sample size of 400 was at the 75th percentile. However, Algina and Moulder found with multivariate normal data empirical estimates of the coverage probability were smaller than .95 even with a sample size as large as 1200. Since we expected empirical coverage probabilities to be worse for nonnormal data, larger sample sizes than are usually observed in psychological research were included. Sample size ranged from 100 to 1000 in steps of 100 and from 1000 to 2000 in steps of 250 (14 levels of the factor). Distributions The distributions chosen for study represent varying levels of nonnormality and were selected to: (a) allow examination of the effects of skewness and kurtosis; and (b) be representative of the types of univariate nonnormality commonly encountered in applied research in education and psychology. The method described in Hoaglin (1985) and Martinez and Iglewicz (1984) using the gandh distributions was used to generate data that is characterized by varying degrees of skewness (yl) and kurtosis (y2). A gandh distribution is generated by a single transformation of the standard normal distribution and allows for asymmetry and a variety of tail weights. In the case of the standard normal distribution, g = h = 0 and yl = Y2 = 0. When g = 0, a distribution is symmetric. Distributions with positive skew typically have vi > 0 and in distributions with negative skew, yl < 0. The tails of the distribution become heavier as h increases in value. Longtailed distributions, such as the tdistribution, are characterized by Y2 > 0. Shorttailed distributions, such as the uniform distribution, have y2 < 0. The distributions selected for this study and their skewness and kurtosis are presented in Table 21. Distribution 1 is the multivariate normal case. Distribution 2 is symmetric and longtailed and has the same skew and kurtosis as a tdistribution with 10 degrees of freedom. Distribution 3 is both asymmetric and leptokurtotic with the same skew and kurtosis as a X2 distribution with 10 degrees of freedom. Since distributions 2 and 3 have similar kurtosis, but differ with respect to asymmetry, this allowed us to evaluate the relative importance of skewness and kurtosis on the coverage probability of the confidence interval. Distribution 4 has the same skew and kurtosis as X Distribution 5 is extremely skewed with heavy tails and has skew and kurtosis equal to the exponential distribution. Nonnormality was manipulated in either (a) the predictors, (b) the residuals, or (c) in both the predictors and the residuals. The error distribution is a univariate distribution. The empirical cumulative distribution functions for the four nonnormal distributions selected for this study, generated by sampling 1,000,000 random variates from each gandh distribution, are depicted in Figures 21 to 24. In addition, the deviation from normality is shown by including the normal curve with mean equal tO CEgh and standard deviation equal to agh for each distribution. 
The population mean and standard deviation for each gandh distribution were calculated using the formulas given by Hoaglin (1985, p. 502503). In multiple regression, the predictors are multivariate. Multivariate normality, however, is a stronger assumption than univariate normality. Univariate normality of each of the variables is necessary, but not sufficient, and a nonnormal multivariate distribution can have normal marginals. Therefore, a preliminary step in evaluating multivariate normality is to study the reasonableness of assuming marginal normality for the observations on each of the variables (Gnanadesikan, 1997). In addition to graphical approaches, a common method for evaluating the normality of univariate observations is by means of skewness and kurtosis coefficients, J and b2. = => 3/2 (2.1) and x, x_4 b2 n =1 2 (2.2) These are sample estimates of the population skewness and kurtosis parameters JSand p2, respectively. When the population is normal, Jp= 0 and p2 = 3. If P2 < 3, there is negative kurtosis; if p2 > 3, there is positive kurtosis. Population skewness and kurtosis are also commonly described by yl and y2 (Hoaglin, 1985) where Yi = J~(2.3) and Yz = Pt 3. (2.4) Mardia (1970) proposed indices for assessing multivariate normality that are generalizations of the univariate skewness and kurtosis measures and b2. Let X1,...,X, be a random sample from a population with mean vector Ct and covariance matrix C. The sample mean vector and covariance matrix are denoted by X and S, respectively. The skewness and kurtosis, Bl,k and B2,k, for ai multivariate population, as defined by Mardia, are 1,k = Eit (x, p) Ex, @ (2.5) and 2,k = E (x, ' p)c x E x, (2.6) According to Rencher (1995), since third order central moments for the multivariate normal distribution are zero, Bl,k = 0 when X ~ N(CL,1). Furthermore, it can be shown that for multivariate normal X 2,k, = k(k + 2) (2.7) where k is equal to the number of variables. Sample estimates of f 1k and B2,k arT giVen by b1,k 2 X )S X X(28 and b2,k [ (X~, X'S '(X, X) (2.9) Multivariate skewness and kurtosis were calculated by simulating 1,000,000 random variates sampled from each gandh distribution for each level of k under investigation and then applying equations 2.8 and 2.9 to obtain estimates of Mardia's multivariate measures, bl~k and b2,k. The SAS program used to estimate these indices is included in Appendix A. Mardia's multivariate skewness estimates are presented in Table 22 and Table 23 presents Mardia's multivariate kurtosis estimates. Figures 25 and 26 are graphic presentations that compare the coefficients for the nonnormal distributions to the values expected under multivariate normality for the number of predictors under investigation in this study. The design for the study is a 5 (data generating distribution for the predictors) x 5 (data generating distribution for the errors) x 7 (pi) x 7 (Ap2) x 5 (k) x 14 (n) fully crossed factorial. This resulted in a total of 85,750 unique conditions. Each combination of factors was replicated 10,000 times and for each replication, a 95% confidence interval was constructed using the Alf and Graf method. Background and Theoretical Justification for the Simulation Method The multiple regression model can be written as Y, = Po + PX,, + P2X2, +...+ PkX,, + E,. (2.10) In the standardized multiple regression model, in the population with k > 1 predictors and one criterion, all variables are standardized to mean zero and unit variance so an intercept is not needed. 
This model is Y, = P,X,I + P7XZI +...+ BkX,, + s, = C P,X, + El (2.11) where p, is the population standardized regression coefficient associated with the ith predictor; er; ~N(0,G2); i = 1, ... k; j = 1, ... n. Assuming that we are operating on the population and that the model is correct, predicted values are given by 2, = [,X,, (2.12) and the squared correlation between the observed (Y) and the predicted (Y) values is denoted as p~ In the sample, this is estimated by R2. When the predictors are uncorrelated, the sum of the squared correlations is equal to the variation accounted for by all the predictors i7=1 =PI (.3 A simplifying transformation (Browne, 1975) holds that for any set of predictors that has a squared multiple correlation, p2, with Y, it is always possible to transform the predictors so that (a) the transformed predictors are mutually uncorrelated, (b) have unit variance, and (c) the regression coefficients are equal to any set of values such that :0 = oP (2.14) The quantity Ap2 is a function of the elements of the covariance matrix for the predictors and the criterion. In order to illustrate the application of Browne's results to the current simulation, let xt denote the vector of standardized predictor variables, with k x k correlation matrix P and k x 1 vector of correlation coefficients p between the predictors and the criterion variable, y. The squared multiple correlation coefficient for all k variables is denoted by p and for the first k 1 variables is denoted by pl We seek a transformation of the predictors to x such that the new variables are standardized and uncorrelated, and the regression coefficients relating y to the variables in xt are P, = 0 for the first k 2 variables and Bk1 = and Bk = ,pI for the last two variables, respectively. The transformation can be constructed in two steps. It is well known that the variables in the vector i = Axt, where A is ak x k matrix, will be uncorrelated dependent on an appropriate choice of A. For example, A can be selected as the inverse of the left Cholesky factor of R (i.e., R = A'A wNhere AT indicates the inverse of A') The vector of correlation coefficients between the transformed predictors and the criterion is Ap and because the transformed variables are uncorrelated, P = Ap is the vector of regression coefficients relating the criterion variable to the variables in 8E Because the criterion is a standardized variable and % Ax; is no~nsingular transforrmatio~n, is; u~nchngedl by the transformation, andl~ p = p We next seek a transformation x = T'i, where T' is k x k, such that the variables in x are standardized and uncorrelated and so that the regression coefficients for the variables in x are p, = 0 for the first k 2 variables and Pk1= Jand Pk = for the last two variables, respectively. Wer,, see+ tht 'R = p. Because the variables in ii are standardized and uncorrelated, the matrix T' must be orthogonal so that the variables in x will be standardized and uncorrelated. With an orthogonal transformation, P = TP. The matrix T can be constructed as follows (M. W. Browne, personal communication with J. Algina, 1999): Let u = p Then, T = I 2u (u'u) u' is an orthogonal matrix, and P = Tp. 
Because 2u'P 2u'P ~B)Tui hevralsi P'P = P'P = 1, and P TP it follows that TP = P .Thsiftevralsn uu uu xt are transformed to x = T'Axt, with T' and A defined as above, the transformed variables will be uncorrelated and standardized and the regression coefficients will be P, = 0 for the first k 2 variables, and Sk1 = and Bk = J for the last two variables, respectively. Because the variables are standardized and uncorrelated, the squared multiple correlation coefficient for the first k 1 variables will be 10, = p: and the squared multiple correlation coefficient for all k variables will be 1972 __ ?, 2 _P ? __ 2 The implication of Browne' s result is that if the predictors are correlated, they can be transformed so that (a) the predictors are uncorrelated, (b) the predictive power of k 1 of the predictors is channeled into one of the transformed predictors, (c) the predictive power of the remaining predictor is channeled into another of the transformed predictors, and (d) the remaining k 2 predictors have no predictive power (Algina, Moulder, & Moser, 2002). Rather than simulating various covariance structures for the predictors, the application of Browne's results allows us to operate with uncorrelated predictors since it is always possible to transform these variables to correlated variables. This dramatically reduces the number of conditions in the simulation to a more manageable number. In addition, when the focus of the study is squared multiple correlation coefficients, there is no loss of generality if the means of the predictors and the criterion are rescaled to zero. Therefore, in the simulation, (a) the independent variables are mutually uncorrelated with mean zero and variance one; (b) the criterion has mean zero and variance one; and (c) the regression coefficients are pt = pr, P2 P k = 0, Bk = p p The squared multiple corre~lationl is p for variables X1 to Xk and p, for variables Xi to Xk1. Given these conditions, the covariance between Y and X1 is pr, the covariances of Y with the remaining independent variables, X2 to Xk1 arT all ZeoO, the covariance between Y and Xk is jp p and the covariance for any pair ofXvariables is zero. Data Simulation The data were simulated using the randomnumber generating function in SAS Version 9.13. Computations were performed using SAS Interactive Matrix Language (PROC IML). Data management and follow up analyses were also conducted using SAS. Normal random deviates were generated for the n x k data matrix of predictors, X, using the SAS RANNOR function. All nk scores were generated to be statistically independent. In order to generate data from a gandh distribution, standard unit normal variables, Z,, were transformed via the following equation exp (gZ 1g hZ (.5 X = ex 2.5 g 2 when both g and h were nonzero. When g is zero, equation 2. 15 is reduced to X = Z, exp 2 (2.16) The gandh distributed variables were then standardized by subtracting the population mean and dividing by the population standard deviation. If g = 0, Clgh = 0. When g > 0, the population mean is 82 exp 1 2(1 h) Rgbh =~ i (2.17) and for h I V/2 the population standard deviation is 2 ex 2( 2h ~ [ 2( h '2 2g2 2 2 [~2(1 h) 2( h (1h 0 gh2 2 1h (2.1 8) In a similar manner, an n x 1 vector of standard normal random variables was generated. All n scores were generated to be statistically independent. 
The results of this vector were multiplied by p The result is a vector of residuals, e, with mean zero and variance equal to 1 p2, These steps ensured that the dependent variable, y, has mean of zero and variance equal to 1.0. As detailed above, applying Browne' s results, the k x 1 vector of regression coefficients was constructed such that elements 1 to k 2 are zero and the next two elements are pr and J p, respectively. The sample covariance matrix, S, was calculated from the data according to the model y = XP + e. Let Rf be the correlation matrix for the full set of k predictor variables, Rf+ be the k + 1 correlation matrix for all variables (including the criterion), R, be the correlation matrix for the first k 1 predictors, and Rr + be the correlation matrix for the first k 1 predictors and the criterion variable. All four correlation matrices can then be calculated from S. The squared multiple correlation coefficients for the full and reduced models are given by 2det (Rf, R, = 1 (2.19) det (Rf) and 2'= det (Ri,) R, = 1(2.20) det (R,) where det () represents the determinant of the matrix (Mulaik, 1972). For each of the 10,000 replications of each distributional condition, the asymptotic confidence interval was calculated using the method described by Alf and Graf (1999). Data Analysis Coverage probability, the probability that a confidence interval contains the parameter for which the confidence interval was constructed, was used to evaluate the adequacy of the confidence intervals. Coverage probability was estimated as the proportion of the 10,000 replications in which the confidence interval contained the population squared semipartial correlation, Ap2. In Order to investigate bias, the probability that the confidence interval was wholly below Ap2 and the probability the confidence interval was entirely above Ap2 were also estimated. To evaluate the conditions under which a hypothesis test is insensitive to assumption violations, Bradley (1978; 1980) proposed three criteria. Given the nominal Type I error rate, a, a test is robust if the empirical estimate of a falls within the interval at f /s. A liberal criteria is established when s = 2 and the limits are given by a f .025 = [.025, .075]. Using s = 5, the interval for a moderate criterion is [.04, .06]. To establish a strict criterion, s = 10 and the interval is [.045, .055]. If these recommendations are adapted and applied to criteria for a confidence interval with a nominal coverage probability of .95, the criterion intervals become (a) [.925, .975]; (b) [.94, .96]; and (c) [.945, .955]. Although there is no universally accepted standard by which procedures are considered robust or not, Lix and Keselman (1998) suggest that applied researchers should be comfortable working with a procedure that controls Type I error within the bounds established by Bradley's liberal criterion, as long as the procedure also limits the error rate across a wide range of assumption violations. Applying this recommendation to the procedure for constructing an asymptotic confidence interval means that in order to be controlled, the coverage probability should fall within the interval [.925, .975]. We used this interval for judging the adequacy of the confidence intervals. Because there are those who would consider this standard to be too lenient, confidence intervals were also evaluated according to the more stringent criterion level of .94 to .96. Table 21. Study Design Number of predictors, k (5 levels) 1. k= 2 2. k= 4 3. k= 6 4. k= 8 5. 
k= 10 Size of the squared multiple correlation coefficient for the reduced model (7 levels) 1. p = .00 2. p = .10 3. p = .20 4. p = .30 5. p = .40 6. p = .50 7. p = .60 Size of the squared semipartial correlation coefficient (7 levels) 1. Ap2 =.00 2. Ap2 =.05 3. Ap2= .10 4. Ap2= .15 5. Ap2 =.20 6. Ap2 =.25 7. Ap2= .30 Sample size, n (14 levels) 1. n =100 2. n =200 3. n =300 4. n =400 5. n =500 6. n =600 7. n =700 8. n =800 9. n =900 10. n =1000 11. n =1250 12. n =1500 13. n =1750 14. n = 2000 Table 22. Mardia's Multivariate Skewness, bl~k, for the Nonnormal Distributions. Distribution g =0 g =.301 g =.502 g =.760 h= .058 h= .017 h= .048 h= .098 k bl~k Interval bl,k Interval bl,k Interval bl,k Intervall 2 .01 (.03, .05) 1.55 (15,15) 3.90 (3.81, 3.98) 7.87 (7.71, 8.03) 4 .02 (.05, .08) 3.15 (3.09, 3.22) 7.65 (7.52, 7.78) 15.80 (15.57, 16.02) 6 .00 (.09, .08) 4.71 (4.62, 4.80) 11.50 (11.33,11.66) 23.74 (23.47, 24.02) 8 .01(.2,.0 6.23 (6. 11, 6.35) 15.47 (15.26,15.68) 31.64 (31.32, 31.96) 10 .01(.3,.4 7.71 (7.57, 7.86) 19.43 (19.18,19.68) 39.61 (39.23, 39.98) Table 21 Continued Distribution for the predictor variables, X (5 levels) 1. g= 0, h = 0 2. g=0, h =.058 3. g= .301, h= .017 4. g =.502, h =.048 5. g =.760, h =.098 CL= 0, a CL= 0, a: L= .150, CL = .249, CL = .378, = 1, yi = .00, y2 = .00 = 1.097, yi= .00, y2= 1.00 o = 1.041, yi = .89, y2= 1.20 a = 1.108, yi = 1.41, y2 = 3.00 S= 1.252, yr 2.00, y2 = 6.00 Distribution for the residuals, 1. g=0, h =0 2. g=0, h =.058 3. g =.301, h =.017 4. g =.502, h =.048 5. g =.760, h =.098 e (5 levels) CL= 0, a = 1, yi=.00, y2= .00 CL= 0, = 1.097, yl= .00, y2= 1.00 C = .150, CL = .249, CL = .378, 0 =1.041, yr S= 1.108, yr 0 =1.252, yr .89, 72 1.41, 72 2.00, 72 1.20 = 3.00 = 6.00 1This interval represents .025 and .975 percentiles of the 1,000,000 replications. g= .502 h= .048 Interval' (13.79,13.89) (35.60,35.77) (65.39,64.62) (103.26,103.57) (149.02,149.40) g= .760 h= .098 Interval (19.88, 20.02) (47.90, 48.13) (83.93, 84.23) (127.87, 128.24) (179.80, 180.25) Interval' (10.32, 10.38) (28.70, 28.80) (55.02, 55.16) (89.34, 89.53) (131.67, 131.91) b2,k 10.05 28.07 54.07 88.10 130.12 b2,k 10.35 28.75 55.09 89.44 131.79 b2,k 13.84 35.69 65.50 103.41 149.21 b2,k 19.95 48.01 84.08 128.05 180.03 1This interval represents .025 and .975 percentiles of the 1,000,000 replications. Table 23. Mardia' s Multivariate Kurtosis, b2,k, for the Nonnormal Distributions. Di stributi on I g= .301 h= .017 g= 0 h= .058 Interval' (10.03,10.08) (28.02,28.11) (54.00,53.13) (88.01,88.19) (130.01,130.23) OC ', 6 5 4 3 2 9 7875 675 5625 45 3375 225 1125 0 1 125 2 25 3 375 4 5 5 625 675 7 875 9 Figure 21. Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = 0, h = .058 overlaid with a normal curve with CEgh = 0, Ggh = 1.097. O 451 4 5 6 Figure 22. Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = .301, h = .017 overlaid with a normal curve with CEgh = .150, Ggh = 1.041. 6 5 4 3 2 1 0 1 2 3 4 5 67 6 5 4 3 2 1 0 1 2 3 4 5 6 Figure 23. Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = .502, h = .048 overlaid with a normal curve with Egh = .249, Ggh = 1.108. 
Figure 24 .Plot of the empirical cumulative distribution function for a univariate nonnormal distribution where g = .760, h = .098 overlaid with a normal curve with CEgh = .378, agh = 1.252 0g9 = 0, h = 0 (Multivariate Normal) * g =0, h =.058 ~ g = .301, h = .017 ,40  g = .502, h = .048 9g =.760, h = 098 ~30 e 20 2 4 6 8 10 Number of Predictors, k Figure 25. Comparison of Mardia's multivariate skewness for the multivariate normal distribution to that of the distributions investigated. 200 mg = 0, h = 0 (Multivariate Normal) 18og9=09,h =.058 A~ g = .301, h = .017 160 1 rg =.502, h =.048 4g =.760, h =.098 120 10 2 81 NubrofPeicos Figre26Madasmliaitkutssfrtemliaitnomldsrbtoanth Fiue2.Mriasmltvrae utsnon hemlivraenormal distributions invetigted CHAPTER 3 RESULTS Replication of Results for Multivariate Normal Data Prior to conducting the study, data were simulated for the multivariate normal case in order to replicate key findings reported by Algina and Moulder (2001). Replication served two additional purposes. It verified that the simulation program was functioning properly and that reasonably close agreement was achieved between coverage probabilities estimated with 10,000 replications and coverage probabilities estimates reported by Algina and Moulder based on 50,000 replications. Results are compared for k = 2, 6, and 10 in Tables 31, 32, and 33. The shaded columns are the results from this simulation; the unshaded columns reproduce tabled results reported by Algina and Moulder (p. 638). In these tables, as well as subsequent tables reporting coverage probabilities, italics indicate that the estimated coverage probability falls within the interval from .925 to .975. Results in bold represent estimated coverage probabilities between .94 and .96. As Olkin and Finn warned, and Algina and Moulder demonstrated, this procedure does not work at all when the population squared semipartial correlation is zero. Regardless of sample size, number of predictors, or the value of the population squared multiple correlation in the reduced model, the coverage probability when Ap2 is zero is always too large, i.e.,~ j)>.999. This is because if p: = pi om = 0 even though the actual sampling variance of R2 is not zero. Because of this defect in the asymptotic confidence interval, Alf and Graf recommended that researchers perform a hypothesis test of the significance of the corresponding regression coefficient and apply the asymptotic confidence interval procedure only when the null hypothesis is rejected. Given the coverage probability results when Ap2 = 0, although coverage probabilities are reported in Tables 31 to 33, they are not included in the assessment of agreement that follows as doing so would tend to exaggerate the degree of correspondence between the two sets of estimates. Comparing the coverage probability estimates generated by the two studies, for Ap2 > 0, 79% were within f .003 and 94% were within f .005. Of the 504 comparisons, 73 (15%) showed no difference to 3 decimal places. When coverage probabilities differed, 208 (41%) estimates from the current study were greater and 223 (44%) were smaller than coverage probabilities reported by Algina and Moulder. For k = 2, reported in Table 31, 90% of the estimates from the two simulations were within f .003 and only 5 differences were greater than f .005. For 15 of the 168 cases, estimated coverage probability would have been categorized differently with respect to Bradley's criteria for robustness, [.925,.975] or [.94,.96]. 
These discrepancies were evenly split with 8 estimates from Algina and Moulder' s study falling in the more stringent interval, that is, closer to the nominal level, and 7 values of p estimated in this study satisfied the more stringent criterion. Both sets of estimates when k = 2 showed that empirical coverage probability approached the nominal as sample size increased and as the magnitude of the squared semipartial correlation increased. The confidence interval was least accurate for the smallest sample size, n = 175, for all levels of pl when Ap2 = .05. There was good coverage probability, i.e. at least .94, for n > 425 and Ap2 > .10. Depending on the tolerance one has for the difference between coverage probability and the nominal confidence level, coverage probability could be considered marginally adequate, that is, at least .925, for all sample sizes and Ap2 > .10. The agreement between the two replications was somewhat worse as the number of predictors increased. As shown in Tables 32 and 33, for both k = 6 and k = 10, 127 (76%) comparisons were within f .003. There were 8 (5%) differences greater than f .005 with 6 predictors and 16 (9%) differences exceeded + .005 with 10 predictors. Although for k = 6 the large differences favored the results reported by Algina and Moulder (6 vs. 2), for k = 10 a large difference was just as likely to favor the estimates from the current simulation where "favoring" is defined as an estimated coverage probability that is closer in value to the nominal. In Algina and Moulder' s data, there was also a tendency for the estimated coverage probability to meet the more stringent evaluation criterion when there was mismatch in categorization. For k = 6 and Ap2 > .10, all coverage probabilities were greater than .925 for n > 425, and all but one were greater than .94 for n = 600. At k = 10, although all coverage probabilities met the liberal criterion at n = 600, there was no level of Ap2 for which all were greater than .94. Overall, agreement between the two studies was quite good and therefore, the current study was conducted by simulating 10,000 replications of each condition. Simulation Proper In this simulation, 857,500,000 independent confidence intervals were calculated. Given there were 10,000 replications of each combination of X, e, n, k, p and Ap2, COVerage probability was computed as the proportion of times the constructed confidence interval contained Ap2, the population squared semipartial correlation. In this manner, 85,750 coverage probabilities were estimated. Since the distribution from which predictors were sampled and the distribution for the residuals were both manipulated, this allowed us to examine four distinct situations that might be encountered when analyzing data using multiple regression: (a) normal X, normal e, (b) normal X, nonnormal e, (c) nonnormal X, normal e, and (d) nonnormal X, nonnormal e. Average empirical coverage probability estimates for these four scenarios, as a function of sample size, are depicted in Figure 31. Results for all values of k, p and Ap2, for selected sample sizes, are reported in Tables 34 to 37. Estimates for conditions where Ap2 = 0 were omitted since all were either .999 or 1.000, rounded to three decimal places. Table 34 presents results for normal predictors with normal errors. If we consider Bradley's liberal interval, .925 to .975, as evidence for robustness, for k = 2, 4, 6, 8, and 10, the percentages of nonrobust values at n = 200 were 9%, 12%, 14%, 38%, and 71%, respectively. 
At n = 400, the percentages of empirical values that were not robust decreased dramatically to 0%, 0%, 0%, 2%, and 2%. All estimated coverage probabilities were robust at n > 600. At the largest sample sizes reported, n = 1500 and n = 2000, all exceeded .94 and met the more stringent standard for robustness When predictors were normal with nonnormal residuals, reported in Table 35, the percentage of nonrobust coverage probabilities increased. For k = 2, 4, 6, 8, and 10 and n = 200, the percentages of nonrobust values were 3 1%, 38%, 50%, 76%, and 100%, respectively. As expected, the number of nonrobust coverage probabilities decreased as n grew larger. This decrease was notable between n = 200 and n = 400 (7%, 7%, 5%, 14%, and 19%) and less so for n = 600 (5%, 2%, 2%, 5%, 7%) and n = 800 (2%, 2%, 2%, 5%, 5%). For n > 1000, all coverage probabilities were robust except when pi = 0 and Ap2 = .30. Table 36 shows coverage probability estimates when the predictors were nonnormal and the distribution of the residuals was normal. At n = 200, there were no robust empirical estimates at any level of k. For n = 400, the percentages of estimates outside Bradley's liberal interval were 64%, 62%, 76%, 95%, and 100% for k = 2, 4, 6, 8 and 10, respectively. For n = 600, the percentage of coverage probabilities that were nonrobust for these values of k were 50%, 55%, 60%, 64%, and 69%. For sample sizes greater than 600, improvement, as measured by a decrease in nonrobust values, was much more gradual. For k = 2, 4, 6, 8, and 10, and n = 800, the percentages were 50%, 50%, 50%, 57%, and 60%; for n = 1000, 48%, 50%, 50%, 48%, and 55%; and for n = 1500, 45%, 45%, 50%, 48%, and 52%. At the largest sample size, n = 2000, at least 45% of empirical coverage probabilities at every level of k failed to meet even the liberal standard for robustness. The coverage probabilities contained in Table 37 were estimated for the case where both predictors and errors were nonnormal. For n I 400, there were only 6 estimates greater than .925. Of these, 5 were observed for n = 400 and k = 2, and 1 at n = 400 and k = 4. For n = 600, the percentages of coverage probabilities that were nonrobust were 74%, 71%, 81%, 86%, and 88% for k = 2, 4, 6, 8, and 10, respectively. Similar to what was observed with nonnormal X and normal e, the improvement in coverage probabilities is minor for n > 600 such that when n = 2000, nonrobust estimates were 71% 71% 74% 74% and 76% for k = 2, 4, 6, 8, and 10, respectively. For all four scenarios, coverage probability tended to decrease as more predictors were included in the model, particularly with smaller sample sizes. Coverage improved as sample size increased. Figure 31 suggests that nonnormality in the predictors was more detrimental to the adequacy of the confidence interval than was a nonnormal error distribution. A modest decline in coverage probability was observed between normal X, normal e and normal X, nonnormal e, but there was a considerable drop off in performance when X was nonnormal even when the errors were normally distributed. In addition, coverage probability was examined by distributional condition. A distributional condition was defined by the combination of the distribution for the predictors and the distribution for the errors. There were 25 distributional conditions included in this study. For clarity and ease of presentation, the gandh distributions from which data were generated willl be referred to as:. 
(a)j pseuudoto for g= h= .058; (b) pseudvnoI fo g= .301, h= .017;) (c) pseudoX! for g = .502, h = .048; and (d) pseudoexponential for g = .760, h = .098. The descriptive statistics reported in Table 38 were based on 2940 coverage probability estimates per distributional condition, excluding those cases where Ap2 = 0. Average coverage probability was closest to the nominal confidence level when both X and e were normally distributed. The average coverage probability was smallest for the most seriously nonnormal case, both X and e sampled from the pseudoexponential distribution. Within each level of X, mean coverage probability decreased as the error distribution exhibited increasing nonnormality. A similar pattern was observed for the median. In the extremes, the median for multivariate normal data was .944. In contrast, for the condition where both X and e were distributed pseudoexponential, half the estimated coverage probabilities were less than .868. For all distributional conditions in which X was distributed pseudoexponential, at each error distribution, at least 50% of the estimated coverage probabilities were below .90. The variability in coverage probability increased with greater skewness and kurtosis in the data. When X was distributed pseudoexponential, regardless of the distribution for e, the standard deviation was over three times that observed for the multivariate normal case. Although the maximum value did not differ a great deal as a function of distributional condition, the minimum was much lower and the range was wider for conditions with greater nonnormality. Also included in Table 38 is an examination of the robustness of the confidence interval as a function of distributional condition at n = 600 and n = 2000. Applying the liberal criterion, .925 to .975, all coverage probabilities were robust at n = 600 for multivariate normal data and when predictors were normally distributed and the distribution for the errors was either pseudotlo or pseudoX, ,. There was no other distributional condition for which coverage probability was adequate for the entire range of values for k, p~ and Ap2, eVen for the largest sample size investigated, n = 2000. For the most extreme distributional condition simulated, both X and e drawn from a pseudoexponential distribution, 100% of the coverage probabilities were nonrobust at n = 600. There was only slight improvement at n = 2000 where 90% of the estimates were not robust. Although it could be argued that data like this is unlikely to occur in practice, with an error distribution with severe nonnormality, i.e. pseudoexponential, there was poor coverage even when the predictors were multivariate normal. At n = 2000, 25.2% of the estimates were not robust. Furthermore, when using multiple regression, applied researchers are much more likely to be concerned about the error distribution since violation of this assumption influences the power and accuracy of hypothesis tests. Researchers may not even investigate the multivariate skewness and kurtosis for the predictors. With a normal error distribution, the percentages of nonrobust estimates at n = 2000 for predictors distributed pseudotlo, pseudo7 ,, pseudoX), and pseudoexponential, were roughly 7%, 10%, 49%, and 96%, respectively. Although results are reported for only two sample sizes, for all distributional conditions and sample sizes investigated, when an estimated coverage probability was outside of either criterion interval, it was without exception, too small. 
Figure 32 illustrates the relationship between coverage probability, distributional condition, and sample size. The best coverage probability, over the full range of sample sizes investigated, was observed for the condition in which both X and e were normal. However, at best, average coverage probability never reached the nominal confidence level, .95. There was a slight degradation in performance for conditions where X was normal and the nonnormal errors were distributed pseudotlo or pseudoX 0 Although it is a bit hard to discern because of the overlap for conditions where X was distributed pseudotlo and pseudoX 0,, results were similar for normal X with e distributed pseudoX0, and X sampled from pseudotlo with normal error. That is, coverage probability estimates were similar when the predictors were normal with markedly nonnormal errors and when predictors were sampled from a pseudotlo distribution wit a orml ero ditriutin.Similarly, the condition in which X was distributed pseudoX, with normal error exhibits coverage probability comparable to the conditions where predictors were moderately nonnormal, sampled from pseudotlo and pseudoX ,with errors that were extremely skewed and kurtoti c (p seudoexponenti al). Thereafter, as the distribution for X became increasingly nonnormal, coverage probability decreased and was least adequate when the predictors were sampled from a pseudoexponential distribution regardless of the distribution for the residuals. Within each condition for X, coverage probability decreased in the same systematic way as a function of the nonnormality in the error distribution such that coverage probability was best with normally distributed errors and worst when the errors were distributed pseudoexponential. Analysis of Variance and Mean Square Components Given the sheer volume of data collected in this study, analysis of variance (ANOVA) was used to identify the experimental factors that were important in determining the estimated coverage probability, p. Factorial ANOVA assumes that multiple factors contribute to the variance in the data. The total variance is partitioned into main effects corresponding to each factor, the interactions among them, and random error. The factors manipulated in the study were all treated as betweensubj ects effects in a fullycrossed ANOVA model that consisted of 6 main effects and 56 interactions. Since the procedure for calculating the confidence interval is clearly inappropriate when Ap2 = 0, the 12,250 coverage probabilities calculated for this value were not included in this analysis. It was felt this provided a more accurate reflection of the data. ANOVA analyses and variance partitioning of coverage probabilities were therefore based on N= 73,500. The mean squares, Fstatistics, and pvalues associated with each effect in the full model were computed using the ANOVA procedure in SAS. These results are reported in Table 39. The combination of a large number of effects and a very large sample size ensured that there were many statistically significant effects, including higherorder interactions. In all, 34 of the 62 effects estimated were significant at p < .0001. Because statistical significance is in large part a function of sample size, a statistically significant effect is not very informative when the sample size is very large. 
To better understand the relative impact of these effects on coverage probability, it was necessary to obtain a measure of influence to determine which effects were associated with a meaningful proportion of the variance. The term variance component is used in the context of analysis of variance with random effects and denotes the estimate of the amount of variance that can be attributed to each effect. In the current context, the levels of each factor were purposively selected. Because effects are fixed and not random, the more accurate term is mean square component. The ANOVA method for estimating mean square components equates mean squares to their expected values, EM~S, and solves for the mean square components in those expectations. The estimated mean square component for each main effect and interaction was computed using the general formula M~S~(o) MS(Residual) 62= where a is the effect of interest and j is the product of the number of levels for each factor not involved in a (Myers & Well, 2003). In this case, the residual mean square, .0000079, includes the mean square for the sixway interaction and the mean square for error. For example, the mean square component for X is given by S7.5162 .0000079 7.516192 6 .0005113. r(5)(5)(7)(6)(14) 14700 Since these are simultaneous linear equations with as many unknowns as there are equations, they have unique solutions and mean square components are estimated noniteratively. An unfortunate characteristic of ANOVA estimators is that they can yield negative estimates even though, by definition, they are nonnegative. Negative components were set equal to zero before calculating the proportion of variance that could be attributed to each effect. The components were then summed and the ratio of each mean square component to the sum was used as a measure of influence. Effects significant at a = .0001 that accounted for at least .5% of the variance are reported in Table 39. The distribution for the predictors, X, was responsible for 44.5 1% of the total variance in coverage probability. The variance component associated with X was nearly four times greater than that of any other main effect. The main effects of Ap2 and pl were comparatively less important factors in determining average coverage probability, accounting for 10. 12% and 3.26% of the total variance, respectively. Effects of Ap2 and pl were moderated by their interaction. This twoway interaction accounted for an additional 1.60% of the variance. The mean square component associated with sample size, n, accounted for 9.41% of the total variability in p The main effect of e accounted for only 3.3 8% of the variance indicating that the error distribution had a much smaller impact on the coverage probability of the confidence interval than did the distribution of the predictors. The number of predictor variables, k, had very little impact on p, accounting for only .69% of the variability. The critical importance of nonnormality in the predictors was further substantiated by the fact that interactions involving X explained an additional 22.24% of the variance in p The variance components for the twoway interactions between X and Ap2 and X and p~ were associated with 1 1.3 8% and 8.82% of the total variance, respectively. The threeway interaction of these factors, X x p~ x Ap2, mOderated the three main effects and the twoway interactions and accounted for an additional 2.04% of the total variance in coverage probability. 
The main effectsall ofX p adp, and the interactions of these three factors explained 81.7% of the total variance in coverage probabilities. The effect of e was also moderated, although to a lesser extent, by the twoway interactions between e and Ap2 and e and p~ These interaction effects were responsible for .70% and 1.28%, respectively. The threeway interaction, e x p~ x Ap2, explained .54% of the total variance. The main effects of e, Ap2, and p~ and their interactions accounted for 5.85% of the variance in p . This was further evidence that although a nonnormal error distribution had some effect on the coverage of the confidence interval it was not nearly as important as nonnormality in the predictors. Sample size interacted with the number of predictors, k, and the size of the squared semipartial correlation coefficient, Ap2. The n x k and n x Ap2 interaction effects each explained approximately 1% of the total variance. Important effects involving sample size were associated with 1 1.3 5% of the variability in coverage probability. Thus, it appears that sample size was also more important than nonnormality in the error distribution in determining the adequacy of the confidence interval. The effects reported in Table 39 accounted for an estimated 99.6% of the total variance in coverage probability. The following sections describe the important factors influencing coverage probability as identified by the mean square components analysis. The Influence of Nonnormality on Coverage Probability Nonnormal predictors When coverage probability was averaged over all other factors, Table 310 shows the adequacy of the confidence interval, as measured by coverage probability, worsened as the distribution for the predictors became increasingly nonnormal. When X was distributed multivariate normal, avera e coverage probability was .935 SD = .014 When the set of predictors was made up of variables sampled from a pseudotlo distribution, that is symmetric, but more peaked and heavier tailed than the normal distribution, average coverage probability dropped to .925 (S = .015). A similar estimate of average coverage probability, p =.923 (S = .015), was obtained when the ex lanator variables were sam led from a poulation distributed as pseudoX,2. Because these distributions had similar values for both univariate and multivariate kurtosis, but differed with respect to skewness, this result seems to suggest, at least for moderate nonnormality, that skewness may be less important than kurtosis in determining the adequacy of the confidence interval procedure. When predictors were sampled from a pseudoX; population distribution, the average coverage probability was .906 (SD = .022). The average coverage probability when predictors were sampled from a distribution that has the same skewness and kurtosis as the ex onential distribution was .877 (S = .037). The median was also related to the degree of nonnormality present and declined in a manner similar to the mean. In addition, the range of coverage probability values estimated in the simulation expanded as the degree of nonnormality became more extreme. Figure 33 presents boxplots that describe the distribution of coverage probability estimates as a function of the distribution for X. We see that all distributions for p are skewed to the right, but the distribution was flatter, more spread out, and longertailed as the degree of skewness and kurtosis in the distribution for the predictors increased. 
Nonnormal error distribution Table 311 shows descriptive statistics for the main effect of the distribution for error. The means, by error distribution, also declined as a function of the degree of nonnormality present. The range between the largest mean, .919 for normally distributed errors, and the smallest, .903 for errors distributed pseudoexponential, was much smaller than observed in Table 310 for the main effect of the distribution for the predictors. There was also less variability in the median, ranging from .929 for normal errors to .911 for pseudoexponential errors. The range of coverage probabilities and the standard deviations were essentially equal suggesting that there was little difference in the variability of coverage probability estimates as a function of the error distribution. The boxplots depicted in Figure 34 supported this contention. The Impact of Squared Multiple Correlations on Coverage Probability Figure 35 depicts the relationship between coverage probability and the magnitude of the population squared semipartial correlation. Averaged over all other factors, coverage probability tended to decrease as the size of the squared semipartial correlation increased. Figure 35 also shows that the effect of Ap2 OH COVerage probability varied depending on the distribution for the predictors hence the significant interaction between X and Ap2. Figure 35 and Table 312 show the relationship between Ap2 and coverage probability within each distribution for X. Under normality there was actually a slight increase in p from Ap2 = .05 to Ap2 = .10, the smallest values investigated. This increase essentially leveled off thereafter. Stable coverage probability between Ap2 = .05 and Ap2 = .10 was observed for pseudotlo and pseudoX,, distributions. In both distributions, p showed a steady, but modest, decline for Ap2 > .10. The decline in p when X was distributed pseudoX! was modest between Ap2 = .05 ( p = .924) and Ap2 = .10 ( p = .919). The rate of change was much steeper for Ap2 > .10 such that p decreased to .886 at Ap2 = .30. For X sampled from the pseudoexponential distribution, coverage probability was essentially a linear function of Ap2 that declined sharply over the range of Ap2 fTOm p = .915 to p = .840. There was also a significant interaction, depicted in Figure 36, between e and Ap2 However, as reported in Table 39, this effect while statistically significant, accounted for little of the variance in coverage probability. A comparison of Figure 36 with Figure 35 shows a similar pattern for the relationship between the error distribution and Ap2 with less extreme variation in the rate at which coverage probability declined. When the error distribution was normal, pseudotlo, or pseudoX,,, p declined slightly between Ap2 = .05 and Ap2 = .10 with a steady, gradual decrease for Ap2 > .10. The decline in coverage probability between Ap2 = .05 and Ap2 = .60 was more nearly linear, with a steeper slope, when the error distribution was sampled from either a pseudoX! or pseudoexponential distribution. The decrease in coverage probability was most dramatic when errors were distributed pseudoexponential. At Ap2 = .05, p = .923 and at Ap2 = .60, coverage probability dropped to p = .883. Coverage probabilities, as a function of e and Ap2, are reported in Table 313. Figure 37 depicts the relationship between p~ and coverage probability. Coverage probability stayed relatively constant between p = .00 and p~ = .40 and then decreased for pi = .50 and pi = .60. 
The interaction between X and pl is also demonstrated in Figure 37. When the predictors were distributed multivariate normal, coverage probability was a linear function of pi, gradually increasing from .928 at p~ = .00 to .940 at p~ = .60. For X distributed pseudotlo and pseudoX,2,, there was a minor increase in coverage probability, roughly .92 to .95, between pi = .00 and pi = .40. Coverage probability was smaller for pl > .50. As the distribution for X demonstrated greater skewness and kurtosis, the coverage probability function tended to be more curvilinear. For X distributed pseudoX), coverage probability was relatively consistent between pi = .00 and pi = .30 and decreased steadily for p~ .40 to a minimum of .887 at p = .60. When X was sampled from a pseudoexponential distribution, coverage probability started out at p= .896 atpi = .00 and decreased between p = .00 and pi = .30 to .885. The decline in p was at a much faster rate thereafter such that when p~ = .60, p = .837. As Table 314 shows, the differences in coverage probabilities, as a function of the degree of nonnormality in X, had their smallest range of values at p~ = .00 (.928 to .896) and the range was maximized at p = .60 (.940 to .837). While the behavior of p over the levels of Ap2, as a function of the error distribution, was comparable to the relationship between Ap2 and the distribution for the predictors, the e x p~ interaction, presented in Figure 38, shows this was not the case for pl In contrast, the differences in j>, as a function of nonnormality in the error distribution, were greatest at p = .00. Coverage probability, reported in Table 315, ranged from .927 for normal errors to .897 when errors were pseudoexponential. By the time p~ = .60, coverage probability had essentially converged and was approximately .90 regardless of the degree of nonnormality in the error distribution. Furthermore, for normal errors, maximum coverage probability, .927, occurred for p~ = .00. For e distributed pseudotlo and pseudoX,2,, the largest coverage probability, .922, occurred at p~ = .10. For pseudoX, ,, the largest average coverage probability, .915, was observed for p~ = .20 and p~ = .30. When the error distribution was sampled from a pseudoexponential population distribution, the largest coverage probability, .907, was observed at p = .30 and p~ = .40. These results suggest that the X x p~ and e x p~ interactions might have a counterbalancing effect. However, the e x p~ interaction, although statistically significant, explained a modest 1.3% of the total variance in coverage probability while the X x p~ interaction accounted for 9.5% of the total variance. The impact of the interaction between Ap2 and p~ on coverage probability is shown in Figure 39. Although there was a tendency for estimated coverage probability to be further from the nominal as p~ increased, this was not the case for all values of Ap2. When Ap2 = .05, there was an increasing trend in coverage probability over the range of p values. For Ap2 > .05, coverage probability was relatively stable between p~ = .00 and p~ =.30, but then decreased substantially from p~ =.30 to p~ =.60. However, the relationship of p to p~ and Ap2 varied depending on the distribution for X. Figure 310 shows the effect of the threeway interaction between X, p~ and Ap2 OH COVerage probability. To aid in the description and interpretation of effects, coverage probabilities, as a function of p~ and Ap2, for each level of X are reported in Tables 316 through 3 20. 
For the multivariate normal case, coverage probability tended to be worse when Ap2 = .05 and for all levels of Ap2 COVerage probability increased as p~ increased. The plots of coverage probability as a function of Ap2 and p~ for pseudotlo and pseudo'7 lokrmrabysmlrtoecte and have the same pattern of results described for the twoway interaction of p~ and Ap2, albeit over a narrower range of values. Coverage probability increased over the levels of p~ when Ap2 = .05, but for Ap2 > .05, coverage probability tended to increase from p~ = .00, reached a maximum at p~ = .30, and decreased thereafter. Although coverage probability was best for Ap2 = .05 and p~ = .60, for all other levels of Ap2, COVerage probability was lowest at p~ = .60. Coverage probability was consistent at approximately .925 over the full range for p ~for Ap2 = .05 when the predictors were distributed pseudoX For Ap2 > .05, coverage probability was stable between p~ = .00 and p~ = .20, but showed a decline between p~ = .30 and p = .60. The rate of decline was faster for larger values of Ap2 For X sampled from the pseudoexponential distribution, coverage probability decreased as p~ increased for all levels of Ap2. The rate of decline varied according to the value of Ap2 with steeper slopes associated with larger values of Ap2. The drop in coverage probability was minor for Ap2 = .05, where p = .917 at p~ = .00, falling to p =.907 at p~ = .60. However, when Ap2 = .30, at p = .00 coverage probability was .870 and decreased markedly to~ p .777 at p~ = .60. Thus, when nonnormality in the predictors was extreme, the importance of the magnitude of the squared multiple correlations, Ap2 and p~ was critical for determining the adequacy of the confidence interval procedure. Although no condition, on average, demonstrated acceptable coverage over the entire range of factors manipulated in this study, Figure 310 illustrates how inaccurate the asymptotic confidence interval can be under conditions that could occur in practice. The Impact of Sample Size on Coverage Probability As seen in Figures 31 and 32, regardless of the distribution for the predictors or the distribution for error, coverage probability increased rapidly between n = 100 and n = 400. The average coverage probability at n = 100 was .882 increasing to .912 at n = 400. The rate of increase, from .914 to .917, was considerably slower between n = 500 and n = 800. Furthermore, it appears that there was little to be gained by increasing the size of the sample beyond n = 1000 with respect to the adequacy of the confidence interval. Coverage probability is increasing so slowly between n = 1000 and n = 2000 (from .918 to .920) that it is likely that sample sizes well in excess of 2000 would be required to ensure the robustness of the confidence interval over a wide range of nonnormal conditions. Evidence to support this contention was evaluated by estimating coverage probabilities for X and e distributed pseudoexponential; n = 5000; p~ = .00, .30, and .60; and Ap2 = .05, .10, .15, .20, .25, and .30. Results indicated that even with an extremely large sample size, when nonnormality is severe, coverage probability remained inadequate. Only 7 of 54 coverage probability estimates exceeded .925 and consequently, 87% were nonrobust. Six of the robust estimates were observed for p~ = .00 or p~ =.30 and Ap2 = .05 for all three levels of k. The remaining robust estimate occurred for k = 10, p~ = .00, and Ap2 = .10. 
Figure 311 shows that the effect of sample size was not the same at every level of Ap2 The interaction between n and Ap2 was due to the fact that the effect of Ap2 was smaller when the sample size was smaller than the effect of Ap2 when the sample size is larger. In addition, the average values for p are not is the same order as a function of Ap2 foT Smaller sample sizes. For example, at n = 100, although coverage probability was clearly inadequate for all levels of Ap2, it was worse for the smallest value, Ap2 = .05, as well as the largest values, Ap2 = .25 and Ap2=.30. Coverage probability improved noticeably for Ap2 = .05 at n = 200 although it was still not as large as it was for Ap2 = .10. By n = 300 coverage probability for Ap2 = .05 and Ap2 = .10 were equal. At n > 400, coverage probability was a function of Ap2 grOwing worse as Ap2 increased. Coverage probabilities, as a function of sample size and Ap2, are presented in Table 321. As shown in Figure 312, the rate of increase in coverage probability as a function of sample size depended on the number of predictors in the model. For the smaller sample sizes, most notably at n = 100, although average coverage probability was clearly inadequate, it was considerably worse when there were more predictors in the model. As the sample size increased, the difference between coverage probabilities as a function of the number of predictors became progressively smaller. Table 322 shows that at sample sizes greater than 1000, the difference in coverage probability was minimal and it appears that the number of predictors exerted very little influence on coverage probability. Probability Above and Below the Confidence Interval When the confidence interval did not contain the population squared semipartial correlation coefficient, the probability that the confidence interval was below Ap2 and the probability that the confidence interval was above Ap2 were also estimated. When Ap2 = 0, average coverage probability was .9998. Only 18,754 of the 122,250,000 confidence intervals constructed did not contain the population parameter. There were only 4 instances in which the interval was wholly below Ap2; 18,750 confidence intervals were wholly above Ap2 When the increase in the squared multiple correlation was zero the confidence interval was too conservative, but for all other values of Ap2, the confidence intervals tended to be too liberal. For the 73,500 conditions where Ap2 > .05, the probability that the confidence interval was wholly below Ap2 was twice the probability that the confidence interval was entirely above Ap2 (.664 vs. .336). The confidence interval is biased in the sense that there is a systematic error that causes the estimated confidence limits to regularly miss the population parameter in the same direction. The tendency to underestimate Ap2 Occurs because the estimated asymptotic standard error declines as AR2 declines. As a result, when AR2 B2 there is a tendency for the interval to be completely below Ap2 (Algina & Moulder, 2001). The Relationship between Estimated Asymptotic Variance, Empirical Sampling Variance of AR2, and Coverage Probability As previously noted, all coverage probabilities were at least .998 for Ap2 = 0. This result indicates that when a predictor was added to a multiple regression model and there was no increase in p2, the confidence interval was always too wide. As previously noted, there were two reasons for this shortcoming in the confidence interval. 
The distribution of AR2 is skewed to the right and since the increase in R2 cannot be less than zero it has a lower limit of zero. Because the confidence interval formula does not recognize this lower limit, when the population value was Ap2 = 0, the confidence interval tended to have a lower limit less than zero. The second basis for the problem, identified by Algina and Moulder, is that the asymptotic variance overestimates the sampling variance ofA~R2. This was verified in the current study by calculating for each combination of X, e, n, k, pi and Ap2 (a) the mean estimated asymptotic variance over the 10,000 replications and (b) the empirical sampling variance of AR2. For all conditions where Ap2 = 0, the ratio of the average value of (a) to (b), denoted as MEAV/VarAR2, ranged from 1.27 to 2.18 with a mean of 1.95 and a median of 1.96. The ratio, MEAV/VarAR2, was also evaluated for Ap2 > 0. ANOVA and mean square components analyses were conducted for MEAV/VarAR2 as the outcome variable. As was the case with coverage probability, due to the large sample size, only 24 of 62 effects failed to demonstrate significance at p < .0001. Effects significant at a = .0001 that accounted for at least .5 % of the variance are reported in Table 323. These effects accounted for 97.8% of the variability in the variance ratio, MEAV/VarAR2. The distribution for the predictors explained 51.78% of the variance in the ratio. An additional 21.06% was attributable to the size of the squared semipartial correlation coefficient. Less important for accurate estimation of the variance were the main effects of e and p~ These effects explained 6.26% and 2.67% of the total variance, respectively. As observed for coverage probability, a substantial proportion of the variance, 89.5%, was accounted for by the main effects of X, Ap2, and p and the interaction of these effects: 6.81% was associated with the X x Ap2 interaction, 6.45% was associated with the X x p~ interaction, and the threeway interaction, X x py x Ap2, explained a modest .72%. The interaction between the distribution for the errors and p~ accounted for an additional 2.02%. Although sample size plays a role in determining the coverage probability, it was not important in determining the ratio since the effect of n was included in calculating the variance. Figure 313 illustrates how MEAV/VarAR2 VarieS as a function of the distribution for the predictors, p and Ap2. This figure corresponds to Figure 310, describing coverage probability as a function of the X x p~ x Ap2 interaction, and shows a similar pattern. For the multivariate normal case, variance ratios got further from 1.0 as Ap2 increased for p~ = 0. As p~ increased, variance ratios improved for all values of Ap2. This improvement was greater for larger values of Ap2. By the time p~ = .60, there was no difference in the MEAV/VarAR2 ratio as a function ofp~ The behavior of the variance ratio helps to explain the fact that for normal data coverage probability increases with both Ap2 and p . Fr Xl dJistibuted pseudotio andC pseudonlO, the pattern for MEAV/VarAR2 as a function of Ap2 and p~ was very similar. This was also observed for coverage probability. At all values of p the variance ratio got smaller as Ap2 increased. For Ap2 = .05, the MEAV/VarAR2 ratio was consistent across the range for p~ There was a slight curvilinear relationship in the Ap2 X p~ plOts for Ap2 > .15 such that variance estimation improved slightly from p~ = .00 to p~ = .30 and then declined from p~ = .30 to p~ = .60. 
Therefore, variance estimates were best for all values of Ap2 at p~ = .30 and the most serious variance underestimation occurred when both p~ and Ap2 were largest. When X was distributed pseudoX! or pseudoexponential the difference between the variance ratio at the smallest value of Ap2 and the largest was greater than for the previous distibuion atp =.00 and this difference became progressively larger as p increased. For the most extreme degree of nonnormality, although MEAV/VarAR2 was never greater than .90, when Ap2 represents a large effect size, the accuracy of the estimated variance was particularly poor over the range of pl values. The scatterplot in Figure 314 is further evidence of a strong positive association between coverage probability and MEAV/VarAR2. The correlation between coverage probability and the variance ratio was r = .91. As the asymptotic variance more accurately estimated the actual sampling variance of M2, COVerage probability approached the nominal confidence level. When coverage probabilities were poor, the estimated asymptotic variance could be less than half that of the empirical sampling variance of M2. The strength of the relationship between coverage probability and MEAV/VarMR2 depends on the distribution for the predictors, as shown in Figures 315 to 319. For multivariate normal data, presented in Figure 315, the mean variance ratio was .946 (SD = .06). The median was .963 with a range from .666 to 1.050. Approximately 10% of the estimates were greater than 1.0 indicating that the asymptotic variance, albeit rarely, sometimes overestimated the empirical sampling variance. As the plot shows, however, a variance ratio near 1.0 was not a guarantee that the coverage probability will necessarily be close to .95 and coverage probability was as low as .85. Not surprisingly, the correlation between coverage probability and MEAV/VarMR2 was lower than that for the full data set, r = .62. Although the correlation between coverage probability and MEAV/VarMR2 was similar to that for normal data, r = .63, when the predictors were sampled from the pseudotlo distribution, less than 1% of the variance ratios were above 1 (Figure 316). The mean variance ratio was .881 (SD=.065), the median was .886, and the range was .631 to 1.044. The estimates from the pseudoX,2 distribution again demonstrate close similarity to the pseudotlo distribution. Although the scatterplot in Figure 317 is somewhat less dispersed reflected in a slightly higher correlation, r = .68, the descriptive statistics show close agreement. The mean variance ratio was .870 (SD=.068), the median was .874, and the range was .626 to 1.044. Again, less than 1% of the ratio estimates were greater than 1.0. As multivariate skewness and kurtosis increased, the correlation between coverage probability and MEAV/VarAR2 became much stronger. For the pseudoX0 distribution r = .86. As Figure 318 demonstrates, the scatterplot was more compact and more spread out. The range of values was wider, 548 to 1.004, due to a lower minimum value. There was only 1 variance ratio greater than 1. The mean was .785 (SD = .100) and the median was .788. Figure 319 shows the strongest relationship (r = .91) between coverage probability and MEAV/VarAR2 for the pseudoexponential distribution. With skewness and kurtosis corresponding to the exponential distribution, the scatterplot was tightly concentrated and substantially more elongated. None of the variance ratios were greater than 1 and over 25% were less than .60. 
Variance ratios ranged from a low of .381 to a high of .972. The mean was .881 (SD=. 132) and the median was .673. In summary, for multivariate normal data, MEAV/VarAR2 was best when Ap2 was small, but as p~ increased, variance was more accurately estimated and by the time p~ = .60, MEAV/VarAR2 was not dependent on Ap2. This pattern of results did not hold when nonnormality was introduced in the predictors. For moderate nonnormality, MEAV/VarAR2 tended to be more dependent on the value of Ap2 than on the magnitude ofpi When nonnormality was more extreme, variance estimation became more inaccurate as both Ap2 and p~ increased. Thus, when a variable was added to a multiple regression model that already explained a sizeable proportion of the variation in the outcome, for example, p = .60, the effect size associated with that variable was large, for example, Ap2 = .30, and the data were not multivariate normal, using Alf and Graf s formula underestimated the variance. Furthermore, this study showed that when nonnormality was severe, the estimated asymptotic variance could be less than half that indicated by the sampling distribution of AR2. In practice, this is likely to produce standard errors that are too small resulting in a confidence interval that is too narrow. Reliance on this confidence interval as a measure of the strength of the effect size will lead us to underestimate the importance of an individual predictor to the regression. Table 31. Replication of Algina and Moulder' s Results for Multivariate Data and Two Predictors. n pi0.00 0.05 0.10 0.15 0.20 0.25 0.30 175 0.00 1.000 1.000 0.907 0.904 0.925 0.925 0.931 0.930 0.936 0.937 0.938 0.939 0.940 0.934 0.10 0.20 0.30 0.40 0.50 0.60 300 0.00 0.10 0.20 0.30 0.40 0.50 0.60 425 0.00 0.10 0.20 0.30 0.40 0.50 0.60 600 0.00 0.10 0.20 0.30 0.40 0.50 0 60 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1 000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1 000 0.911 0.913 0.919 0.922 0.923 0. 931 0.923 0. 928 0. 929 0. 930 0. 934 0. 935 0. 938 0. 931 0. 933 0. 933 0. 935 0. 936 0. 939 0.941 0. 935 0. 937 0. 939 0. 939 0.940 0.943 0.943 0.910 0.912 0.919 0.923 0. 929 0. 928 0.923 0. 927 0. 931 0. 935 0. 929 0. 936 0. 937 0. 933 0. 934 0.940 0. 937 0. 935 0. 939 0.941 0. 935 0. 936 0.941 0. 939 0.942 0.941 0.942 0. 926 0. 930 0. 931 0. 934 0. 936 0. 939 0. 937 0. 935 0. 938 0.940 0.941 0.941 0.943 0. 938 0.941 0.942 0.942 0.943 0.944 0.944 0.943 0.945 0.945 0.945 0.945 0.946 0.946 0.922 0. 929 0. 928 0. 934 0. 939 0. 937 0. 932 0. 939 0. 936 0. 939 0.943 0.943 0.942 0.942 0.940 0.942 0.944 0.941 0.943 0.947 0.941 0.942 0.947 0.943 0.945 0.948 0.945 0. 933 0. 935 0. 935 0.940 0. 939 0.941 0. 938 0.942 0.940 0.941 0.944 0.943 0.944 0.942 0.944 0.945 0.945 0.945 0.946 0.946 0.945 0.945 0.946 0.945 0.947 0.946 0.946 0. 932 0. 933 0.941 0. 939 0. 938 0.943 0. 936 0.940 0.946 0.942 0.942 0.944 0.943 0.943 0.944 0.946 0.941 0.946 0.943 0.945 0.944 0.944 0.942 0.948 0.946 0.943 0.945 0. 938 0. 938 0. 938 0. 939 0.941 0.941 0.943 0.944 0.942 0.944 0.945 0.946 0.944 0.944 0.944 0.945 0.946 0.944 0.947 0.945 0.945 0.948 0.946 0.945 0.949 0.949 0.949 0. 935 0. 938 0. 939 0.941 0.942 0.940 0.943 0.946 0.941 0.944 0.947 0.948 0.942 0.948 0.946 0.944 0.943 0.949 0.946 0.947 0.946 0.944 0.944 0.942 0.948 0.942 0.948 0.940 0.942 0. 
Table 3-1. Replication of Algina and Moulder's Results for Multivariate Normal Data and Two Predictors.
[Estimated coverage probabilities of the nominal 95% confidence interval for Δρ², tabulated by sample size (n = 175, 300, 425, 600), ρ² (.00 to .60 in steps of .10), and Δρ² (.00 to .30 in steps of .05). Shaded columns are results from this study; unshaded columns are the results reported by Algina and Moulder (2001, pp. 638-640).]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.

Table 3-2. Replication of Algina and Moulder's Results for Multivariate Normal Data and Six Predictors.
[Layout as in Table 3-1.]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.

Table 3-3. Replication of Algina and Moulder's Results for Multivariate Normal Data and Ten Predictors.
[Layout as in Table 3-1.]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.

Table 3-4. Empirical Coverage Probabilities for Normal Predictors and Normal Errors.
[Estimated coverage probabilities of the nominal 95% confidence interval for Δρ², tabulated by number of predictors (k = 2, 4, 6, 8, 10), sample size (n = 200, 400, 600, 800, 1000, 1500, 2000), ρ² (.00 to .60 in steps of .10), and Δρ² (.05 to .30 in steps of .05).]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.

Table 3-5. Empirical Coverage Probabilities for Normal Predictors and Nonnormal Errors.
[Layout as in Table 3-4.]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.

Table 3-6. Empirical Coverage Probabilities for Nonnormal Predictors and Normal Errors.
[Layout as in Table 3-4.]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.

Table 3-7. Empirical Coverage Probabilities for Nonnormal Predictors and Nonnormal Errors.
[Layout as in Table 3-4.]
Note: Bold results are estimated coverage probabilities between .94 and .96; italicized results are estimated coverage probabilities between .925 and .975.